paper_id: kim-etal-2023-factkg
title: FactKG: Fact Verification via Reasoning on Knowledge Graphs
url: https://aclanthology.org/2023.acl-long.895
abstract: In real-world applications, knowledge graphs (KG) are widely used in various domains (e.g. medical applications and dialogue agents). However, for fact verification, KGs have not been adequately utilized as a knowledge source. KGs can be a valuable knowledge source in fact verification due to their reliability and broad applicability. A KG consists of nodes and edges, which makes it clear how concepts are linked together, allowing machines to reason over chains of topics. However, there are many challenges in understanding how these machine-readable concepts map to information in text. To enable the community to better use KGs, we introduce a new dataset, FactKG: Fact Verification via Reasoning on Knowledge Graphs. It consists of 108k natural language claims with five types of reasoning: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FactKG contains various linguistic patterns, including colloquial style claims as well as written style claims to increase practicality. Lastly, we develop a baseline approach and analyze FactKG over these reasoning types. We believe FactKG can advance both reliability and practicality in KG-based fact verification.

ocr_markdown:
# FactKG: Fact Verification via Reasoning on Knowledge Graphs

Jiho Kim¹, Sungjin Park¹, Yeonsu Kwon¹, Yohan Jo²*, James Thorne¹, Edward Choi¹
¹KAIST ²Amazon
{jiho.kim, zxznm, yeonsu.k, thorne, edwardchoi}@kaist.ac.kr, jyoha@amazon.com

## Abstract

![0_image_0.png](0_image_0.png)

In real-world applications, knowledge graphs (KG) are widely used in various domains (e.g. medical applications and dialogue agents). However, for fact verification, KGs have not been adequately utilized as a knowledge source. KGs can be a valuable knowledge source in fact verification due to their reliability and broad applicability. A KG consists of nodes and edges, which makes it clear how concepts are linked together, allowing machines to reason over chains of topics. However, there are many challenges in understanding how these machine-readable concepts map to information in text. To enable the community to better use KGs, we introduce a new dataset, FACTKG: Fact Verification via Reasoning on Knowledge Graphs. It consists of 108k natural language claims with five types of reasoning: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FACTKG contains various linguistic patterns, including colloquial style claims as well as written style claims to increase practicality. Lastly, we develop a baseline approach and analyze FACTKG over these reasoning types. We believe FACTKG can advance both reliability and practicality in KG-based fact verification.¹

## 1 Introduction

The widespread risk of misinformation has increased the demand for fact-checking, that is, judging whether a claim is true or false based on evidence. Accordingly, recent works on fact verification have been developed with various sources of evidence, such as text (Thorne et al., 2018; Augenstein et al., 2019; Jiang et al., 2020; Schuster et al., 2021; Park et al., 2021) and tables (Chen et al., 2019; Wang et al., 2021; Aly et al., 2021). Unfortunately, knowledge graphs (KG), one of the large-scale data forms, have not yet been fully utilized as a source of evidence.

Figure 1: An example from FACTKG. To verify whether the claim is SUPPORTED or REFUTED, we use triples extracted from DBpedia as evidence.

A KG is a valuable knowledge source due to two advantages. Firstly, KG-based fact verification can provide more reliable reasoning: since the efficacy of real-world fact-checking hinges on this reliability, recent studies have focused on justifying the decisions of a fact verification system (Kotonya and Toni, 2020a). In most existing works, the justification is based on an extractive summary of text evidence. Therefore, the inferential links between the evidence and the verdict are not clear (Kotonya and Toni, 2020b; Atanasova et al., 2020a,b). Compared to text and tables, a KG can simply represent the reasoning process with logic rules on nodes and edges (Liang et al., 2022). This allows us to categorize common types of reasoning with the graphical structure, as shown in Table 1. Secondly, KG-based fact verification techniques have broad applicability beyond the domain of fact-checking. For example, modern dialogue systems (e.g.
*Amazon Alexa* (Amazon Staff, 2018), Google Assistant (Kale and Hewavitharana, 2018)) maintain and communicate with internal knowledge graphs, and it is crucial to make sure that their content is consistent with what the user says and otherwise update the knowledge graphs accordingly. If we model the user's utterance as a claim and the dialogue system's internal knowledge graph as a knowledge source, the process of checking their consistency can be seen as a form of KG-based fact verification. More generally, KG-based fact verification techniques can be applied to cases which require checking the consistency between graphs and text.

Reflecting these advantages, we introduce a new dataset, FACTKG: Fact Verification via Reasoning on Knowledge Graphs, consisting of 108k textual claims that can be verified against DBpedia (Lehmann et al., 2015) and labeled as SUPPORTED or REFUTED. We generated the claims based on graph-text pairs from WebNLG (Gardent et al., 2017) to incorporate various reasoning types. The claims in FACTKG are categorized into five reasoning types: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FACTKG consists of claims in various styles including colloquial, making it potentially suitable for a wider range of applications, including dialogue systems.

We conducted experiments on FACTKG to validate whether graph evidence has a positive effect on fact verification. Our experiments indicate that the use of graphical evidence in our model resulted in superior performance compared to baselines that did not incorporate such evidence.

Table 1: Claim examples for each reasoning type in FACTKG. [The graph-pattern diagram accompanying each row is not reproduced here.]

| Reasoning Type | Claim Example |
|----------------|---------------|
| One-hop | AIDAstella was built by Meyer Werft. |
| Conjunction | AIDA Cruise line operated the AIDAstella which was built by Meyer Werft. |
| Existence | Meyer Werft had a parent company. |
| Multi-hop | AIDAstella was built by a company in Papenburg. |
| Negation | AIDAstella was not built by Meyer Werft in Papenburg. |

## 2 Related Works

## 2.1 Fact Verification and Structured Data

There are various types of knowledge used in fact verification such as text, tables, and knowledge graphs. Research on fact verification has mainly focused on text data as evidence (Thorne et al., 2018; Augenstein et al., 2019; Jiang et al., 2020; Schuster et al., 2021; Park et al., 2021). FEVER (Thorne et al., 2018), one of the representative fact verification datasets, is a large-scale manually annotated dataset derived from Wikipedia. Other recent works leverage ambiguous QA pairs (Park et al., 2021), factual changes (Schuster et al., 2021), multiple documents (Jiang et al., 2020), or claims sourced from fact-checking websites (Augenstein et al., 2019). Fact verification on table data is also studied (Chen et al., 2019; Wang et al., 2021; Aly et al., 2021). Table-based datasets such as SEMTAB-FACTS (Wang et al., 2021) or TabFact (Chen et al., 2019) require reasoning abilities over tables, and FEVEROUS (Aly et al., 2021) validates claims utilizing table and text sources. We refer the reader to Guo et al. (2022) for a comprehensive survey.

There have been several tasks that utilize knowledge graphs (Dettmers et al., 2018).
For example, FB15K (Bordes et al., 2013), FB15K237 (Toutanova and Chen, 2015), and WN18 (Bordes et al., 2013) are built upon subsets of largescale knowledge graphs, Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) respectively. These datasets only use a single triple as a claim, and thus the claims only require One-hop reasoning. However, FACTKG is the first KG-based fact verification dataset with natural language claims that require complex reasoning. In terms of the evidence KG size, FACTKG uses the entire DBpedia (0.1B triples), which is significantly larger than previous datasets (FB15K: 592K, FB15K-237: 310K, WN18: 150K). ## 2.2 Webnlg As constructing a KG-based fact verification dataset requires a paired text-graph corpus, we utilized WebNLG as a basis for FACTKG. WebNLG ![2_image_0.png](2_image_0.png) is a dataset for evaluating triple-based natural language generation, which consists of 25,298 pairs of high-quality text and RDF triples from DBpedia. WebNLG contains diverse forms of graphs and the texts are created by linguistic experts, which gives it great variety and sophistication. In the 2020 challenge2, the dataset has been expanded to 45,040 text-triples pairs. We used this 2020 version of WebNLG when constructing our dataset. ## 3 Data Construction Our goal is to diversify the graph reasoning patterns and linguistic styles of the claims. To achieve this, we categorize five reasoning types of claims: One-hop, Conjunction, Existence, Multi-hop, and Negation. Our claims are generated by transforming the sentences in Sw, a subset of WebNLG's text-graph pairs (Section 3.1).3 Next, we also diversified the claims with colloquial style transfer and presupposition (Section 3.2). ## 3.1 Claim Generation 3.1.1 One-Hop The most basic type of claim is one-hop, which covers only one knowledge triple. One-hop claims can be verified by checking the existence of a single corresponding triple. In the second row of Table 1, the claim is SUPPORTED when the triple (*AIDAstella*, ShipBuilder, *Meyer Werft*) exists. We take the sentences that consist of a single triple in Sw as SUPPORTED claims. REFUTED claims are created by substituting SUPPORTED claims in two ways: *Entity substitution* and *Relation substitution*. In *Entity substitution*, we replace an entity e in SUPPORTED claim C with another entity e˜ of the same entity type. In order to ensure that the label of the substituted sentence C˜ is RE-FUTED, the entity e˜ should satisfy the following two conditions. i) To select e˜ that is irrelevant to C, e˜ is outside 4-hops from all entities in C on DBpedia, ii) the results of NLI (C, C˜) and NLI (C˜, C) are both CONTRADICTION. 4In *Relation substitution*, we replace a relation in the SUPPORTED claim with another relation. We replace the relation of a triple in the claim with another relation that takes the same entity types for the head and tail as the original relation (e.g. currentTeam ↔ formerTeam). The four groups of compatible relations are listed in Table 6. The overall process of the substitution methods is illustrated in Figure 2. ![3_image_0.png](3_image_0.png) ## 3.1.2 Conjunction A claim in the real world can include a mixture of different facts. To incorporate this, we construct a conjunction claim composed of multiple triples. Conjunction claims are verified by the existence of all corresponding triples. In the third row of Table 1, the claim can be divided into two parts: "AIDA Cruise line operated the AIDAstella." and "AIDAstella was built by Meyer Werft.". 
The claim is SUPPORTED when all the triples (*AIDAstella*, ShipOperator, *AIDA Cruises*), (AIDAstella, ShipBuilder, *Meyer Werft*) exist. To implement this idea, we extracted sentences consisting of more than one triple from Sw and used them as the SUPPORTED claims. To create REFUTED claims, we use Entity substitution method on these SUPPORTED claims. ## 3.1.3 Existence People may make claims that assert the existence of something (e.g. *"She has two kids."*). From the view of a triple, this corresponds to the head or tail missing. To reflect this scenario, we formulate a claim by extracting only {head, *relation*} or {tail, *relation*} from a triple. Existence claims are generated using templates and they are divided into two categories: *head-relation* (e.g. template: {*head*} had a(an) {*relation*}.) and *tailrelation* (e.g. template: {*tail*} was a {*relation*}.). SUPPORTED claims are constructed by randomly extracting {head, *relation*} or {tail, *relation*} in triples from Sw. The REFUTED claims are constructed using the same type of entities as represented in the claim, but with different relations. However, it is possible that unrealistic claims may be generated in this manner. For example, "Meyer Werft had a location." or *"Papenburg was a location."* can be created from the triple (*Meyer Werft*, location, *Papenbug*). Hence, we selected 22 relations out of all relations that lead to realistic claims. Templates used for both categories and examples of generated claims are in Table 7. ## 3.1.4 Multi-Hop We also consider multi-hop claims that require the validation of multiple facts where some entities are underspecified. Entities in this claim can be connected by a sequence of relations. For example, the multi-hop claim in Table 1 is SUPPORTED if the triple (AIDAstella, *ShipBuilder*, x) and the triple (x, location, *Papenburg*) are present in the graph. The goal is to verify the existence of a path on the graph that starts from *AIDAstella* and reaches *Papenburg* through the relations *ShipBuilder* and *location*. Figure 3 shows how a SUPPORTED multi-hop claim CM can be generated by replacing an entity e of the conjunction claim C with its type name. First, an entity e is selected from the green nodes. Then, the type name t of the entity e is extracted from DBpedia. However, each entity e in DBpedia has several types T = {t1, t2*, ..., t*N }, and it is not annotated which type is relevant when e is used in a claim. So it is necessary to select one of them. For each tn ∈ T, we insert it next to the entity e in the claim C and measure the perplexity score of the modified claim using GPT2-large (Radford et al., 2019). Then we replace e in the claim with the type name that had the lowest score. The REFUTED claim is generated by applying *Entity substitution* to the SUPPORTED claim. ## 3.1.5 Negation For each of the four methods for generating claims, we develop claims that incorporate negations. One-hop We use the Negative Claim Generation Model (Lee et al., 2021) which was fine-tuned on the opposite claim set in the WikiFactCheckEnglish dataset (Sathe et al., 2020).5 To ensure the quality of the generated sentences, we generate 100 opposing claims for each original claim, then only use those that preserve all entities, and contain negations (e.g. 'not' or *'never'*). Also, similar to Entity substitution method, we only use sentences whose NLI relation with the original sentences are CONTRADICTION bidirectionally. 
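A minimal sketch of this bidirectional contradiction check (the same check used for *Entity substitution* in Section 3.1.1) is shown below. The specific checkpoint `roberta-large-mnli` is an assumption for illustration; the paper does not state which NLI model was used.

```python
# Sketch: keep a candidate only if NLI(original, candidate) and
# NLI(candidate, original) are both CONTRADICTION.
# Assumption: an off-the-shelf MNLI model; the paper does not name one.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def nli_label(premise: str, hypothesis: str) -> str:
    """Return the predicted NLI label for a (premise, hypothesis) pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = nli_model(**inputs).logits.argmax(dim=-1).item()
    return nli_model.config.id2label[pred]  # CONTRADICTION / NEUTRAL / ENTAILMENT

def is_bidirectional_contradiction(original: str, candidate: str) -> bool:
    return (nli_label(original, candidate) == "CONTRADICTION"
            and nli_label(candidate, original) == "CONTRADICTION")
```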
When a negation is added, the label of the generated claim is reversed from the original claim. Conjunction The use of negations (i.e., *'not'*) in various positions within conjunction claims allows the generation of a wide range of negative claim structures. We employ the pretrained language model GPT-J 6B (Wang and Komatsuzaki, 5WikiFactCheck-English consists of pairs of claims, with a positive claim and its corresponding negative claim. 2021) to attach negations to the claim. We construct 16 in-context examples, each with negations attached to the texts corresponding to the first or/and second relation. When a negation is added to the SUPPORTED claims, all the claims become RE-FUTED. However, when it is added to REFUTED claims, the label depends on the position of the negation. When negations are added to all parts with substituted entities, it becomes a SUPPORTED claim. Conversely, other cases preserve the label REFUTED since the negation is added to a place that is not related to entity substitution. A detailed labeling strategy is described in Appendix D.1. Existence The claim is formulated by adding a negation within the templates presented in Section 3.1.3 (e.g. {*tail*} was not a {*relation*}.). Multi-hop A claim is formulated using the GPT-J with in-context examples, similar to conjunction. The truth of this claim is dependent on the presence of a distinctive path that matches the claim's intent. For example, the negative claim "AIDAstella was built by a company, not in Papenburg." is SUPPORTED if x exists where the triples (AIDAstella, *ShipBuilder*, x) and (x, *location*, y) are in DBpedia and y is not *Papenburg*. A more detailed labeling strategy is in Appendix D.2. ## 3.2 Colloquial Style Transfer We transform the claims into a colloquial style via style transfer using both a fine-tuned language model and presupposition templates. ## 3.2.1 Model Based Using a similar method proposed by Kim et al. (2021), we transform the claim obtained from 3.1 into a colloquial style. For example, the claim "Obama was president." is converted to *"Have you* heard about Obama? He was president!". We train FLAN T5-large (Chung et al., 2022) to generate a colloquial style sentence given a corresponding written style sentence from Wizard of Wikipedia (Dinan et al., 2019). However, using sentences generated by the model could have several potential issues: i) the original and generated sentences are lexically the same, ii) some entities are missing in the generated sentences, *iii)* the generated sentences deviate semantically from the original, iv) the generated sentences lack a colloquialism, as mentioned in Kim et al. (2021). To overcome this, we oversample candidate sentences and utilize an additional filtering process. First, to make more diverse samples using the model, we set the temperature to 20.0 and generate 500 samples with beam search. i) To avoid generated sentences that are too similar to the original sentences, only sentences with an edit distance of 6 or more from the original sentence are selected among 500 samples. ii) Then, only those that have verbs and the named entities all preserved are selected.6*iii)* Finally, we use bidirectional NLI to preserve the original semantics. Candidate sentences survive when NLI (O, G) is ENTAILMENT and NLI (G, O) is not CONTRADICTION where O refers to the original sentence and G the generated sentence. On average, only 41.2 generated sentences survived out of 500 samples. 
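The candidate filter described above can be sketched roughly as follows. The edit-distance utility is from NLTK (cited in this paper); the entity, verb, and NLI helpers are passed in as placeholders, since the exact tools used for those steps are not specified here.

```python
# Sketch of the filter: (i) edit distance >= 6 from the original,
# (ii) all named entities and verbs preserved, (iii) NLI(O, G) is ENTAILMENT
# and NLI(G, O) is not CONTRADICTION. Helper functions are assumptions.
from nltk import edit_distance

def keep_candidate(original, generated, extract_entities, extract_verbs, nli_label):
    if edit_distance(original, generated) < 6:             # too close to the original
        return False
    if not extract_entities(original) <= extract_entities(generated):
        return False                                       # a named entity was dropped
    if not extract_verbs(original) <= extract_verbs(generated):
        return False                                       # a verb was dropped
    if nli_label(original, generated) != "ENTAILMENT":     # O must entail G
        return False
    if nli_label(generated, original) == "CONTRADICTION":  # G must not contradict O
        return False
    return True

# Usage (hypothetical helpers returning sets of strings, plus the NLI helper above):
# survivors = [g for g in candidates
#              if keep_candidate(original, g, get_entities, get_verbs, nli_label)]
```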
Additionally, in cases where none of the 500 generated sentences pass the filtering process, we include the original claim in the final dataset as a written style claim. Following the filtering process, the AFLITE method (Sakaguchi et al., 2019), which utilizes adversarial filtering, is applied to select the most colloquial style sentence among the surviving sentences. We include the selected claim in the final dataset as a colloquial style claim. ## 3.2.2 Presupposition A presupposition is something the speaker assumes to be the case prior to making an utterance (Yule and Widdowson, 1996). People often communicate under the presupposition that their beliefs are universally accepted. We construct claims using this form of utterance. The claims in FACTKG are focused on three types of presupposition: factive, non-factive, and structural presuppositions. Factive Presupposition People frequently use verbs like "realize" or *"remember"* to express the truth of their assumptions. The utterance *"I remembered that* {Statement}." assumes that {Statement} is true. Reflecting these features, a new claim is created by appending expressions that contain presupposition (e.g. *"I realized that" or "I* wasn't aware that") to the existing claim. We used eight templates to make factive presupposition claims: the details are appended in Table 8. Non Factive Presupposition The verbs such as *"wish"* are commonly used in utterances that describe events that have not occurred. For example, people say *"I wish that* {Statement}." when {Statement} did not happen. Claims that are created by the non-factive presupposition method are labeled as the opposite of the original one. We used three templates to make these claims: the templates are appended in Table 8. Structural Presupposition This type is in the form of a question that presumes certain facts. We treat the question itself as a claim. For example, *"When was Messi in Barcelona?"* assumes that *Messi was in Barcelona*. To create a natural sentence form, only claims corresponding to one-hop and existence are constructed. For the one-hop claim, a different template was created corresponding to each relation reflecting its meaning (e.g. "When did {*head*} die from {*tail*}?" for the relation *deathCause* and "When was {*head*} directed by {*tail*}?" for relation *director*). Existence claims are also generated based on templates (e.g. "When was {tail} {*relation*}?") using pairs of head-relation or tail-*relation*, similar to Section 3.1.3. The templates used are described in Table 9. ## 3.3 Quality Control To evaluate the quality of our dataset, the labeling strategy and the output of the colloquial style transfer model are assessed. Labeling Strategy When SUPPORTED claims are made in the manner described in Section 3.1, the labeling is straightforward, as all have precise evidence graphs. However, REFUTED claims are generated by random substitution, so there might be a small chance that they remain SUP-PORTED (e.g. *"The White House is in Washington,* D.C." to *"The White House is in America."*). To evaluate this substitution method, randomly sampled 1,000 substituted claims were reviewed by two graduate students. As a result, 99.4% of generated claims were identified as REFUTED by both participants. Colloquial Style Transfer Model We also evaluate the quality of the colloquial style claims generated by the model. A survey was conducted on all claims in the test set by three graduate students. 
As a result, only 9.8% of the claims were selected as *Loss of important information* by at least two reviewers. In addition, to ensure the quality of the test set, only claims that were selected as All facts are preserved by two or more reviewers are included in the test set. The survey details are in Appendix E. | Type | Written | Colloquial | Total | | |------------|-----------|--------------|---------|---------| | Model | Presup | | | | | One-hop | 2,106 | 15,934 | 1,580 | 19,530 | | Conjuction | 20,587 | 15,908 | 602 | 37,097 | | Existence | 280 | 4,060 | 4,832 | 9,172 | | Multi-hop | 10,239 | 16,420 | 603 | 27,262 | | Negation | 1,340 | 12,466 | 1,807 | 15,613 | | Total | 34,462 | 64,788 | 9,424 | 108,674 | Table 2: Dataset statistics of FACTKG for all reasoning types. ## 4 Experiments 4.1 Dataset Statistics Table 2 shows the statistics of FACTKG. We split the claims into train, dev, and test sets with a proportion of 8:1:1. We ensured that the set of triples in each split is disjoint with the ones in other splits. ## 4.2 Experimental Setting We publish FACTKG with sets of claims, graph evidence and labels. The graph evidence includes entities and a set of relation sequences connected to them. For instance, when the claim is given as "AIDAstella was built by a company in Papenburg.", the entity *'AIDAstella'* corresponds to a set of relation sequence [shipBuilder, *location*] and *'Papenburg'* corresponds to [∼location, ∼*shipBuilder*].7 In the test set, we only provide entities as graph evidence. ## 4.3 Baseline We conduct experiments on FACTKG to see how the graphical evidence affects the fact verification task. To this end, we divided our baselines into two distinct categories based on the input type, Claim Only and *With Graphical Evidence*. ## 4.3.1 Claim Only In the *Claim Only* setting, the baseline models receive only the claim as input and predict the label. We used three transformer-based text classifiers, BERT, BlueBERT, and Flan-T5. BERT (Devlin et al., 2018) is trained on Wikipedia from which DBpedia is extracted. So we expect that the model will use evidence memorized in its pretrained weights (Petroni et al., 2019) or exploit structural patterns in the generated claims (Schuster et al., 2019; Thorne and Vlachos, 2021). BlueBERT (Peng et al., 2019) is trained on biomedical corpus, such as Pubmed abstracts. We use 7'∼' indicates that the direction of the relation is reversed 16195 ![6_image_0.png](6_image_0.png) BlueBERT as a comparator for BERT since it has never seen Wikipedia during its pre-training. FlanT5 (Chung et al., 2022) is an enhanced version of T5 (Raffel et al., 2022) encoder-decoder that has been fine-tuned in a mixture of 1.8K tasks. In all experiments, we fine-tune BERT and BlueBERT on our training set. Different from BERT and BlueBERT, we use Flan-T5 in the zero-shot setting. For this setting, we use *"Is this claim True or False?* Claim: " as the prefix. Then, we measure the probability that tokens *True* and *False* will appear in the output. Among the two tokens, we choose the one with the higher probability. ## 4.3.2 With Graphical Evidence In the *With Graphical Evidence* setting, the model receives the claim and graph evidence as input and predicts the label. The baseline we used is a framework proposed by GEAR (Zhou et al., 2019) that enables reasoning on multiple evidence texts. Since GEAR was originally designed to reason over text passages, we change components to suit KG. 
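Before describing the graph-based pipeline, the evidence format of Section 4.2 can be illustrated with a small sketch: starting from an entity, we walk along a relation sequence (a '~' prefix reverses the direction of a relation) and collect the entities that are reachable. The toy graph and function below are purely illustrative; they are not the retrieval code used in the paper and do not access DBpedia.

```python
# Toy KG stored as adjacency lists of (relation, neighbor) pairs; illustrative only.
TOY_KG = {
    "AIDAstella": [("shipBuilder", "Meyer Werft")],
    "Meyer Werft": [("location", "Papenburg"), ("~shipBuilder", "AIDAstella")],
    "Papenburg": [("~location", "Meyer Werft")],
}

def follow_path(start, relations):
    """Return all entities reachable from `start` along the given relation sequence."""
    frontier = {start}
    for rel in relations:
        frontier = {neighbor
                    for node in frontier
                    for r, neighbor in TOY_KG.get(node, [])
                    if r == rel}
    return frontier

# "AIDAstella was built by a company in Papenburg."
print(follow_path("AIDAstella", ["shipBuilder", "location"]))   # {'Papenburg'}
print(follow_path("Papenburg", ["~location", "~shipBuilder"]))  # {'AIDAstella'}
```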
The modified GEAR consists of the subgraph retrieval module and the claim verification module. The pipeline of the modified GEAR is illustrated in Figure 4.

**Subgraph retrieval** We replace document retrieval and sentence selection in GEAR with subgraph retrieval. To retrieve graphical evidence, we train two independent BERT models, namely a relation classifier and a hop classifier. The relation classifier predicts the set of relations R from the claim c and the entity e. The hop classifier is designed to predict the maximum number of hops n to be traversed from e. We take the subgraph of G that is composed only of the relations in R and where the terminal nodes are entities in the claim and less than n hops apart from e, allowing for duplicates and considering the order of relations; this yields a set of candidate relation sequences P. By traversing the knowledge graph starting from e along the relation sequences in P, we choose the paths that can reach another entity that appears in the claim. If none of the paths reaches another entity, then we randomly choose one of the paths. This strategy enables the model to retrieve supporting evidence and counterfactual evidence for the given claim.

The following example illustrates our subgraph retrieval method. The example claim in Section 4.2 contains two entities, *'AIDAstella'* and *'Papenburg'*. In this setting, the hop classifier must predict 2, since those entities are connected by a sequence of two relations, namely *shipBuilder* and *location*. In addition, the relation classifier must correctly predict those two relations. After that, we find all 2-hop paths starting from *'AIDAstella'* along the predicted relations in the knowledge graph. If there is a path that reaches *'Papenburg'*, we can use it as supporting evidence. If not, we randomly select a path.

**Fact verification** We directly employed the claim verification in GEAR and applied some changes to suit the KG setting. Since our evidence is a set of graph paths, we converted them to text by concatenating each triple with the special token <SEP>. We also found that ERNet in GEAR is identical to the Transformer encoder, so we replaced it with a randomly initialized Transformer encoder. To make this paper self-contained, we provide further details about the claim verification of GEAR in Appendix F.

Table 3: Fact verification accuracy on FACTKG.

| Input Type | Model | One-hop | Conjunction | Existence | Multi-hop | Negation | Total |
|---------------|----------|---------|-------------|-----------|-----------|----------|-------|
| Claim Only | BERT | 69.64 | 63.31 | 61.84 | 70.06 | 63.62 | 65.20 |
| Claim Only | BlueBERT | 60.03 | 60.15 | 59.89 | 57.79 | 58.90 | 59.93 |
| Claim Only | Flan-T5 | 62.17 | 69.66 | 55.29 | 60.67 | 55.02 | 62.70 |
| With Evidence | GEAR | 83.23 | 77.68 | 81.61 | 68.84 | 79.41 | 77.65 |

## 4.4 Results

**Fact Verification Results** We evaluated the performance of the models in predicting labels and report the accuracy in Table 3 by reasoning type. As we expected, GEAR outperforms the other baseline models in most reasoning types because it uses graph evidence. In particular, for existence and negation claims, GEAR substantially outperforms the Claim Only baselines. Since the existence claims contain significantly less information than other types, having to search for evidence seems to increase fact verification performance.
In addition, negation claims require additional inference steps compared to other types, so logical reasoning based on graph evidence helps the model make correct predictions. In the multi-hop setting, however, the accuracy of GEAR is lower than BERT, which may be due to the increased complexity of graph retrieval. When entities are far apart and many intermediate nodes are under-specified, the probability of retrieving an incorrect graph increases. In GEAR, the text and evidence paths are concatenated and used as input, so if many incorrect graphs are retrieved, they can lead to incorrect predictions. Also, BERT achieves the highest accuracy among all models in the multi-hop setting, which suggests that masked language modeling helps the model robustly handle unspecified entities in multi-hop claims. In the *Claim Only* setting, all baselines outperform the Majority Class (51.35%), and the BERT model shows the highest performance. BlueBERT was pre-trained in the same manner, but BERT shows superior performance due to its pre-trained knowledge from Wikipedia.

Table 4: W refers to written style claims and C refers to colloquial style claims. W → C means that the model is trained on the written style claim set and tested on the colloquial style claim set. Flan-T5 is not used in this experiment because we use it only in the zero-shot setting.

| Input Type | Model | W → W | W → C | C → C | C → W |
|---------------|----------|-------|-------|-------|-------|
| Claim Only | BERT | 71.75 | 63.85 | 68.10 | 69.43 |
| Claim Only | BlueBERT | 64.76 | 56.28 | 58.77 | 63.92 |
| With Evidence | GEAR | 81.00 | 75.43 | 80.81 | 78.80 |

**Cross-Style Evaluation** We split the dataset into two disjoint sets, written style and colloquial style. We perform a cross-style fact verification task using these datasets, and the results are reported in Table 4. Initially, we anticipated that using different styles for the train and test sets would result in a significant decrease in verification performance. However, contrary to our expectation, in the C→W setting, BERT and BlueBERT show an improvement in performance over C→C. Even for GEAR, the performance score only dropped slightly. Therefore, the results demonstrate that the colloquial style claims are constructed in various forms, which can be beneficial for generalization.

## 5 Conclusion

In this paper, we present FACTKG, a new dataset for fact verification using knowledge graphs. In order to reveal the relationship between fact verification and knowledge graph reasoning, we generated claims corresponding to certain graph patterns. Additionally, FACTKG also includes colloquial-style claims that are applicable to dialogue systems. Our analysis showed that the claims in our dataset are difficult to verify without reasoning over the knowledge graph.

We expect the dataset to offer various research directions. One possible use of our dataset is as a benchmark for justification prediction. Most research on this task generates a text passage as justification, yet this approach lacks a gold reference. In contrast, the interpretability of the knowledge graph allows us to employ it as an explanation for the verdict, for example in question answering in the medical domain where explainability is important. Furthermore, using the KG structure for claim generation allows us to generate a dataset with more complex multi-hop reasoning by design, without relying on annotator creativity.
## Limitations Since WebNLG is derived from 2015-10 version of DBpedia, FACTKG does not reflect the latest knowledge. Also, another limitation of our work is that the claims of FACTKG are constructed based on single sentences, like other crowdsourced fact verification datasets. If the claim is generated by more than one sentences, the dataset will be more challenging. We remain this challenging point as a future work. ## Acknowledgements This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant (No.2019-000075, No.2022-0-00984), and National Research Foundation of Korea (NRF) grant (NRF2020H1D3A2A03100945), funded by the Korea government (MSIT). ## References Rami Aly, Zhijiang Guo, Michael Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. Feverous: Fact extraction and verification over unstructured and structured information. arXiv preprint arXiv:2106.05707. Amazon Staff. 2018. How alexa keeps getting smarter. Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020a. A diagnostic study of explainability techniques for text classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3256–3274, Online. Association for Computational Linguistics. Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020b. Generating fact checking explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7352–7364, Online. Association for Computational Linguistics. Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. Multifc: A real-world multi-domain dataset for evidencebased fact checking of claims. arXiv preprint arXiv:1909.03242. Steven Bird, Ewan Klein, and Edward Loper. 2009. *Natural language processing with Python: analyzing text* with the natural language toolkit. " O'Reilly Media, Inc.". Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. Tabfact: A largescale dataset for table-based fact verification. *arXiv* preprint arXiv:1909.02164. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In *Proceedings of the AAAI* conference on artificial intelligence, volume 32. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In *Proceedings of the International Conference on Learning Representations (ICLR)*. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In *Proceedings* of the 10th International Conference on Natural Language Generation, pages 124–133. Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A survey on automated fact-checking. Transactions of the Association for Computational Linguistics, 10:178–206. Yichen Jiang, Shikha Bordia, Zheng Zhong, Charles Dognin, Maneesh Singh, and Mohit Bansal. 2020. Hover: A dataset for many-hop fact extraction and claim verification. *arXiv preprint arXiv:2011.03088*. Ajinkya Gorakhnath Kale and Sanjika Hewavitharana. 2018. Knowledge graph construction for intelligent online personal assistant. US Patent App. 15/238,679. Byeongchang Kim, Hyunwoo Kim, Seokhee Hong, and Gunhee Kim. 2021. How robust are fact checking systems on colloquial claims? In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1535–1548, Online. Association for Computational Linguistics. Neema Kotonya and Francesca Toni. 2020a. Explainable automated fact-checking: A survey. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 5430–5443, Barcelona, Spain (Online). International Committee on Computational Linguistics. Neema Kotonya and Francesca Toni. 2020b. Explainable automated fact-checking for public health claims. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 7740–7754, Online. Association for Computational Linguistics. Minwoo Lee, Seungpil Won, Juae Kim, Hwanhee Lee, Cheoneum Park, and Kyomin Jung. 2021. Crossaug: A contrastive data augmentation method for debiasing fact verification models. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, CIKM '21. Association for Computing Machinery. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167–195. Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu, and Fuchun Sun. 2022. Reasoning over different types of knowledge graphs: Static, temporal and multi-modal. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. George A Miller. 1995. Wordnet: a lexical database for english. *Communications of the ACM*, 38(11):39–41. Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. Faviq: Fact verification from information-seeking questions. arXiv preprint arXiv:2107.02153. Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of bert and elmo on ten benchmarking datasets. 
In *Proceedings of the 2019 Workshop on Biomedical Natural Language Processing* (BioNLP 2019), pages 58–65. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1). Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. *arXiv* preprint arXiv:1907.10641. Aalok Sathe, Salar Ather, Tuan Manh Le, Nathan Perry, and Joonsuk Park. 2020. Automated fact-checking of claims from wikipedia. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6874–6882, Marseille, France. European Language Resources Association. Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get your vitamin C! robust fact verification with contrastive evidence. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 624–643, Online. Association for Computational Linguistics. Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel Roberto Filizzola Ortiz, Enrico Santus, and Regina Barzilay. 2019. Towards debiasing fact verification models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3419–3425, Hong Kong, China. Association for Computational Linguistics. James Thorne and Andreas Vlachos. 2021. Elastic weight consolidation for better bias inoculation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 957–964, Online. Association for Computational Linguistics. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In *Proceedings of the 3rd Workshop on* Continuous Vector Space Models and their Compositionality, pages 57–66, Beijing, China. Association for Computational Linguistics. Ben Wang and Aran Komatsuzaki. 2021. GPT-J6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/ mesh-transformer-jax. 
Nancy XR Wang, Diwakar Mahajan, Marina Danilevsky, and Sara Rosenthal. 2021. Semeval2021 task 9: Fact verification and evidence finding for tabular data in scientific documents (sem-tab-facts). *arXiv preprint arXiv:2105.13995*. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. G. Yule and H.G. Widdowson. 1996. *Pragmatics*. Oxford Introduction to Language Study ELT. OUP Oxford. Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 892–901, Florence, Italy. Association for Computational Linguistics. ## A Qualitative Analysis We report claims and the retrieved graphical evidence in Table A. We also report the correctness of the prediction of GEAR at the first column of our table, **Result**. We used subgraph retrieval to retrieve graph path visualize one of them. By checking the retrieved evidence, We can recognize why the model verdict the claims as refuted or supported. This shows that our graph evidence is fully interpretable. ## B Relation Substitution The four groups of compatible relations are listed in Table 6. ## C Full List Of Templates C.1 Existence The templates to generate existence claims are described in Table 7. ## C.2 Factive And Non Factive Presupposition Factive and Non Factive presupposition templates are in Table 8. ## C.3 Structural Presupposition Structural presupposition templates are in Table 9. ## D Negation Labeling D.1 Conjunction When the negation is added to REFUTED claims, the label depends on the position of the negation. If negations are added to all parts with substituted entities, it becomes a SUPPORTED claim. Conversely, other cases preserve the label REFUTED since the negation is added to a place that is not related to entity substitution. Detailed examples are described in Table 10 and Table 11. ## D.2 Multi-Hop The truth of this claim is dependent on the presence of a distinctive path that matches the claim's intent. For example, when verifying the claim in the fourth row of the Table 12, we check the existence of an entity which is connected to *'AIDAstella'* with relation *builder* and not connected to *'New York'* with relation *location*. ## E Colloquial Style Claim Survey A total of 9 graduate students participated in the survey to evaluate how much information was lost in the colloquial style claim compared to original claim. Since each person has different criteria for 'important information', the labels are divided into five rather than two. The labels are as follows, i) All facts are preserved, ii) Minor loss of information or minor grammatical errors, iii) Ambiguous whether the lost information is important, iv) It is ambiguous, but the lost information may be important, v) Loss of important information. And as a result, only 9.8% of the claims were selected as v) Loss of important information by at least two reviewers. ## F Details Of Gear To make this paper self-contained, we recall some details of the claim verification in GEAR (Zhou et al., 2019). The authors of GEAR (Zhou et al. 
(2019)) used sentence encoder to obtain representations for the claim and the evidence. Then they built a fully-connected evidence graph and used evidence reasoning network (ERNet) to propagate information between evidence and reason over the graph. Finally, they used an evidence aggregator to infer the final results. ## Sentence Encoder Given an input sentence, Zhou et al. (2019) employed BERT (Devlin et al., 2018) as a sentence encoder by extracting the final hidden state of the [CLS] token as the representation. Specifically, given a claim c and N pieces of retrieved evidence {e1, e2*, ..., e*N }, they fed each evidence-claim pair (ei, c) into BERT to obtain the evidence representation ei. they also fed the claim into BERT alone to obtain the claim c. That is, $$\begin{array}{l}{{\mathbf{e}_{i}=\mathrm{BERT}\left(e_{i},c\right),}}\\ {{\mathbf{c}=\mathrm{BERT}\left(c\right).}}\end{array}\tag{1}$$ ## Evidence Reasoning Network Let h t = {h t1 , h t2 , ..., h t N } denote the hidden states of the nodes in layer t, where h t i ∈ RF ×1and F is the number of features in each node. The initial hidden state of each evidence node h 0 i was initialized by the evidence: h 0 i = ei. The authors proposed an Evidence Reasoning Network (ERNet) to propagate information among the evidence nodes. They first used an MLP to calculate the attention | Result | Claim | Retrieved Path | |---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------| | Correct | Yeah! Alfredo Zitarrosa died in a city in Uruguay | (Uruguay, country, Montevideo, deathPlace, Alfredo_Zitarrosa) | | I have heard that Mobyland had a successor. | (Mobyland, successor, "Aero 2") | | | Wrong | I realized that a book was written by J. V. Jones and has the OCLC number 51969173 (J._V._Jones, author, A_Cavern_of_Black_Ice, 'oclc', "39456030") | | Table 5: Examples of claims in FACTKG and retrieved graph path. | Group number | Head type | Tail type | Relation set | |----------------|-------------|-------------|-----------------------------------------------------------------------------------------------------------------------------------------------| | 1 | person | person | [child, children], [successor], [parent], [predecessor, precededBy], [spouse], [vicePresident, vicepresident], [primeminister, primeMinister] | | 2 | person | team | [currentteam, currentclub, team], [debutTeam, formerTeam] [chairperson, chairman, leader, leaderName], [manager], [founder], | | 3 | non-person | person | [director], [crewMembers], [producer], [discoverer], [creator], [editor], [writer], [coach], [starring], [dean] | | 4 | non-person | non-person | [owningCompany, parentCompany, owner], [headquarter], [builder] | Table 6: Group information of *Relation Substitution*. coefficients between a node i and its neighbor j (j ∈ Ni), where W ∈ RC×F and b ∈ RC×1are parameters, and C is the number of prediction labels. pij = Wt−1 1(ReLU(Wt−1 0(h t−1 i∥h t−1 j))), (2) where Ni denotes the set of neighbors of node i, Wt−1 0 ∈ RH×2F and Wt−1 1 ∈ R1×H are weight matrices, and ·∥· denotes the concatenation operation. Then, they normalized the coefficients using the softmax function, $$\alpha_{i j}=\mathrm{softmax}_{j}(p_{i j})={\frac{\exp(p_{i j})}{\sum_{k\in{\mathcal{N}}_{i}}\exp(p_{i k})}}.$$ . 
(3) Finally, the normalized attention coefficients were used to compute a linear combination of the neighbor features and thus obtained the features for node i at layer t, $$\mathbf{h}_{i}^{t}=\sum_{j\in{\mathcal{N}}_{i}}\alpha_{i j}\mathbf{h}_{j}^{t-1}.$$ j. (4) The authors fed the final hidden states of the evidence nodes {h T 1 , h T 2 , ..., h T N } into their evidence aggregator to make the final inference. ## Evidence Aggregator The authors employed an evidence aggregator to gather information from different evidence nodes and obtained the final hidden state o ∈ RF ×1. We used the mean aggregator in GEAR. The mean aggregator performed the *elementwise* Mean operation among hidden states. o = Mean(h T 1, h T 2*, ...,* h T N ). (5) Once the final state o is obtained, the authors employed a one-layer MLP to get the final prediction l. l = softmax(ReLU(Wo + b)), (6) | Type | Relation | Template | Example sentences | |----------------------------------------------------------------------------------|------------------------------------------|--------------------------------------|------------------------| | successor, spouse, children, parentCompany, capital, garrison, nickname, mascot, | {Head} had a(an) {Relation}. | Obama had a spouse. | | | youthclubs, predecessor, child, precededBy, religion, awards, award | {Head} did not have a(an) {Relation}. | Apple did not have a parent company. | | | Head-Relation college, university | {Head} attended {Relation}. | Obama attended university. | | | {Head} did not attend {Relation}. | Obama did not attend college. | | | | Tail-Relation | president, primeMinister, vicepresident, | {Tail} was a {Relation}. | Obama was a president. | | primeminister, vicePresident | {Tail} was not a {Relation}. | Obama was not a vice president. | | | Presupposition type | Template | Claim Example | | |---------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------| | I forgot that {claim}. | I forgot that Obama was president. | | | | I realized that {claim}. | I realized that Obama was president. | | | | I wasn't aware that {claim}. | I wasn't aware that Obama was president. | | | | I didn't know that {claim}. | I didn't know that Obama was president. | | | | I remembered that {claim}. | I remembered that Obama was president. | | | | I explained that {claim}. | I explained that Obama was president. | | | | I emphasized that {claim}. | I emphasized that Obama was president. | | | | I understand that {claim}. | I understand that Obama was president. | | | | Factive | I imagined that {claim}. | I imagined that Obama was president. | | | Non Factive | I wish that {claim}. | I wish that Obama was president. | | | If only {claim}. | If only Obama was president. | | | | Table 8: Templates for factive, non factive presupposition. | | | | | Type | Relations | Template | Example claim | | leader, leaderName, mayor, | When was {tail} a {relaion} of {head}? | When was Elizabeth II a leader of Alderney? | | | senators, president, manager, generalManager, coach, chairman, dean team, draftTeam, clubs, | When did {head} play for {tail}? | When did Aaron Boogaard play for Wichita | | | managerClub, managerclubs | Thunder? | | | | operator | When did {tail} operate {head}? | When did Aktieselskab operate Aarhus Airport? 
| | | occupation, formerName | When was {head} a {tail}? | When was HBO a The Green Channel? | | | almaMater | When did {head} graduate from the {tail}? When did Ab Klink graduate from the Erasmus University Rotterdam? | | | | fossil | When was {tail} fossil found in {head}? | When was Smilodon fossil found in California? | | | director | When was {head} directed by {tail}? | When was Death on a Factory Farm directed by Sarah Teale? | | | producer | When was {head} produced by {tail}? | When was Turn Me On (album) produced by Wharton Tiers? | | | foundation, foundedBy, founder | When was {head} founded by {tail}? | When was MotorSport Vision founded by Jonathan Palmer? | | | deathCause | When did {head} die from {tail}? | When did James Craig Watson die from Peritonitis? | | | creators, creator | When was {head} created by {tail}? | When was April O'Neil created by Peter Laird? | | | starring | When was {head} starring {tail}? | When was Bananaman starring Graeme Garden? | | | shipBuilder, builder | When was {head} built by {tail}? | When was A-Rosa Luna built by Germany? | | | designer | When was {head} designed by {tail}? | When was Atatürk Monument (˙Izmir) designed by Pietro Canonica? | | | shipCountry | When did {head} come from {tail}? | When did ARA Veinticinco de Mayo (V-2) come from Argentina? | | | spouse | When was {head} married to {tail}? | When was Abraham A. Ribicoff married to Ruth Ribicoff? | | | champions | When was {tail} champion at the {head}? | When was Juventus F.C. champion at the Serie A? | | | recordedIn | When was {head} recorded in {tail}? | When was Bootleg Series Volume 1: The Quine Tapes recorded in San Francisco? | | | One-hop | successor, spouse, children, | | | | Head-Relation | | | | | Existence | parentCompany, capital, garrison, nickname, mascot, | What is the name of {head}'s {relation}? | What is the name of Obama's child? | | youthclubs, predecessor, child, precededBy, religion, awards, award college, university | When did {head} attend {relation}? | When did Obama attend university? | | | president, primeMinister, | When was {tail} {relation}? | When was Obama President? | | | vicepresident, primeminister, | Where was {tail} {relation}? | Where was Biden Vice President? | | | Tail-Relation | vicePresident | What country was {tail} {relation}? | What country was Obama President? | | Table 9: Templates for structural presupposition. | | | | Table 7: Templates for Existence claims. | Graph | Claim Example | Label | | |---------|-----------------|-----------------------------------------------------------|-----------| | r2 | r4 | AIDAstella was built by Meyer Werft in Papenburg. | SUPPORTED | | a | m | p | | | r2 | r4 | AIDAstella was built by Meyer Werft in New York. | REFUTED | | a | m | n | | | r2 | r4 | AIDAstella was not built by Meyer Werft in New York. | REFUTED | | a | m | n | | | r2 | r4 | AIDAstella was built by Meyer Werft, not in New York. | SUPPORTED | | a | m | n | | | r2 | r4 | AIDAstella was not built by Meyer Werft, not in New York. | REFUTED | | a | m | n | | Table 10: r2: shipBuilder, r4: location, m: Meyer Werft, a: AIDAstella, n: New York, p: Papenburg. | Graph | Claim Example | Label | | |---------|-----------------|--------------------------------------------------------|-----------| | r2 | r4 | AIDAstella was built by Meyer Werft in Papenburg. | SUPPORTED | | a | m | p | | | r2 | r4 | AIDAstella was built by Samsung in Papenburg. | REFUTED | | a | s | p | | | r2 | r4 | AIDAstella was not built by Samsung in Papenburg. 
| REFUTED | | a | s | p | | | r2 | r4 | AIDAstella was built by Samsung, not in Papenburg. | REFUTED | | a | s | p | | | r2 | r4 | AIDAstella was not built by Samsung, not in Papenburg. | SUPPORTED | | a | s | p | | Table 11: r2: shipBuilder, r4: location, m: Meyer Werft, a: AIDAstella, p: Papenburg, s: Samsung. | Graph | Claim Example | Label | | |---------|-----------------|---------------------------------------------------------|------------| | r2 | r4 | AIDAstella was built by a company in Papenburg. | SUPPORTED | | s | x | p | | | r2 | r4 | AIDAstella was built by a company in New York. | REFUTED | | s | x | n | | | r2 | r4 | AIDAstella was not built by a company in New York. | PATH CHECK | | s | x | n | | | r2 | r4 | AIDAstella was built by a company, not in New York. | PATH CHECK | | s | x | n | | | r2 | r4 | AIDAstella was not built by a company, not in New York. | PATH CHECK | | s | x | n | | Table 12: r2: shipBuilder, r4: location, s: AIDAstella, n: New York. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? after 5.conclusion, we added limitation A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? on the first page, 1. introduction ✓ A4. Have you used AI writing assistants when working on this paper? grammarly and translation ## B ✓ **Did You Use Or Create Scientific Artifacts?** Publicly Available Webnlg B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** 4 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. 
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 3.3 D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
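As a supplement to the GEAR-style evidence aggregation described earlier in this paper (the attention-weighted propagation of Eq. 4, the mean aggregator of Eq. 5, and the one-layer MLP of Eq. 6), the following is a minimal PyTorch sketch. Tensor shapes, variable names, and the two-class output are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def propagate(node_states: torch.Tensor, attention: torch.Tensor) -> torch.Tensor:
    """One propagation step (Eq. 4): each node becomes the attention-weighted
    sum of its neighbours' previous-layer features."""
    # node_states: (N, F); attention: (N, N) with rows already softmax-normalised.
    return attention @ node_states

class MeanEvidenceAggregator(nn.Module):
    """Mean aggregation over evidence nodes followed by a one-layer MLP (Eqs. 5-6)."""
    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, num_classes)

    def forward(self, node_states: torch.Tensor) -> torch.Tensor:
        o = node_states.mean(dim=0)                        # Eq. 5: element-wise mean of h_1^T ... h_N^T
        return F.softmax(F.relu(self.linear(o)), dim=-1)   # Eq. 6: softmax(ReLU(W o + b))

# Example: a SUPPORTED/REFUTED distribution from five 768-dimensional evidence states.
# MeanEvidenceAggregator(768, 2)(torch.randn(5, 768))
```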
labrak-etal-2023-drbert
{D}r{BERT}: A Robust Pre-trained Model in {F}rench for Biomedical and Clinical domains
https://aclanthology.org/2023.acl-long.896
In recent years, pre-trained language models (PLMs) have achieved the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general-domain data, specialized ones have emerged to treat specific domains more effectively. In this paper, we propose an original study of PLMs for the medical domain in French. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks. In particular, we show that we can take advantage of an already existing biomedical PLM in a foreign language by further pre-training it on our targeted data. Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under free license on which these models are trained.
# Drbert: A Robust Pre-Trained Model In French For Biomedical And Clinical Domains Yanis Labrak∗1,4 **Adrien Bazoge**∗2,3 Richard Dufour2 **Mickael Rouvier**1 Emmanuel Morin2 Béatrice Daille2 **Pierre-Antoine Gourraud**3 1LIA, Avignon Université 2Nantes Université, École Centrale Nantes, CNRS, LS2N, UMR 6004, F-44000 Nantes, France 3Clinique des données, CHU de Nantes, Nantes Université 4Zenidoc {firstname.lastname}@univ-avignon.fr {firstname.lastname}@univ-nantes.fr ## Abstract In recent years, pre-trained language models (PLMs) achieve the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general domain data, specialized ones have emerged to more effectively treat specific domains. In this paper, we propose an original study of PLMs in the medical domain on French language. We compare, for the first time, the performance of PLMs trained on both public data from the web and private data from healthcare establishments. We also evaluate different learning strategies on a set of biomedical tasks. In particular, we show that we can take advantage of already existing biomedical PLMs in a foreign language by further pre-train it on our targeted data. Finally, we release the first specialized PLMs for the biomedical field in French, called DrBERT, as well as the largest corpus of medical data under free license on which these models are trained. ## 1 Introduction During the last years, pre-trained language models (PLMs) have been shown to significantly improve performance on many Natural Language Processing (NLP) tasks. Recent models, such as BERT (Devlin et al., 2018) or RoBERTa (Liu et al., 2019), are more and more taking advantage of the huge quantities of unlabeled data thanks to recent unsupervised approaches like masked language models based on the Transformers architecture (Vaswani et al., 2017). Most of these PLMs are frequently pre-trained on general domain corpora, such as news articles, books or encyclopedia. An additional step of fine-tuning can also be applied to use these PLMs on a targeted task (Devlin et al., 2018). Although these generic models are used in various contexts, recent works have shown that optimal performance in specialized domains, such as *Equal contribution. finance (Yang et al., 2020), medical (Yang et al., 2022) or traveling (Zhu et al., 2021), can only be achieved using PLMs adapted to the targeted conditions. The adaptation of language models to a domain generally follows two strategies. The first is the training *from scratch* of a new model using only textual data of the targeted specialty. The second approach, called *continual pre-training* (Howard and Ruder, 2018), pursues the training of already pre-trained models, allowing them to pass from a generic model to a specialized one. Even if studies have shown that the first strategy generally offers better performance (Lee et al., 2019), the second requires a much more constrained number of resources (Chalkidis et al., 2020; El Boukkouri et al., 2022) whether in terms of computing resources or of amount of data. However, domain specific data are generally difficult to obtain, resulting in quite a few specialized PLMs available. This difficulty is even greater for languages other than English. For the medical domain, data produced during clinical care contain the finesse of medical reasoning and are the most prevalent in terms of quantity. However, they are rarely accessible due to patient privacy constraints. 
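The two adaptation strategies mentioned above, training a specialized model from scratch versus continual pre-training of an existing checkpoint, differ mainly in how the model and tokenizer are initialized. The sketch below illustrates the contrast with the HuggingFace Transformers API; the `camembert-base` identifier and the 32k vocabulary size are stand-ins for illustration, not the exact configurations evaluated later in the paper.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          RobertaConfig, RobertaForMaskedLM)

# Strategy 1: from scratch -- random weights and a tokenizer built on in-domain text.
config = RobertaConfig(vocab_size=32_000)       # vocabulary learned on the domain corpus
scratch_model = RobertaForMaskedLM(config)      # randomly initialised parameters

# Strategy 2: continual pre-training -- reuse a general-domain checkpoint and its
# tokenizer, then keep optimising the masked-language-modelling objective on domain data.
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
continued_model = AutoModelForMaskedLM.from_pretrained("camembert-base")
```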
In this paper, we describe and freely disseminate DrBERT, the first RoBERTa-based PLMs specialized in the biomedical field in French, and the corpus that had allowed their trainings. We also propose an original study concerning the evaluation of different language model pre-training strategies for the medical field, while comparing it with our model derived from clinical private data, called ChuBERT. In our experiments, PLMs with publicly available biomedical data can result in similar or better performance compared to highly specialized private data collected from hospital reports or to larger corpora having only generic data. Our contributions can be summarized as follows: - A new benchmark aggregating a set of NLP tasks in the medical field in French has been set up, making it possible to evaluate language models at the syntactic and semantic level (multi-label classification, part-of-speech tagging, named entity recognition, etc.). - A large textual data collection, called NACHOS, crawled from multiple biomedical online sources. - The construction and evaluation of the first open-source PLMs in French for the biomedical domain based on RoBERTa architecture, called DrBERT, including the analysis of different pre-training strategies. - A set of models using both public and private data trained on comparable data sizes. These models were then compared by evaluating their performance on a wide range of tasks, both public and private. - The free distribution of the NACHOS corpus and of the public PLMs under the open-source MIT license1. ## 2 Related Work BERT (Devlin et al., 2018) is a contextualized word representation model based on the concept of masked language model and pre-trained using bidirectional Transformers (Vaswani et al., 2017). Since its release, it obtains state-of-the-art (SOTA) performance on almost every NLP tasks, while requiring minimal task-specific architectural modifications. However, the training cost of such a model is very high in terms of computation due to the complexity of each training objective and the quantity of data needed. Consequently, new methods emerge and propose more effective ways of performing pre-training. One of them is RoBERTa (Liu et al., 2019). In order to improve the initial BERT model, the authors made some simple design changes in its training procedure. They modify the masked-language strategy to perform dynamic masking, remove the next sentence prediction task, increase dramatically the batch sizes and use significantly more data during a longer training period. Nowadays, RoBERTa is the standard model for a lot of NLP tasks and languages, including French with CamemBERT model (Martin et al., 2020). 1https://drbert.univ-avignon.fr/ Recently, multiple language models have been developed for biomedical and clinical fields through unsupervised pre-training of Transformerbased architectures, mainly for English language. One of the first models was BioBERT (Lee et al., 2019), which is based on the initially pretrained BERT model and further pre-trained using biomedical-specific data through continual pretraining. Other models like BlueBERT (Peng et al., 2019) and ClinicalBERT (Huang et al., 2019) also used this approach on various data sources. An alternative method, when enough in-domain data is available, is to directly pre-train models from scratch (SciBERT (Beltagy et al., 2019), PubMedBERT (Gu et al., 2021), etc.). 
Note that SciBERT was trained on mixed-domain data from biomedical and computer science domains, while PubMedBERT on biomedical data only. Gu et al. (2021) disputed the benefits of mixed-domain data for pretraining, based on results obtained on tasks from BLURB benchmark. In other languages than English, BERT-based models are much rarer and primarily rely on continual pre-training. Examples include German (Shrestha, 2021), Portuguese (Schneider et al., 2020), and Swedish (Vakili et al., 2022). Only the Spanish (Carrino et al., 2021) and Turkish (Türkmen et al., 2022) models were trained from scratch with biomedical and clinical data from various sources. For French, there is, to our knowledge, no publicly available model specifically built for the biomedical domain. ## 3 Pre-Training Datasets In the biomedical domain, previous works (Gu et al., 2021) on PLMs highlighted the importance of matching the data sources used for its training to the targeted downstream tasks. Due to their sensitive nature (protection of user data, protected health information of patients, etc.), medical data are extremely difficult to obtain. Massive collection of web data related to this domain appears to be a solution that can overcome this lack. However, these web documents vary in terms of quality. No comparison has been made between PLMs based on specific domain data from the web and those on private documents from clinical data warehouses, whose quality can be controlled. We extracted two different medical datasets for French. The first one gathers data crawled from a variety of free-of-use online sources, and the second one private hospital stays reports from the Nantes University Hospital. Table 1 gives a general overview of the two collected corpora. The public web-based data, detailed in Section 3.1, allowed the constitution of a corpus, called NACHOS*large*, containing 7.4 GB of data. The private dataset, called NBDW*small* is described in Section 3.2 and contains 4 GB of data. In order to perform comparable experiments, we extracted a NACHOS sub-corpus (NACHOS*small*) of the same size as the private data. Finally, Section 3.3 describes the pre-processing applied to both datasets. | Corpus | Size | #words | #sentences | |---------------------|--------|----------|--------------| | NACHOSlarge (pub.) | 7.4 GB | 1.1 B | 54.2 M | | NACHOSsmall (pub.) | 4 GB | 646 M | 25.3 M | | NBDWsmall (private) | 4 GB | 655 M | 43.1 M | | NBDWmixed (both) | 4+4 GB | 1.3 B | 68.4 M | Table 1: Overview of the public (NACHOS) and private (NBDW) collected datasets. NACHOS corpus is now freely available online2. 
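Corpus-level statistics such as the word and sentence counts in Table 1 can be obtained with a short script. The sketch below assumes one sentence per line and whitespace-delimited words; the authors' exact counting rules are not specified, so treat these conventions (and the file name) as assumptions.

```python
def corpus_stats(path: str) -> dict:
    """Count sentences (one per line) and whitespace-separated words in a corpus file."""
    n_sentences = n_words = 0
    with open(path, encoding="utf-8") as corpus:
        for line in corpus:
            line = line.strip()
            if not line:
                continue
            n_sentences += 1
            n_words += len(line.split())
    return {"sentences": n_sentences, "words": n_words}

# e.g. corpus_stats("nachos_small.txt")  # hypothetical file name
```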
## 3.1 Public Corpus - Nachos | Resource name | # words | |-------------------------------|---------------| | HAL | 638,508,261 | | Haute Autorité de Santé (HAS) | 113,394,539 | | Drug leaflets | 74,770,229 | | Medical Websites Scrapping | 64,904,334 | | ANSES SAISINE | 51,372,932 | | Public Drug Database (BDPM) | 48,302,695 | | ISTEX | 44,124,422 | | CRTT | 26,210,756 | | WMT-16 | 10,282,494 | | EMEA-V3 | 6,601,617 | | Wikipedia Life Science French | 4,671,944 | | ANSES RCP | 2,953,045 | | Cerimes | 1,717,552 | | LiSSa | 235,838 | | DEFT-2020 | 231,396 | | CLEAR | 225,898 | | CNEDiMTS | 175,416 | | QUAERO French Medical Corpus | 72,031 | | ANSM Clinical Study Registry | 47,678 | | ECDC | 44,482 | | QualiScope | 12,718 | | WMT-18-Medline | 7,673 | | Total | 1,088,867,950 | We introduce the opeN crAwled frenCh Healthcare cOrpuS (NACHOS), a French medical opensource dataset compiled by crawling a variety of textual sources around the medical topic. It consists of more than one billion words, drawn from 24 French speaking high-quality websites. The corpus includes a wide range of medical information: descriptions of diseases and conditions, information on treatments and medications, general health-related advice, official scientific meeting reports, anonymized clinical cases, scientific literature, thesis, French translation pairs, university health courses and a large range of data obtained from raw textual sources, web scrapping, and optical character recognition (OCR). Table 2 summarizes the different data sources of NACHOS. We use heuristics to split the texts into sentences and aggressively filter out short or low-quality sentences like those obtained from OCR. Finally, we classified them into languages by using our own classifier trained on the multilingual Opus EMEA (Tiedemann and Nygaard, 2004) and MASSIVE (FitzGerald et al., 2022) corpora to keep only the sentences in French. For the 4 GB version of NACHOS (NACHOS*small*), we shuffled the whole corpus and selected randomly 25.3M sentences to maximize data sources homogeneity. The full Table 2: Sources of the NACHOS corpus. ## 3.2 Private Corpus - Nbdw The private corpus, called Nantes Biomedical Data Warehouse (NBDW), was obtained using the data warehouse from Nantes University Hospital. This data warehouse includes different dimensions of patients' related data: socio-demographic, drug prescriptions and other information associated with consultation or hospital stays (diagnosis, biology, imagery, etc.). The authorization to implement and exploit the NBDW dataset was granted in 2018 by the CNIL (Commission National de l'Informatique et des Libertés), the French independent supervisory authority in charge of application of national and European data privacy protection laws; authorization N°2129203. For this work, a sample of 1.7 million deidentified hospital stays reports was randomly selected and extracted from the data warehouse. As described in Table 3, the reports are from various hospital departments, emergency medicine, gynecology and ambulatory care being the most frequent. Each reports was split into tokens sequence with an average of 15.26 words per sequence. Then, all tokens sequences from all reports were shuffled to build the corpus. This corpus contains 655M words, from 43.1M sentences, for a total size of approximately 4 GB. 
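The corpus construction steps described in Sections 3.1 and 3.2 (sentence splitting, aggressive filtering of short or noisy sentences, keeping only French text, shuffling, and subsampling to a target size) can be sketched as below. The regular-expression splitter, the `langid` language identifier, and the length threshold are generic stand-ins; the authors use their own heuristics and an in-house classifier trained on EMEA and MASSIVE.

```python
import random
import re

import langid  # generic stand-in for the authors' in-house French-vs-other classifier

SENT_SPLIT = re.compile(r"(?<=[.!?])\s+")  # crude sentence boundary heuristic (assumption)

def prepare_corpus(raw_documents, min_tokens=5, keep_ratio=0.5, seed=0):
    """Split documents into sentences, drop short/noisy and non-French ones,
    then shuffle and subsample to build a smaller, homogeneous corpus."""
    sentences = []
    for doc in raw_documents:
        for sent in SENT_SPLIT.split(doc):
            sent = sent.strip()
            if len(sent.split()) < min_tokens:        # filter short / OCR-noise sentences
                continue
            if langid.classify(sent)[0] != "fr":      # keep French sentences only
                continue
            sentences.append(sent)
    random.Random(seed).shuffle(sentences)
    return sentences[: int(len(sentences) * keep_ratio)]
```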
2https://drbert.univ-avignon.fr/ | Medical Specialty | # documents | # words | |--------------------------|---------------|-------------| | Other | 474,588 | 192,832,792 | | Emergency Medicine | 235,579 | 90,807,406 | | Ambulatory Care | 119,149 | 50,975,472 | | Consultation | 95,135 | 38,335,804 | | Gynecology | 132,983 | 38,204,495 | | Cardiology | 29,633 | 22,654,583 | | Medical Oncology | 45,603 | 22,587,869 | | Gastroenterology | 46,600 | 21,340,794 | | Orthopaedic Surgery | 82,084 | 18,983,791 | | Hematology | 41,776 | 18,285,983 | | Critical Care Medicine | 20,819 | 16,472,785 | | Otolaryngology | 69,343 | 16,131,214 | | Dermatology | 51,804 | 15,035,412 | | Rheumatology | 31,527 | 14,647,543 | | Urology | 51,535 | 14,272,231 | | Colon and Rectal Surgery | 45,987 | 13,334,550 | | Internal Medicine | 23,904 | 13,282,253 | | Psychiatry | 26,628 | 12,496,503 | | Neurosurgery | 34,481 | 10,360,533 | | Nephrology | 19,171 | 9,548,533 | | Ophthalmology | 19,700 | 4,464,515 | | Total | 1,698,029 | 655,055,061 | Table 3: Sources of the NBDW corpus. ## 3.3 Pre-Processing Step The supplied text data has been split into subword units using SentencePiece (Kudo and Richardson, 2018), an extension of Byte-Pair encoding (BPE) (Sennrich et al., 2016) and WordPiece (Wu et al., 2016) that does not require pre-tokenization (at the word or token level), thereby avoiding the requirement for language-specific tokenizers. We employ a vocabulary size of 32k subword tokens. For each model pre-trained from scratch (see Section 4.2), tokenizers were built using all the sentences from the pre-training dataset. ## 4 Models Pre-Training In this section, we describe the pre-training modalities of our studied models from two points of view: 1) the influence of the data used (size and nature), and 2) the pre-training strategies of the models. These two levels are respectively detailed in Sections 4.1 and 4.2. Section 4.3 finally presents the existing state-of-the-art pre-trained models that will be used for comparison purposes. ## 4.1 Influence Of Data One issue is to identify the amount of data required to create a model that performs well and can compete with models trained on general domains. Recent studies, such as those by Zhang et al. (2020) and Martin et al. (2020), discuss the impact of the size of pre-training data on model performance. According to these studies, some tasks are performing better with fewer data while others, such as commonsense knowledge and reasoning tasks, keep improving performance when pre-training data are added. In the medical field, no study has been conducted to compare the impact of varying the amount of domain-specific data during pre-training, or to assess the impact of the supposedly variable quality of the data depending on their source of collection. We thus propose to evaluate the pre-training of several language models on either NACHOS*small* or NBDW*small* corpus, as described in Section 3. Additionally, we propose a model pre-trained on NACHOS*large* to investigate if having almost twice as much data improves model performance. Finally, a combination of both public NACHOS*small* and NBDW*small* sources for a total of 8 GB (NBDW*mixed*) is explored, to demonstrate if combining private and public data is a viable approach in low-resource domains. 
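The subword segmentation of Section 3.3 can be reproduced with the SentencePiece Python API. Only the 32k vocabulary size comes from the paper; the file names, the BPE model type, and the character coverage below are assumptions.

```python
import sentencepiece as spm

# Train a 32k subword vocabulary directly on raw, untokenized sentences.
spm.SentencePieceTrainer.train(
    input="nachos_small.txt",      # one sentence per line (hypothetical file name)
    model_prefix="drbert_spm",     # writes drbert_spm.model / drbert_spm.vocab
    vocab_size=32000,
    model_type="bpe",              # BPE-style segmentation, as referenced in Section 3.3
    character_coverage=0.9995,     # common setting for alphabetic languages
)

sp = spm.SentencePieceProcessor(model_file="drbert_spm.model")
print(sp.encode("Le patient présente une insuffisance cardiaque aiguë.", out_type=str))
```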
## 4.2 Pre-Training Strategies In addition to the analysis on the size and the sources of data, we also seek to evaluate three training strategies of PLMs for the medical domain: - Training a full model from scratch, including the subword tokenizer. - Continuing the pre-training of the state-of-theart language model for French, called CamemBERT, on our medical-specific data while keeping the initial tokenizer. - Continuing the pre-training of a state-of-theart domain specific language model for medical but here in English, called PubMedBERT, on our French data while keeping the initial tokenizer. Regarding the last strategy, our objective is to compare the performance of an English medical model further pre-trained on our French medical data, against another one based on a generic French model. Indeed, the medical domains shares many terms across languages that make relevant the mixture of resources from two languages. Table 4 summarizes all the configurations evaluated in this paper, integrating both the study of data size and pre-training strategies. | Model name | Strategy | Corpus | |--------------|------------------------|-------------| | DrBERT | From scratch | NACHOSlarge | | DrBERT | From scratch | NACHOSsmall | | ChuBERT | From scratch | NBDWsmall | | ChuBERT | From scratch | NBDWmixed | | CamemBERT | continual pre-training | NACHOSsmall | | PubMedBERT | continual pre-training | NACHOSsmall | | CamemBERT | continual pre-training | NBDWsmall | Model architecture All models pre-trained fromscratch use the CamemBERT base configuration, which is the same as RoBERTa base architecture (12 layers, 768 hidden dimensions, 12 attention heads, 110M parameters). We did not train the large version of our models due to resources limitations. Language modeling We train the models on the Masked Language Modeling (MLM) task using HuggingFace library (Wolf et al., 2019). It consists of randomly replacing a subset of tokens from the sequence by a special token, and asking the model to predict them using cross-entropy loss. In BERT and RoBERTa models (including CamemBERT), 15% of the tokens are randomly selected. Of those selected tokens, 80% are replaced with the <mask> token, 10% remain unchanged and 10% are randomly replaced by a token from the vocabulary. We keep this masking probability of 15% for the training of our models. Optimization & Pre-training We optimize the models for 80k steps with batch sizes of 4,096 sequences, each sequence filled with 512 tokens, allowing to process 2.1M tokens per step. The learning rate is warmed up linearly for 10k steps, going up from zero to the initial 5×10-5 learning rate. Models are trained on 128 Nvidia V100 32 GB GPUs for 20 hours on Jean Zay supercomputer. We use mixed precision training (FP16) (Micikevicius et al., 2017) to reduce the memory footprint, allowing us to enlarge the batch size to 32 sequences on each GPU. ## 4.3 Baseline Models We describe some existing pre-trained models used as baselines in our comparative study. CamemBERT (Martin et al., 2020) is a RoBERTa based model pre-trained totally from scratch on the French subset of OSCAR corpus (138 GB). In our case, this model is our main baseline to compare our results on, since it is the stateof-the-art model for French. We also use the 4 GB model's variants of CamemBERT to compare the impact of the nature and quantity of the data. PubMedBERT (Gu et al., 2021) is a BERT based biomedical-specific model pre-trained totally from scratch on the 3.1 billions words of PubMed corpus (21 GB). 
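Before completing the list of baselines, the pre-training recipe of Section 4.2 can be made concrete with the minimal HuggingFace sketch below. The RoBERTa-base architecture, the 15% masking probability, the 5e-5 learning rate, 10k warm-up steps, and 80k total steps follow the paper; the placeholder tokenizer, the tiny in-memory corpus, and the single-device batch size are simplifying assumptions (the actual runs use a custom 32k SentencePiece vocabulary, 4,096-sequence batches, and 128 V100 GPUs with mixed precision).

```python
from datasets import Dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          RobertaConfig, RobertaForMaskedLM, Trainer, TrainingArguments)

# Placeholder tokenizer; the paper instead trains its own 32k SentencePiece vocabulary.
tokenizer = AutoTokenizer.from_pretrained("camembert-base")

# RoBERTa-base architecture: 12 layers, 768 hidden units, 12 attention heads (~110M params).
config = RobertaConfig(vocab_size=len(tokenizer), num_hidden_layers=12,
                       hidden_size=768, num_attention_heads=12)
model = RobertaForMaskedLM(config)

# Tiny in-memory corpus standing in for NACHOS / NBDW.
dataset = Dataset.from_dict(
    {"text": ["Le patient présente une insuffisance cardiaque aiguë."]}
).map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
      batched=True, remove_columns=["text"])

# Dynamic masking with the 15% probability kept from BERT/RoBERTa.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(output_dir="drbert_sketch", max_steps=80_000, warmup_steps=10_000,
                         learning_rate=5e-5, per_device_train_batch_size=32)
Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```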
ClinicalBERT (Huang et al., 2019) is a clinicalspecific model based on BERT tokenizer and weights, which has been further pre-trained on the 0.5 billion words of MIMIC corpus (3.7 GB). BioBERT v1.1 (Lee et al., 2019) is a biomedicalspecific model based on BERT tokenizer and weights which has been further pre-trained using the 4.5 billion words of PubMed corpus. ## 5 Downstream Evaluation Tasks To evaluate the different pre-training configurations of our models, a set of tasks in the medical domain is necessary. While this NLP domain-specific benchmark exists in English (BLURB (Gu et al., 2021)), none exist for French. In this section, we describe an original benchmark, summarized in Table 5, integrating various NLP medical tasks for French. Among them, some are from publiclyavailable datasets (Section 5.1), allowing the replication of our experiments. Other tasks come from private datasets (Section 5.2) and cannot be shared. However, they are useful to evaluate our models more accurately. ## 5.1 Publicly-Available Tasks ESSAIS / CAS: French Corpus with Clinical Cases The ESSAIS (Dalloux et al., 2021) and CAS (Grabar et al., 2018) corpora respectively contain 13,848 and 7,580 clinical cases in French. Some clinical cases are associated with discussions. A subset of the whole set of cases is enriched with morpho-syntactic (part-of-speech (POS) tagging, lemmatization) and semantic (UMLS concepts, negation, uncertainty) annotations. In our case, we focus only on the POS tagging task. FrenchMedMCQA The FrenchMedMCQA corpus (Labrak et al., 2022) is a publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It contains 3,105 questions coming from real exams of the French medical specialization diploma in pharmacy, integrating single and multiple answers. QUAERO French Medical Corpus The QUAERO French Medical Corpus (Névéol et al., | Thematic / Corpus name | Task | Metric | Train | Dev | Test | |---------------------------------------------------------------|----------------------------|---------------------|---------|-------|--------| | Public Corpus | | | | | | | ESSAIS (Dalloux et al., 2021) | POS Tagging | Macro F1 | 9,693 | 2,077 | 2,078 | | CAS: French Corpus with Clinical Cases (Grabar et al., 2018) | POS Tagging | Macro F1 | 5,306 | 1,137 | 1,137 | | MUSCA-DET - Social Determinants of Health extraction (Task 1) | Nested NER | Macro F1 | 19,861 | 2,207 | 5,518 | | MUSCA-DET - Social Determinants of Health extraction (Task 2) | Multi-label Classification | Macro F1 | 19,861 | 2,207 | 5,518 | | QUAERO French Medical Corpus - EMEA (Névéol et al., 2014) | Nested NER | Weighted F1 | 11 | 12 | 15 | | QUAERO French Medical Corpus - MEDLINE (Névéol et al., 2014) | Nested NER | Weighted F1 | 833 | 832 | 833 | | FrenchMedMCQA (Labrak et al., 2022) | MCQA | EMR / Hamming Score | 2,171 | 312 | 622 | | Private Corpus | | | | | | | Medical report acute heart failure structuration | Named Entity Recognition | Macro F1 | 2,527 | 281 | 703 | | Acute heart failure (aHF) classification | Binary Classification | Macro F1 | 1,179 | 132 | 328 | | Technical Specialties Sorting | Classification Multi-class | Macro F1 | 4,413 | 1,470 | 1,473 | | Medical report structuration prescriptions | Named Entity Recognition | Macro F1 | 61 | 15 | 26 | 2014) introduces an extensive corpus of biomedical documents annotated at the entity and concept levels to provide NER and classification tasks. 
Three text genres are covered, comprising a total of 103,056 words obtained either from EMEA or MEDLINE. Ten entity categories corresponding to UMLS (Bodenreider, 2004) Semantic Groups were annotated, using automatic pre-annotations validated by trained human annotators. Overall, a total of 26,409 entity annotations were mapped to 5,797 unique UMLS concepts. To simplify the evaluation process, we sort the nested labels by alphabetical order and concatenate them together into a single one to transform the task to a usable format for token classification with BERT based architectures. MUSCA-DET MUSCA-DET is a French corpus of sentences extracted from the "Lifestyle" section in clinical notes from Nantes University Hospital biomedical data warehouse. The corpus contains 27,000 pseudonymized sentences annotated with 26 entities related to Social Determinants of Health (living, marital status, housing, descendants, employment, alcohol, smoking, drug abuse, physical activity). The corpus includes two tasks: nested name entity recognition (NER) and multi-label classification. ## 5.2 Private Tasks Technical Specialties Sorting This classification task has to assign the specialty of a medical of a medical report based on its transcription. The dataset consists of 7,356 French medical reports that have been manually annotated and equally sampled across 6 specialties: Psychiatry, Urology, Endocrinology, Cardiology, Diabetology, and Infectiology. Medical report structuration prescriptions (NER) The task seeks to identify named entities in a gold sample of 100 long medical reports obtained from French speech transcriptions. The named entities are annotated using the BIO format and fall into 12 classes: O, AGE, CITY, *DATE*, EMAIL, HOSPITAL, PHONE, DOSAGE, DURATION, FORM, *MEDICATION* and *POSOLOGY*. Medical report acute heart failure structuration (NER) This corpus contains 350 hospital stay reports (divided into 3,511 sentences) from Nantes University Hospital. The reports are annotated with 46 entity types related to the following clinical information: cause of chronic heart failure, triggering factor for acute heart failure, diabetes, smoking status, heart rate, blood pressure, weight, height, medical treatment, hypertension and left ventricular ejection fraction. Overall, the corpus contains 6,116 clinical entities. Acute heart failure (aHF) classification This task consists of the classification of hospital stays reports according to the presence or absence of a diagnostic of acute heart failure. This corpus consists of 1,639 hospital stays reports from Nantes university hospital, which are labeled as positive or negative to acute heart failure. ## 6 Results And Discussions As previously described, we evaluate the performance of our pre-trained language models proposed for the biomedical domain on a set of public and private NLP downstream tasks related to the medical domain. We first propose to analyze the results according to the different pre-training strategies used (Section 6.1) then to focus on the impact of the pre-training data, whether in terms of size or nature (Section 6.2). 
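Returning to the label flattening used above for the nested QUAERO annotations, the transformation (sort each token's entity labels alphabetically, then concatenate them into one tag) can be sketched as follows; the BIO-style tag names and the `+` separator are assumptions for illustration.

```python
def flatten_nested_labels(token_labels):
    """Collapse nested entity labels into a single tag per token by sorting them
    alphabetically and concatenating, so standard token classification applies."""
    return ["+".join(sorted(set(labels))) for labels in token_labels]

# e.g. flatten_nested_labels([["B-DISO", "B-ANAT"], ["O"]]) -> ["B-ANAT+B-DISO", "O"]
```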
Finally, we are interested in the generalization capacities of our domain-specific CamemBERT OSCAR 138 GB 40.89 35.22 35.13 81.90 79.12 80.13 87.98 91.66 89.35 99.32 99.09 99.20 CamemBERT OSCAR 4 GB 46.32 43.17 42.66 81.49 81.42 81.41 87.79 90.74 88.78 99.53 99.69 99.61 CamemBERT CCNET 4 GB 47.25 42.2 43.11 82.02 79.30 79.98 87.61 92.28 89.34 99.54 99.55 99.55 PubMedBERT 52.61 46.30 47.22 78.17 76.18 76.86 87.07 92.61 89.20 99.25 99.51 99.37 ClinicalBERT 50.11 44.15 44.70 80.13 75.92 77.12 87.04 92.14 88.77 98.58 98.62 98.58 BioBERT v1.1 49.37 47.25 46.01 79.69 78.51 79.00 **88.17** 91.80 89.38 98.59 99.03 98.80 DrBERT NACHOS*large* 55.29 46.66 48.22 81.33 81.25 81.25 87.99 **92.80 89.83** 99.82 **99.90 99.86** DrBERT NACHOS*small* 54.55 43.39 45.93 79.85 80.10 79.87 87.57 92.76 89.44 **99.85** 99.85 99.85 ChuBERT NBDW*small* **56.92** 47.46 49.01 81.03 **82.67** 81.56 87.76 92.63 89.58 99.76 **99.90** 99.83 ChuBERT NBDW*mixed* 54.62 47.81 **49.14** 82.23 81.71 **81.98** 87.42 92.36 89.30 99.81 99.82 99.81 CamemBERT NACHOS*small* 22.02 16.67 16.08 74.86 69.82 69.80 65.72 68.49 66.74 99.44 99.67 99.54 PubMedBERT NACHOS*small* 53.44 **48.21** 48.72 **83.06** 80.39 81.40 87.35 92.69 89.36 99.52 99.58 99.55 CamemBERT NBDW*small* 25.44 19.33 19.12 79.50 74.74 76.02 68.80 71.23 69.64 99.60 99.57 99.58 | aHF NER | aHF classification | NER Medical Report | Specialities Classification | | | | | | | | | |-----------|----------------------|----------------------|-------------------------------|----|----|----|----|----|----|----|----| | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | models by applying and comparing them on general domain NLP tasks (Section 6.3). Note that all the PLMs have been fine-tuned in the same way for all downstream tasks and all the reported results are obtained by averaging the scores from four runs. Performance on biomedical downstream tasks are reported in Tables 6 and 7 for respectively private and public tasks. For readability reasons, the first part of each table presents the existing baseline models results, the second part our specialized models trained from-scratch, and the last part our models using continual pretraining. ## 6.1 Impact Of Pre-Training Strategies As observed both in Tables 6 and 7, models pretrained completely from scratch (DrBERT NACHOS and ChuBERT NBDW) tend to produce the best results for both types of data sources and tasks (*i.e.* private and public). Indeed, considering the F1-score, they obtain the best results on all private tasks and on almost all public ones (5 tasks out of 7). The two public remaining tasks (MUSCADET T2 and QUAERO-MEDLINE) are then better handled using PubMedBERT NACHOS*small*, a model that has already been pre-trained on domainspecific data (biomedical English data) then further pre-trained with our French medical data (NACHOS*small*). We also observed that continual pre-training from domain generic models (CamemBERT NACHOS*small* or CamemBERT NBDW*small*) does not allow reaching the performance of the other specific models, neither of these two models reaching the first or second place (in terms of performance) on any task. Finally, the baseline models trained on generic data (CamemBERT OSCAR) and those trained on biomedical data in English (PubMedBERT, ClinicalBERT and BioBERT) remain competitive in few biomedical public tasks (CAS POS, FrenchMCQA or MUSCA-DET T2), while none of them are placed in first or second place on private tasks. 
This seems to highlight the difficulty of private tasks when non-matching data are used. ## 6.2 Effect Of Data Regarding the amount of data used for pre-training models (*small* vs. large or *mixed*), results show that, the larger the data are, the better the model performs, no matter the pre-training strategy or the source of data (private or public). However, the difference is very low for most tasks, with *small* systems often being ranked second behind large models, even though they contain half as much data. We notice a clear dominance of models that were pre-trained on web-based sources, specifically OSCAR and NACHOS, when applied to public tasks. Indeed, models relying on private NBDW data only achieve the best performance (in terms of F1-score) on the MUSCA-DET T1 task. This trend is not quite observed on private tasks, where NBDWbased models obtain more acceptable or even better performance when mixed with public biomedical data (ChuBERT NBDW*mixed*), as seen in Table 6. We believe this discrepancy is mainly due to the different nature of processed data. Finally, we observe that English-based models perform closely to the French-based CamemBERT model. This shows the usefulness of pre-training on domain specific data. For example, better results are obtained with continual pre-training of the PubMedBERT model with our specialized data in French (PubMedBERT NACHOS*small*), corroborating our hypothesis about the effectiveness of cross-language knowledge transfer. | MUSCA-DET T1 MUSCA-DET T2 | ESSAI POS | CAS POS | FrenchMedMCQA QUAERO-EMEA QUAERO-MEDLINE | | | | | | | | | | | | | | | | | |------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------|-----------|--------------------------------------------|-------------------------------|-------|----|----|----|----|----|----|---------|-----|----|----|----|----|----|----| | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | Hamming | EMR | P | R | F1 | P | R | F1 | | CamemBERT OSCAR 138 GB | 89.04 88.59 88.54 89.87 87.12 88.20 81.57 81.01 81.10 96.37 94.53 95.22 | 36.24 | 16.55 | 90.57 91.06 90.71 76.58 78.67 | 77.41 | | | | | | | | | | | | | | | | CamemBERT OSCAR 4 GB | 86.09 85.45 85.43 92.68 90.34 91.27 84.01 83.51 83.69 98.15 95.34 96.42 | 35.75 | 15.37 | 90.75 91.16 90.83 78.55 79.33 | 78.76 | | | | | | | | | | | | | | | | CamemBERT CCNET 4 GB | 91.12 89.91 90.33 93.10 90.42 91.38 85.60 85.63 85.42 98.19 96.75 97.33 | 34.71 | 14.41 | 90.31 90.59 90.33 78.06 78.11 | 77.61 | | | | | | | | | | | | | | | | PubMedBERT | 93.04 91.45 91.99 84.41 80.60 81.97 88.43 87.93 87.78 97.40 94.86 95.90 | 33.98 | 14.14 | 86.89 87.33 86.79 77.33 77.28 | 77.09 | | | | | | | | | | | | | | | | ClinicalBERT | 91.79 89.44 90.36 85.43 81.23 82.95 89.09 88.78 88.24 97.94 95.88 96.73 | 32.78 | 14.19 | 84.91 85.47 84.79 75.56 74.85 | 75.05 | | | | | | | | | | | | | | | | BioBERT 1.1 | 91.82 89.82 90.46 85.52 80.14 81.91 86.76 84.90 85.18 98.10 96.39 97.12 | 36.19 | 15.43 | 84.55 85.03 84.29 72.62 73.30 | 72.68 | | | | | | | | | | | | | | | | DrBERT NACHOSlarge | 92.10 90.27 91.04 94.97 90.41 92.24 90.96 89.19 89.75 97.37 94.49 95.65 | 36.66 | 15.32 | 91.93 92.52 92.09 77.85 78.54 | 77.88 | | | | | | | | | | | | | | | | DrBERT NACHOSsmall | 93.35 90.62 91.77 91.31 86.60 88.57 90.12 88.37 88.76 97.04 94.88 95.70 | 37.37 | 13.34 | 91.54 92.00 91.66 77.91 79.34 | 78.18 | | | | | | | | | | | | | | | | ChuBERT NBDWsmall | 94.88 90.79 92.23 94.77 90.27 
92.17 88.53 87.73 87.71 97.00 94.65 95.61 | 35.16 | 14.79 | 88.11 88.78 88.15 75.05 76.57 | 74.94 | | | | | | | | | | | | | | | | ChuBERT NBDWmixed | 94.39 91.93 92.73 94.22 90.02 91.71 86.36 85.50 85.73 97.77 95.30 96.35 | 34.58 | 12.21 | 90.36 90.94 90.52 78.61 79.32 | 78.63 | | | | | | | | | | | | | | | | CamemBERT NACHOSsmall 81.44 81.39 80.96 79.74 78.08 78.70 80.59 79.88 80.04 95.64 91.57 92.46 | 32.87 | 13.76 | 67.56 77.48 71.10 55.45 62.34 | 57.43 | | | | | | | | | | | | | | | | | PubMedBERT NACHOSsmall 92.51 91.49 91.53 94.95 92.55 93.62 84.73 83.80 83.85 97.82 96.12 96.81 | 35.88 | 15.21 | 90.97 91.27 91.03 82.03 81.71 | 81.73 | | | | | | | | | | | | | | | | | CamemBERT NBDWsmall | 82.35 81.59 81.57 78.14 76.38 77.12 79.44 79.79 79.25 95.98 92.11 93.18 | 27.73 | 11.89 | 53.44 73.11 61.75 48.71 61.33 | 53.05 | | | | | | | | | | | | | | | Table 7: Performance on public biomedical downstream tasks. Best model in bold and second is underlined. ## 6.3 Performance On General-Domain Tasks Table 8 gives the results obtained by all PLMs on general domain downstream tasks. These tasks come from Martin et al. (2020) who used them to evaluate the CamemBERT model. The first four are POS tagging tasks (GSD, SEQUOIA, SPOKEN and PARTUT), the last being a natural language inference task (XNLI). All results of our models decrease in performance on all tasks. The most important drop is for the natural language inference task, with a performance of ChuBERT NBDW*small* almost 13% lower than CamemBERT 138 GB. We also observe that the specialized models in English are as efficient as our biomedical models in French. It seems quite clear from the previous observations that specialized models are difficult to generalize to other tasks, but that specialized information captured in one language could transfer to another language. ## 7 Conclusion In this work, we proposed the first biomedical and clinical Transformer-based language models, based on RoBERTa architecture, for French language. An extensive evaluation study of these specific models has been performed on an aggregated collection of diverse private and public medical tasks. Our open-source DrBERT models improved the state of the art in all medical tasks against both French general model (CamemBERT) and English medical ones (BioBERT, PubMedBERT and ClinicalBERT). In addition, we showed that pre-training on constrained resources (4 GB) of web-crawled medical makes it possible to compete with, and even frequently surpass, models trained with specialized data from medical reports. Results also highlighted that continual pretraining on an existing domain-specific English model, here PubMedBERT, is a more viable solution than on a French domain-generalist model while targeting French biomedical downstream tasks. It needs to further investigate the performance of this approach using more data, similar to what we have done with DrBERT NACHOS*large*. The pre-trained models as well as the pretraining scripts3 have been publicly released online under a MIT open-source license. 
The main purpose of NACHOS dataset is to promote the development of robust NLP tools by the community, 3https://drbert.univ-avignon.fr/ | GSD | SEQUOIA | SPOKEN | PARTUT | XNLI | | |------------------------|-----------|----------|----------|--------|-------| | CamemBERT OSCAR 138 GB | 98.28 | 98.68 | 97.26 | 97.70 | 81.94 | | CamemBERT OSCAR 4 GB | 98.14 | 99.18 | 97.57 | 97.86 | 81.76 | | CamemBERT CCNET 4 GB | 98.18 | 98.92 | 97.20 | 97.92 | 81.26 | | PubMedBERT | 96.48 | 96.49 | 90.00 | 93.97 | 73.79 | | ClinicalBERT | 96.49 | 96.31 | 89.60 | 93.17 | 70.57 | | BioBERT v1.1 | 97.32 | 96.54 | 91.81 | 94.52 | 71.54 | | DrBERT NACHOSlarge | 96.94 | 98.05 | 95.92 | 96.54 | 72.18 | | DrBERT NACHOSsmall | 97.17 | 98.21 | 96.38 | 96.45 | 72.86 | | ChuBERT NBDWsmall | 96.45 | 97.38 | 94.90 | 95.83 | 69.00 | | ChuBERT NBDWmixed | 97.18 | 98.10 | 96.43 | 96.33 | 72.32 | | CamemBERT NACHOSsmall | 97.63 | 96.90 | 91.12 | 94.00 | 71.26 | | PubMedBERT NACHOSsmall | 97.41 | 98.71 | 95.54 | 97.01 | 77.35 | | CamemBERT NBDWsmall | 97.55 | 96.26 | 89.17 | 91.34 | 72.73 | Table 8: Performance on public domain-general downstream tasks. Best model in bold and second is underlined. so, we have decided to make the corpora available for academic research. ## 8 Ethical Considerations Concerning the risks and biases, all the freely available models pre-trained on NACHOS can supposedly be exposed to some of the concerns presented by the work of Bender et al. (2021) and Sheng et al. (2021) since some of the NACHOS sub-corpora quality might be lower than expected, specifically for non-governmental sources. When using a BERT-based biomedical language model, potential biases can be encountered including fairness, gendered language, limited representation and temporal correctness. ## 9 Limitations It is important to mention some limitations of our work. Firstly, it would be wise to evaluate the impact of the tokenizer on the performance of the models to ensure that this is not the main reason for the observed performance gains. Furthermore, we can not affirm in this study whether the medical domain transfer observed from English to French using continual pre-training on PubMedBERT can be generalized to other languages or other domains. Finally, it is possible that training a ChuBERT model with more diverse private clinical data and in a larger quantity could have brought notable performance gains on private tasks. A considerable amount of computational resources was used to conduct this study, since approximately 18,000 hours of GPU computation were used to create the 7 models presented here, as well as about 7,500 hours of GPU for debugging due to technical issues related to model configurations and poor performance, for a total of 25,500 hours. The total environmental cost, according to the Jean Zay supercomputer documentation4is equivalent to 6,604,500 Wh or 376.45 kg CO2eq based on the carbon intensity of the energy grid mention by BLOOM environmental cost study also made on Jean Zay (Luccioni et al., 2022). This makes the present study difficult to reproduce and to transpose to other languages when limited material resources are available. ## Acknowledgments This work was performed using HPC resources from GENCI-IDRIS (Grant 2022- AD011013061R1 and 2022-AD011013715) and from CCIPL (Centre de Calcul Intensif des Pays de la Loire). This work was financially supported by ANR AIBy4 (ANR-20-THIA-0011) and Zenidoc. ## References Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. 
Scibert: Pretrained language model for scientific text. In EMNLP. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In *Proceedings of the 2021 ACM* Conference on Fairness, Accountability, and Transparency, FAccT '21, page 610–623, New York, NY, USA. Association for Computing Machinery. Olivier Bodenreider. 2004. The unified medical language system (umls): Integrating biomedical terminology. *Nucleic acids research*, 32:D267–70. Casimiro Pio Carrino, Jordi Armengol-Estapé, Asier Gutiérrez-Fandiño, Joan Llop-Palao, Marc Pàmies, Aitor Gonzalez-Agirre, and Marta Villegas. 2021. Biomedical and clinical language models for spanish: On the benefits of domain-specific pretraining in a mid-resource scenario. Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: the muppets straight out of law school. *CoRR*, abs/2010.02559. Clément Dalloux, Vincent Claveau, Natalia Grabar, Lucas Emanuel Silva Oliveira, Claudia Maria Cabral Moro, Yohan Bonescki Gumiel, and Deborah Ribeiro Carvalho. 2021. Supervised learning for the detection of negation and of its scope in French and Brazilian Portuguese biomedical corpora. *Natural Language Engineering*, 27(2):181–201. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. *CoRR*, abs/1810.04805. Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, and Pierre Zweigenbaum. 2022. Re-train or Train from Scratch? Comparing Pre-training Strategies of BERT in the Medical Domain. In *Proceedings of* the 13th Language Resources and Evaluation Conference (LREC), pages 2626–2633, Marseille, France. European Language Resources Association. Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, and Prem Natarajan. 2022. Massive: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages. Natalia Grabar, Vincent Claveau, and Clément Dalloux. 2018. CAS: French Corpus with Clinical Cases. In Proceedings of the 9th International Workshop on Health Text Mining and Information Analysis (LOUHI), pages 1–7, Brussels, Belgium. Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing. ACM Transactions on Computing for Healthcare, 3(1):1–23. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. CoRR, abs/1808.06226. Yanis Labrak, Adrien Bazoge, Richard Dufour, Béatrice Daille, Pierre-Antoine Gourraud, Emmanuel Morin, and Mickael Rouvier. 2022. FrenchMedMCQA: A French Multiple-Choice Question Answering Dataset for Medical domain. 
In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI), Abou Dhabi, United Arab Emirates. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Alexandra Sasha Luccioni, Sylvain Viguier, and AnneLaure Ligozat. 2022. Estimating the carbon footprint of bloom, a 176b parameter language model. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suá rez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. CamemBERT: a Tasty French Language Model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 7203—-7219. Association for Computational Linguistics. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David García, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2017. Mixed precision training. *CoRR*, abs/1710.03740. Aurélie Névéol, Cyril Grouin, Jérémy Leixa, Sophie Rosset, and Pierre Zweigenbaum. 2014. The quaero french medical corpus : A ressource for medical entity recognition and normalization. Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: An evaluation of bert and elmo on ten benchmarking datasets. In Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019), pages 58–65. Elisa Terumi Rubel Schneider, João Vitor Andrioli de Souza, Julien Knafou, Lucas Emanuel Silva e Oliveira, Jenny Copara, Yohan Bonescki Gumiel, Lucas Ferro Antunes de Oliveira, Emerson Cabrera Paraiso, Douglas Teodoro, and Cláudia Maria Cabral Moro Barra. 2020. BioBERTpt - a Portuguese neural language model for clinical named entity recognition. In *Proceedings of the 3rd Clinical Natural Language Processing Workshop*, pages 65–72, Online. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2021. Societal biases in language generation: Progress and challenges. Manjil Shrestha. 2021. Development of a language model for medical domain. masterthesis, Hochschule Rhein-Waal. Jörg Tiedemann and Lars Nygaard. 2004. The OPUS corpus - parallel and free: http://logos.uio.no/ opus. In *Proceedings of the Fourth International* Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA). Hazal Türkmen, Oguz Dikenelli, Cenk Eraslan, and ˘ Mehmet Callı. 2022. Bioberturk: Exploring turkish biomedical language model development strategies in low resource setting. Thomas Vakili, Anastasios Lamproudis, Aron Henriksson, and Hercules Dalianis. 2022. Downstream task performance of BERT models pre-trained using automatically de-identified clinical data. 
In *Proceedings* of the Thirteenth Language Resources and Evaluation Conference, pages 4245–4252, Marseille, France. European Language Resources Association. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. Xi Yang, Aokun Chen, Nima PourNejatian, Hoo Chang Shin, Kaleb E. Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Anthony B. Costa, Mona G. Flores, Ying Zhang, Tanja Magoc, Christopher A. Harle, Gloria Lipori, Duane A. Mitchell, William R. Hogan, Elizabeth A. Shenkman, Jiang Bian, and Yonghui Wu. 2022. A large language model for electronic health records. *npj Digital Medicine*, 5(1):194. Yi Yang, Mark Christopher Siy UY, and Allen Huang. 2020. Finbert: A pretrained language model for financial communications. Yian Zhang, Alex Warstadt, Haau-Sing Li, and Samuel R. Bowman. 2020. When do you need billions of words of pretraining data? Hongyin Zhu, Hao Peng, Zhiheng Lyu, Lei Hou, Juanzi Li, and Jinghui Xiao. 2021. Travelbert: Pre-training language model incorporating domain-specific heterogeneous knowledge into a unified representation. ## A Appendix A.1 Vocabularies Inter-Coverage ![10_Image_0.Png](10_Image_0.Png) As we can see in Figure 3, despite having similar performances, some of the models do not share a lot of mutual vocabulary. ## A.2 Models Stability We observe during the evaluation phase that most of the models based on continual pre-training strategy from CamemBERT OSCAR 138 GB are suffering from bad consistency and stability during fine-tuning, which translates into fluctuation in performance between runs. ![10_image_1.png](10_image_1.png) We also notice during PubMedBERT NACHOS*small* pre-training that the model loss is globally stable during almost all the duration of the pre-training, until reaching the step 71,000, where the loss fall down until touching down zero at step 72,500. ![11_image_0.png](11_image_0.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 9 ✓ A2. Did you discuss any potential risks of your work? In the Section 8 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** The pre-trained models used as our baselines are presented in Section 2 and 4.3. The datasets used are listed in the Table 5. 
For the tools, we cite Huggingface library in Section 4.2 under the paragraph "Language modeling". Our publicly available artifacts are explicitly enumerate in the contributions of our introduction (Section 1) and conclusion (Section 7). Will be made available online once the paper accepted: - Our training scripts (available at the anonymized repository link in the paper) under MIT license. - The web crawled pre-training corpus called NACHOS on Zenodo and HuggingFace (currently private) under CC0 1.0 license. - Our models trained on NACHOS on HuggingFace (currently private) under MIT license. ✓ B1. Did you cite the creators of artifacts you used? We cite the used artifacts models during the Section 2 and 4.3. The datasets used are listed in the Table 5 and cited in the section 5.1 and 6.3. For the tools, we cite Huggingface library in Section 4.2 under the paragraph "Language modeling". ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The licenses are explicited at the end of the introduction (Section 1). B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? We used two corpora for our language models : The first is trained on crawled data, hand-picked from high quality web sources, and doesn't require any anonymization step. For our second model, it's trained on clinical data, and we specify in Section 3.2 that we used de-identified hospital stays reports and comply with GDPR and local authorities since we have official accreditations. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Coverage of domains are presented in Tables 2 and 3. Concerning languages, we only consider French in our study, as the title suggest. Demographic groups represented in the data are not explicited in the paper. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. The datasets used for evaluation and their splits are presented in the Table 5 for both used and created data. ## C ✓ **Did You Run Computational Experiments?** Section 4.2 under paragraph "Optimization & Pre-training". ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 
Yes, Section 4.2 under paragraphs "Model architecture" and "Optimization & Pre-training" for the GPUs used and number of parameters. End of the Section 9 for the total computational budget and infrastructure used, since it's one of the limitation of this article. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4.2 under paragraph "Optimization & Pre-training". ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We are averaging 4 runs for all the experiments presented in the Tables 6, 7 and 9. We also provide the Figure 2 in the Appendix to show the variability between the 4 runs of 4 of the tasks thanks to a box plot. ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We have used hundreds of RegEx rules specific to each data sources to clean both corpus. It's not relevant for the public models, since the datasets NACHOS, used for pre-training them, will be available online soon after acceptance notification. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
yuan-etal-2023-discriminative
Discriminative Reasoning with Sparse Event Representation for Document-level Event-Event Relation Extraction
https://aclanthology.org/2023.acl-long.897
Document-level Event Causality Identification (DECI) aims to extract causal relations between events in a document. It challenges the conventional sentence-level task (SECI) with the difficulty of long-text understanding. In this paper, we propose a novel DECI model (SENDIR) for better document-level reasoning. Different from existing works that build an event graph via linguistic tools, SENDIR does not require any prior knowledge. The basic idea is to discriminate event pairs in the same sentence from those spanning multiple sentences by assuming their different information density: 1) low density in the document suggests sparse attention to skip irrelevant information; our module 1 designs various types of attention for event representation learning to capture long-distance dependence. 2) High density in a sentence makes SECI relatively easy; module 2 uses different weights to highlight the roles and contributions of intra- and inter-sentential reasoning, and introduces supportive event pairs for joint modeling. Extensive experiments demonstrate great improvements from SENDIR and the effectiveness of various sparse attentions for document-level representations. Code will be released later.
# Discriminative Reasoning With Sparse Event Representation For Document-Level Event-Event Relation Extraction Changsen Yuan1**, Heyan Huang**1,∗ , Yixin Cao2**, Yonggang Wen**3 1Beijing Institute of Technology, Beijing, China 2Singapore Management University, Singapore 3Nanyang Technological University, Singapore {yuanchangsen, hhy63}@bit.edu.cn caoyixin2011@gmail.com, ygwen@ntu.edu.sg ## Abstract Document-level Event-Event Relation Extraction (DERE) aims to extract relations between events in a document. It challenges conventional sentence-level task (SERE) with difficult long-text understanding. In this paper, we propose a novel DERE model (SENDIR) for better document-level reasoning. Different from existing works that build an event graph via linguistic tools, SENDIR does not require any prior knowledge. The basic idea is to discriminate event pairs in the same sentence or span multiple sentences by assuming their different information density: 1) low density in the document suggests sparse attention to skip irrelevant information. Our module 1 designs various types of attention for event representation learning to capture long-distance dependence. 2) High density in a sentence makes SERE relatively easy. Module 2 uses different weights to highlight the roles and contributions of intra- and intersentential reasoning, which introduces supportive event pairs for joint modeling. Extensive experiments demonstrate great improvements in SENDIR and the effectiveness of various sparse attention for document-level representations. Codes will be released later. ## 1 Introduction Event-Event Relation Extraction (ERE) is the task of identifying the relation between two events in texts. As shown in Figure 1, for any pair of events1, e.g., (Services, *downtime*), it shall make the classifications for which relation type it holds. Clearly, event pairs may be in the same sentence (SERE) (Kadowaki et al., 2019a; Liu et al., 2020; Kadowaki et al., 2019b), or scattered across the entire document (DERE) (Phu and Nguyen, 2021). In practice, DERE can benefit a wider range of applications, such as knowledge graph construction (Chen et al., 2019) and future event forecasting ∗Corresponding author 1Event is defined as the trigger word in this area. ![0_image_0.png](0_image_0.png) Figure 1: Example of document-level ERE from EventStoryLine. Solid lines denote causal relations; some events and relations are omitted for clarity. The red boxes are indicator words for causal relations. (Hashimoto, 2019) but it remains challenging due to the difficulty in long-text understanding. In this paper, we propose to improve the reasoning ability among events spanning the entire document for DERE. Different from conventional methods, which build an event graph based on linguistic tools (Phu and Nguyen, 2021; Gao et al., 2019; Xu et al., 2023; Zeng et al., 2021), we focus more on the nature of document itself and do not rely on any prior knowledge. To do this, we highlight the following key questions: - How to capture events' dependence that may be far away? - Should we treat all event pairs equally considering the essential difference between SERE and DERE? To address them, we propose a novel DERE model that learns Sparse EveNt representations for Discriminating Intra- and intersentential Reasoning, namely **SENDIR**. Inspired by MAE (He et al., 2022), we observe a different information density between sentences and documents - for an event, most parts of the document are irrelevant, leading to a low information density. 
By contrast, the sentence has a high density and 16222 usually contains related words as causal indicators. As shown in Figure 1, *problem*3 causal −−−−→ *damage*3 in the third sentence has clear causal word (due to) 2, making the prediction much easier. While, for problem3*causal* −−−−→ *damage*4 across the third and fourth sentences, there is no such pattern. This motivates us the design two modules as follows. The goal of module 1 is to shorten the dependence distance among events to learn high-quality local and global context representations. The basic idea is to learn event-specific sentence embeddings as local features. Based on that, we further utilize sparse self-attention globally to skip irrelevant information. We have defined various types of attention masks to reflect specific dependence among sentences. In addition to conventional random sparse attention (Tay et al., 2021), we have also explored Narrative, Flashback, Global→, Global←, and Banded attention, to reflect specific language bias according to human writing habits. We name this module sparse event representation learning, while these sparse attentions shall also benefit other long-text understanding tasks. Module 2 aims to discriminate intra- and intersentential reasoning to help difficult cross-sentence events with relatively easy within-sentence-level events. As shown in Figure 1, it is easy to predict (*problem*3 causal −−−−→ *damage*3) with high confidence, which is part of the path *problem*3 causal −−−−→ damage3*causal* −−−−→ affected4*causal* −−−−→ *damage*4 for prediction of (problem3*causal* −−−−→ *damage*4). Thus, for each event pair, we take the outputs of module 1 as intra-sentential features. We then enhance them with selected supportive event pairs to constitute possible reasoning chain, and utilize Gated Attention Unit (GAU) (Hua et al., 2022) to conduct inter-sentential reasoning. Finally, we combine these two types of features with varying weights to differentiate their confidence and roles in ERE. Thus, we can improve the prediction of event pairs across sentences without hurting the performance of the pairs within a sentence. We summarize the contributions as follows: - We propose to discriminate intra- and intersentential reasoning considering the essential difference between SERE and DERE. - We propose a novel DERE model SENDIR without any prior knowledge or external tools. 2We use subscript to denote the sentence index. - Experimental results on three public datasets demonstrate the effectiveness of SENDIR. Further studies also verify our proposed sparse attentions and discriminative reasoning in long-text understanding. ## 2 Related Work 2.1 Sentence-Level Ere Early ERE methods focus on SERE and exploit various textual features to represent the relation, such as syntactic features (Venkatachalam et al., 2021; Ning et al., 2019), causal patterns (Riaz and Girju, 2010; Hidey and McKeown, 2016), and statistical features of causal information (Mirza et al., 2014; Hu et al., 2017; Tan et al., 2022). Following the success of pre-trained language models (PLMs) (Devlin et al., 2019; Liu et al., 2019), recent works tend to enhance PLMs with external knowledge, so that SERE models can obtain high-quality contextualized event representations for classification. Hashimoto (2019) exploited the cause and effect entities in Wikipedia and the multilingual inter-wiki links as weak supervision. Zuo et al. 
(2021a) introduced external causal statements and adapted a contrastive transfer strategy to incorporate them into a target model. Cao et al. (2021) utilized ConceptNet (Speer et al., 2017) to learn latent structure of event causal relation, and Zuo et al. (2021b, 2020) designed a knowledge-guided method to generate new samples based on several knowledge sources, such as WordNet (Miller, 1998) and VerbNet (Schuler, 2005). Although the SERE has achieved great success, events usually scatter the entire document in real scenarios. Therefore, DERE has attracted more and more research attention. ## 2.2 Document-Level Ere Compared with SERE, DERE has a wider range of applications but is more challenging due to the difficult long-text understanding. Thus, researchers tend to consider event-event structures for global reasoning. Gao et al. (2019) used integer linear programming (ILP) to model causal information by designing constraints and modifying the objective function to encourage causal structures and discourage the opposite. To build the connections among the events, Phu and Nguyen (2021) designed various document-level graphs and used graph convolutional networks to learn structure-preserved features. Chen et al. (2022) proposed to build an event pair relational graph and converted DERE to ![2_image_0.png](2_image_0.png) the node classification, which captures the global interactions and alleviates the spurious correlation between events. Man et al. (2022) proposes to model the important context sentences to identify relations. However, to capture long-distance information, these approaches typically construct an additional document-level graph to assist global reasoning. The graph introduces unnecessary noise and decreases efficiency. Instead, we explore the information in documents without requiring prior knowledge or external tools. Certainly, our method can be further improved by incorporating structural knowledge. We leave it in the future. ## 3 Methodology SENDIR aims to learn high-quality event representations to facilitate both intra- and inter-sentential reasoning. As shown in Figure 2, our framework has four main components: **Encoder** to encode a document into vector, **Sparse Event Representation Learning** (SER) that further learns event representations based on document embeddings, Discriminating Intra- and inter sentential Reasoning (DIR) that conducts joint inference based on each pair of event representations, and **Classification** to make final predictions. ## 3.1 Encoder We utilize the BERT (Devlin et al., 2019) with BiLSTM to encode the document for long documents (more than 512 tokens). Given a document D with n sentences and N events, D = [X1, X2*, . . . , X*n], and the sentence (Xi = [x1, x2*, . . . , x*l]) contains l words, the encoder is expressed as follows: $$\begin{array}{l}\mbox{\bf H}=\mbox{Bi-LSTM}([\mathbf{s}_{1},\ldots,\mathbf{s}_{i},\ldots,\mathbf{s}_{n}])\\ \mathbf{s}_{i}=\mbox{BERT}(\mathbf{X}_{i})\end{array},\tag{1}$$ $\blacksquare$ where H = [h1, h2*, . . . ,* hn∗l] is the embedding of token, and hi ∈ R d. For event ei,p, where i denotes i th event and p denotes the index of sentence, we define its embeddings as ei,p = hk, if the event mention word is xk, the position of the event in the document is k. ## 3.2 Sparse Event Representation Learning SER explores various types of attention to capture long-distance dependence between sentences for high-quality document representation, which will be used to enhance the event representation. 
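Before detailing SER, the encoder of Section 3.1 (Eq. 1) can be summarized with a minimal PyTorch-style sketch. It is an illustrative reading of the equation rather than the authors' released code: the class name, the checkpoint identifier, and the hidden size are our assumptions (the paper only specifies BERT-base followed by a BiLSTM).

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DocumentEncoder(nn.Module):
    """Sketch of Eq. (1): sentence-wise BERT followed by a document-level BiLSTM."""

    def __init__(self, plm_name="bert-base-uncased", hidden=768):
        super().__init__()
        self.bert = AutoModel.from_pretrained(plm_name)
        # BiLSTM output size = 2 * (hidden // 2) = hidden, matching d in the paper.
        self.bilstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)

    def forward(self, sentences):
        # sentences: list of n tokenized sentences (dicts of input_ids / attention_mask,
        # each with batch dimension 1).
        token_states = [self.bert(**s).last_hidden_state.squeeze(0)   # s_i = BERT(X_i)
                        for s in sentences]
        tokens = torch.cat(token_states, dim=0)        # (n*l, d) document tokens
        H, _ = self.bilstm(tokens.unsqueeze(0))        # H = Bi-LSTM([s_1, ..., s_n])
        return H.squeeze(0)                            # rows are h_1, ..., h_{n*l}

# An event embedding e_{i,p} is then simply the row of H at the trigger's
# token position k in the document (e_{i,p} = h_k).
```

The rows of H indexed by trigger positions give the event embeddings that SER builds on.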
In specific, SER first learns event-specific sentence embedding as the local context. Based on them, we then apply sparse self-attention to skip irrelevant information as global contexts. Particularly, we introduce various types of long-distance dependency assumptions. Finally, we define event representations based on local and global contexts. We highlight the following differences and advantages of SER from the previous document representation: (1) The global contexts taking sentences as nodes shortens the token-level distance between events. (2) Well-designed sparse attention mechanism will bring useful language bias that further alleviates the difficulty of modeling long-distance dependence. Event-specific Sentence Embedding. Given event and its sentence embeddings (i.e., ei,p and sp), we use the event as a query and compute event-specific sentence embedding ci as follows: $$\left\{\begin{array}{l}{{{\mathbf{w}}_{i}=\mathrm{Softmax}({\mathbf{s}}_{p}{\mathbf{e}}_{i,p})}}\\ {{{\mathbf{c}}_{i}={\mathbf{w}}_{i}{\mathbf{s}}_{p}}}\end{array}\right.,$$ where ciis defined as local context of event ei,p, and wiis the trainable parameters. Sparse Attention Mask. We apply selfattention technique to fuse information from the above event-specific sentence embeddings C1 = [c1, c2*, . . . ,* cN ] ∈ R N×das the global context. Note that N is the number of events rather than sentences, because each event has its own sentence embedding, even if two events are located in the same sentence. Inspired by Child et al. (2019), we design the sparse mask matrix to deal with the long-distance dependency issue. We have designed six types of masks: Global→, Global←, Random, Banded, Narrative, and Flashback. Note that the assumptions behind them are not always true and we will give a discussion later. Their intuitive impression is shown in Figure 2. Formally, we define the mask matrix as G ∈ R N×N , where each element Gi,j ∈ {0, 1} denotes if the information of the j th event in the q th sentence ej,q can be seen by the i th event in the p th sentence ei,p. We define the following masks: - **Global**→. We assume the events in the first several sentences (e.g., the first two sentences) are core topics of the document and should see all of the other events: $$G_{i,j}=\left\{\begin{array}{l l}{{1,}}&{{i f\;p<3,\;o r\;q<3}}\\ {{0,}}&{{o t h e r w i s e}}\end{array}\right..$$ - **Global**←. We assume the events in the last several sentences (e.g., the last two sentences) are conclusion topics of the document and should see all of the other events: $$G_{i,j}=\left\{\begin{array}{l l}{{1,}}&{{i f\ p>N-3,\ o r\ q>N-3}}\\ {{0,}}&{{o t h e r w i s e}}\end{array}\right.\tag{4}$$ - **Random.** Random sparse masks are usually used to increase the ability of non-local interactions—we randomly sample 20% matrix element as 0, and others are 1. - **Banded.** We assume that the related information is narrowed down into neighbor sentences only. 
That is, each event can only see the events in neighbor sentences: $$G_{i,j}=\left\{\begin{array}{l l}{{1,}}&{{i f\ |p-\ q|<3}}\\ {{0,}}&{{o t h e r w i s e}}\end{array}\right.\ .$$ $$({\mathfrak{H}})$$ $$(2)$$ - **Narrative.** We assume that the events are mostly described in narrative order, so that the former event can see the latter one: $${\bf G}_{i,j}=\left\{\begin{array}{l l}{{1,}}&{{i f\ q-p>0}}\\ {{0,}}&{{o t h e r w i s e}}\end{array}\right.\ .$$ $$(6)$$ - **Flashback.** We assume that events are sequentially written, and thus the latter one should see the former one: $${\bf G}_{i,j}=\left\{\begin{array}{l l}{{1,}}&{{i f\ p-q>0}}\\ {{0,}}&{{o t h e r w i s e}}\end{array}\right..$$ $$\quad(7)$$ Discussion of sparsity assumptions. Due to the complexity of language, we have designed the above six types of attention masks to capture different linguistic biases. Although these masks have patterns suitable for specific settings, they also have unsuitable cases where the underlying assumptions shall fail. However, all of them are designed based on our core assumption - the information density in documents is lower than that in sentences. Thus, we capture long-distance dependence via the sparse attention mechanism. In the experiment (Section 4.7), we demonstrate that even the random attention mask can improve document-level performance, and other types of attention masks (e.g., Narrative) have achieved further improvements by introducing additional language bias. Event Representation. Given local context C and attention mask G, we now use selfattention (Vaswani et al., 2017) to obtain global context, which is further combined with local context as event representations. The global context C0 = [c 0 1 , c 0 2 , . . . , c 0 N ] ∈ R N×dcan be computed as follows: C0 = αC (8) $$(3)$$ where α is the sparse self-attention matrix and each element is defined as follows: $$\left\{\begin{array}{l l}{{a t t_{i,j}=\frac{(\mathbf{c}_{i}\mathbf{W}_{i})(\mathbf{c}_{j}\mathbf{W}_{j})^{T}}{\sqrt{d}}*G_{i,j}}}\\ {{\alpha_{i,j}=\frac{e x p(a t t_{i,j})}{\sum_{z\in N-1}e x p(a t t_{i,z})}}}\end{array},\right.\tag{9}$$ where d is the dimension of hidden states for scaling, Wi and Wj are the trainable parameters. To capture different representation subspaces, we also use multi-head attention (Vaswani et al., 2017). Now, we define the sparse contextualized event representation as follows: $$\mathbf{e}_{i}^{'}=\mathrm{ReLU}([\mathbf{e}_{i},\mathbf{c}_{i},\mathbf{c}_{i}^{'}]\mathbf{W}_{e}),\qquad(10)$$ where We ∈ R 3d×dis the trainable parameters, and e 0 i ∈ R d. Given each pair of events (ei, ej ), we define their representation as follows3: $$\mathbf{v}_{i,j}=\mathrm{ReLU}([\mathbf{e}_{i}^{\prime}+\mathbf{e}_{j}^{\prime},|\mathbf{e}_{i}^{\prime}-\mathbf{e}_{j}^{\prime}|]\mathbf{W}_{c}),\quad(11)$$ where Wc ∈ R 2d×dis the trained parameters. ## 3.3 **Discriminating Intra- And Inter-Sentential** Reasoning Section 3.2 defines event pair representations based on local and global contexts. In this section, DIR takes them as intra-sentential features, indicating that they have not considered the event pairs in other sentences to form a reasoning chain. To further obtain inter-sentential features for each pair of events, we first select supportive event pairs for each event pair and use GAU (Hua et al., 2022) for information fusion. 
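Before continuing with DIR, the sparse masks and the masked self-attention of Section 3.2 (Eqs. 3–9) can be made concrete with a short sketch. The helper below follows the thresholds in the equations (first/last two sentences, |p − q| < 3, a random 20% drop); variable names, the 0-based indexing, the self-loop safeguard, and the use of a standard masked softmax (Eq. 9 writes the mask as a multiplication on the logits) are our own assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def build_mask(sent_idx, kind="narrative"):
    """Sketch of the N x N event-level masks G of Eqs. (3)-(7).

    sent_idx: (N,) long tensor holding the sentence index of each event
    (0-based here; the paper's thresholds are written with 1-based indices).
    """
    p, q = sent_idx.unsqueeze(1), sent_idx.unsqueeze(0)      # broadcast to (N, N)
    n_sent = int(sent_idx.max()) + 1
    N = sent_idx.size(0)
    if kind == "global_fwd":        # Global->: events in the first two sentences see everything
        G = (p < 2) | (q < 2)
    elif kind == "global_bwd":      # Global<-: events in the last two sentences see everything
        G = (p >= n_sent - 2) | (q >= n_sent - 2)
    elif kind == "random":          # drop a random 20% of the entries
        G = torch.rand(N, N) > 0.2
    elif kind == "banded":          # neighbouring sentences only, |p - q| < 3
        G = (p - q).abs() < 3
    elif kind == "narrative":       # earlier events attend to later ones
        G = (q - p) > 0
    elif kind == "flashback":       # later events attend to earlier ones
        G = (p - q) > 0
    else:
        raise ValueError(kind)
    # Practical safeguard (our addition): self-loops keep every attention row non-empty.
    return (G | torch.eye(N, dtype=torch.bool)).float()

def sparse_attention(C, G, W_q, W_k):
    """One head of Eqs. (8)-(9): masked scaled dot-product attention, C' = alpha C."""
    d = C.size(-1)
    att = (C @ W_q) @ (C @ W_k).T / d ** 0.5
    att = att.masked_fill(G == 0, float("-inf"))   # blocked pairs receive zero weight
    alpha = F.softmax(att, dim=-1)
    return alpha @ C                               # global context C'
```

With these event and pair representations (Eqs. 10–11) in place, DIR proceeds as described next.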
Then, DIR combines the two types of features with different weights to differentiate the two types of reasoning: the relatively easy intra-sentential cases can help cross-sentence relation identification without loss of performance. First, instead of using all event pairs as supports, we assume that only the pairs sharing at least one common event can contribute to the reasoning chain. For example, given four events (a, b, c, d) and the relation between (a, c) to predict, the event pairs (a, b) and (b, c) are clearly related, but (b, d) is clearly irrelevant. By stacking more layers, our proposed model can implicitly deal with longer reasoning chains, as a chain of any length can be decomposed into several shorter chains. Take the chain a −→ b −→ c −→ d for predicting the event pair (a, d) as an example: we will use a −→ c and c −→ d as supportive evidence, while a −→ c is itself supported as illustrated above. That is, the longer chain is decomposed into two sub-chains, modeled using multiple layers, and optimized jointly. Based on this assumption, we build a set of supportive event pairs for the query (ei, ej ): $\mathbf{T}^{1}=[\mathbf{v}_{i,j},\mathbf{v}_{i,1},\ldots,\mathbf{v}_{N,j}]$. Note that we include the representation of the query event pair vi,j . Next, we utilize GAU to conduct reasoning over the supportive set as follows: $$\left\{\begin{array}{l l}{{\bf T}^{2}=({\bf U}\odot{\bf AV}){\bf W}_{o}}\\ {{\bf U}={\bf T}^{1}{\bf W}_{u},{\bf V}={\bf T}^{1}{\bf W}_{v},{\bf Z}={\bf T}^{1}{\bf W}_{z}}\\ {{\bf A}=({\rm ReLU}(({\bf Z}{\bf W}_{q})({\bf Z}{\bf W}_{k})^{T}+b))^{2}}\end{array}\right.,\tag{12}$$ where $\mathbf{W}_{o},\mathbf{W}_{u},\mathbf{W}_{v},\mathbf{W}_{z},\mathbf{W}_{q},\mathbf{W}_{k}$, and $b$ are trainable parameters, and $\mathbf{T}^{2}=[\mathbf{v}_{i,j}^{\prime},\mathbf{v}_{i,1}^{\prime},\ldots,\mathbf{v}_{N,j}^{\prime}]$ are the output event pair representations enhanced by reasoning chains. We take $\mathbf{v}_{i,j}^{\prime}$ as the inter-sentential reasoning features of the query (ei, ej ). Now we combine the two types of features with different weights. The basic idea is that event pairs within the same sentence are relatively easy to predict with high confidence, e.g., causal indicator words (such as due to or *lead to*) in Figure 1 provide clear patterns. We thus leverage intra-sentential features to facilitate the event pairs spanning different sentences. To avoid harming the easier predictions, we assign higher weights to intra-sentential features if the event pair is within the same sentence. By contrast, we assign higher weights to inter-sentential features for events from different sentences to highlight inter-sentential reasoning. Finally, the query event pair representation for the relation between (ei, ej ) is defined as follows: $$\mathbf{o}=\left\{\begin{array}{l l}\mathbf{v}_{i,j}+\beta_{1}\mathbf{v}_{i,j}^{\prime},&if\ p-q<0\ (\overrightarrow{inter})\\ \mathbf{v}_{i,j}+\beta_{2}\mathbf{v}_{i,j}^{\prime},&if\ p-q=0\ (intra)\\ \mathbf{v}_{i,j}+\beta_{3}\mathbf{v}_{i,j}^{\prime},&if\ p-q>0\ (\overleftarrow{inter})\end{array}\right.,\tag{13}$$ where β1, β2, β3 are the weights that highlight different types of features for different event distributions. We determine these hyper-parameters by heuristic experiments. Note that we separate the two cases of events in different sentences: −−−→ *inter* and ←−−− *inter*.
−−−→ *inter* indicates that the event pairs are located in narrative order, while ←−−− *inter* denotes a flashback order. The separation considers the different attention assumptions in Section 3.2, which captures various language biases. In experiments, we indeed found such a bias that ←−−− *inter* has a high probability of negative relation, we thus set β3 = 0 and β1 = 0.8, β2 = 0.2 heuristically, but this may be varying in different scenarios. ## 3.4 Classification Given the final representation of an event pair, we use the linear function to predict relations as follows: | Model (%) | Intra-sentence | Inter-sentence | Intra+Inter | | | | | | | |--------------------------------|------------------|------------------|---------------|------|------|------|------|------|------| | P | R | F1 | P | R | F1 | P | R | F1 | | | LSIN (Cao et al., 2021) | 47.9 | 58.1 | 52.5 | - | - | - | - | - | - | | KnowDis (Zuo et al., 2020) | 39.7 | 66.5 | 49.7 | - | - | - | - | - | - | | LR+ (Gao et al., 2019) | 37.0 | 45.2 | 40.7 | 25.2 | 48.1 | 33.1 | 27.9 | 47.2 | 35.1 | | LIP (Gao et al., 2019) | 38.8 | 52.4 | 44.6 | 35.1 | 48.2 | 40.6 | 36.2 | 49.5 | 41.9 | | KMMG (Liu et al., 2020) | 41.9 | 62.5 | 50.1 | - | - | - | - | - | - | | LearnDA (Zuo et al., 2021b) | 42.2 | 69.8 | 52.6 | - | - | - | - | - | - | | RichGCN (Phu and Nguyen, 2021) | 49.2 | 63.0 | 55.2 | 39.2 | 45.7 | 42.2 | 42.6 | 51.3 | 46.6 | | ERGO (Chen et al., 2022) | 49.7 | 72.6 | 59.0 | 43.2 | 48.8 | 45.8 | 46.3 | 50.1 | 48.1 | | SENDIR | 65.8 | 66.7 | 66.2 | 33.0 | 90.0 | 48.3 | 37.8 | 82.8 | 51.9 | Table 1: Main results on the EventStoryLine. The best results are in **bold** and the second-best results are underlined. Intra-sentence denotes that the event pair is in the same sentence, and inter-sentence denotes that the event pair is in different sentences. $${\mathbf{V}}+b),$$ ## Yˆ = Σ(Ow + B), (14) where W and b are the trainable parameters, yˆ is the probability of being positive, and σ is an activation function. For training, we adopt crossentropy as the loss function: L = −Pei,ej (1 − y) log(1 − yˆ) + y log(ˆy) (y is the golden). We use dropout to prevent overfitting. ## 4 Experiments 4.1 Datasets And Metrics To demonstrate the performance of our model, we evaluate the model on two domains three datasets. EventStoryLine4(Mostafazadeh et al., 2016) and Causal-TimeBank5(Mirza and Tonelli, 2014) are event causal relation extraction (RE) dataset, and MATRES6(Ning et al., 2018) is event temporal RE dataset. And we use Precision (P), Recall (R), and F1-score (F1) as evaluation metrics. EventStoryLine annotates 258 documents, 22 topics, 4, 316 sentences, 5, 334 event mentions, 7, 805 intra-sentential event pairs, and 46, 521 intersentential event pairs. Following (Gao et al., 2019), we put them in order based on their topic IDs. **Causal-TimeBank** (Causal-TB) annotates 184 documents, 6, 813 events, and 7, 608 event pairs. **MATRES** annotates 275 documents for four temporal relations, i.e., BEFORE, AFTER, EQUAL, and VAGUE. ## 4.2 Parameter Settings $$(14)$$ We choose the most widely used BERT-base as basic PLMs, to avoid exhaustive parameter tuning. The learning rate is set to 1e−5for pretraining and 1e−4for others. We optimize our model with AdamW. We conduct the grid search to tune hyperparameters: the size of embedding is in {64; 128; 256; 512; 768}, where bold font denotes the best setup. The batch size for pre-trained model is set to 2. The dropout rate is set to 0.4. 
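To connect the pieces above, the sketch below ties together the discriminative weighting of Eq. (13), the classifier of Eq. (14) (a dropout + linear layer followed by a sigmoid), and the optimization settings just listed. The β values follow the heuristic choice reported in Section 3.3 (β1 = 0.8, β2 = 0.2, β3 = 0); the function names and the way parameters are grouped for the two learning rates are our assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

# Heuristic weights from Section 3.3: beta1 = 0.8, beta2 = 0.2, beta3 = 0.
BETA_FWD_INTER, BETA_INTRA, BETA_BWD_INTER = 0.8, 0.2, 0.0

def fuse_pair_features(v, v_chain, p, q):
    """Eq. (13): weight the chain-enhanced (inter-sentential) features by event positions."""
    if p < q:                                  # narrative-order cross-sentence pair
        return v + BETA_FWD_INTER * v_chain
    if p == q:                                 # same-sentence pair
        return v + BETA_INTRA * v_chain
    return v + BETA_BWD_INTER * v_chain        # flashback-order cross-sentence pair

class PairClassifier(nn.Module):
    """Eq. (14): dropout + linear layer with a sigmoid over the fused pair features."""
    def __init__(self, d, dropout=0.4):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.linear = nn.Linear(d, 1)

    def forward(self, o):                      # o: (batch, d) fused pair representations
        return torch.sigmoid(self.linear(self.drop(o))).squeeze(-1)

def make_optimizer(pretrained_encoder, other_modules):
    """AdamW with 1e-5 for the PLM and 1e-4 for the rest (Section 4.2);
    the parameter grouping is our assumption."""
    other_params = [p for m in other_modules for p in m.parameters()]
    return torch.optim.AdamW([
        {"params": pretrained_encoder.parameters(), "lr": 1e-5},
        {"params": other_params, "lr": 1e-4},
    ])

criterion = nn.BCELoss()   # binary cross-entropy over the positive-class probability
```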
## 4.3 Baseline We compare SENDIR with state-of-the-art methods for Event Causal RE and Event Temporal RE. Event Causal RE. (1) **KMMG** (Liu et al., 2020) that proposes a mention masking generalization and use external knowledge databases; (2) KnowDis (Zuo et al., 2020) that investigates a data augmentation to solve the data lacking; (3) LSIN (Cao et al., 2021) that employ ConceptNet to capture the latent causal relational structure; (4) LearnDA (Zuo et al., 2021b) that augments data to solve the data lacking; (5) LR+ and LIP (Gao et al., 2019) that models rich causal structures via designing constraints and objection function; (6) RichGCN (Phu and Nguyen, 2021) that builds multi-level graphs to capture structure-preserved features; (6) **ERGO** (Chen et al., 2022) that designs an event relational graph and converts the event causal identify to a node classification framework. Event Temporal RE. (1) **DEER** (Han et al., 2020) that constructs many training samples to simulate the machine reading comprehension for event temporal understanding; (2) **SMTL** (Ballesteros et al., Model (%) **P R F1** KnowDis 36.6 55.6 44.1 RichGCN 39.7 56.5 46.7 LearnDA 41.9 **68.0** 51.9 ERGO 58.4 60.5 59.4 SENDIR **65.2** 57.7 **61.2** DEER - - 79.3 SMTL - - 81.6 TIMERS 81.1 84.6 82.3 SCS-EERE 78.8 **88.5 83.4** SENDIR **81.2** 85.6 83.3 | Causal-TB MATRES | |--------------------| 2020) that extract the important text based on event pairs to identify relations. (3) **TIMERS** (Mathur et al., 2021) that uses rhetorical discourse features, temporal arguments, and syntactic features to extract the relation information. (4) **SCS-EERE** (Man et al., 2022) that seeks to identify the most important context sentences to identify the temporal relation. ## 4.4 Overall Performance Table 1 and 2 show the overall performance on EventStoryLine, Causal-TB, and MATRES, respectively. We can see that: (1) SENDIR achieves better F1 scores on EventStoryLine and CausalTB, it also has a competitive result on MATRES, which demonstrates the effectiveness and generalization ability of our model. (2) On the MATRES, SENDIR is slightly lower than SCS-EERE. Because event temporal RE is particularly sensitive to direction between events. (3) All models perform better on intra-sentence than inter-sentence in Table 1. This is consistent with our claim that intra-sentence is easier to identify. (4) Particularly, SENDIR has much higher precision on intrasentence. Because the discriminative reasoning scheme alleviates the negative impacts of more difficult cross-sentence reasoning. (5) On intersentence setting, the improvements are mainly from higher recall. We attribute this to the enhanced long-distance modeling ability and the supportive query set - it tends to find relation clues from broader contexts and other event pairs. ## 4.5 Ablation Study To further analyze SENDIR, we also conduct an ablation analysis to illustrate the effectiveness of our main modules. We show the results of the Model (F1 %) **Intra Inter Intra+Inter** SENDIR 66.2 **48.3 51.9** w/o SER 60.9 46.1 49.5 w/o **Sparse Att** 61.6 48.1 50.9 w/o Event Repre **66.9** 46.2 50.2 w/o DIR 64.0 44.9 48.5 w/o **Selection** 66.0 46.5 50.5 w/o **Weight** (β) 63.9 45.7 49.3 ablation study in Table 3 7. SER. We examined the impacts of SER in Table 3. w/o Sparse Att, **w/o Event Repre**, and w/o SER denote that we gradually remove the key designs in Section 3.2: remove sparse attention mask, use local context only as event representations, and remove the entire SER module. 
(1) w/o SER. The performance becomes sharply poor without SER, especially on intra-sentence. Specifically, the experimental results are reduced by 5.3%/2.2%/2.4% F1 on intra-sentence, intersentence, and intra+inter. This demonstrates that SER can capture high-quality document representation, and intra-sentence event pairs rely more on high quality event representation, so intra-sentence drops more. **(2) w/o Sparse Att.** The experimental results are reduced. Specifically, the sparse attention mask has a more significant impact on intrasentence, and this sparsity is the key to improving the quality of the document representation. We will do further analysis of multiple attention masks in Section 4.7. **(3) w/o Event Repre.** The main performance is the decline in inter-sentence, while intra-sentence is even slightly up. We attribute the reason that the inter-sentence relies more on global context and the intra-sentence relies more on local context. Moreover, the number of inter-sentence is much more than intra-sentence. Therefore, when inter-sentence drops, results of inter+inter drops, even though intra-sentence arises. DIR. In Table 3, we can also observe the following insights. Note that w/o DIR, **w/o Selection**, and w/o Weight denote the deletion of our entire DIR module, taking all other event pairs as supports, and regarding intra- and inter-sentential reasoning the same by using the same weight 1. **(1) w/o DIR.** Compared with SER, removed DIR mainly brings a 7Intra and inter indicate intra-sentence and inter-sentence setting, respectively. ![7_image_1.png](7_image_1.png) 45 50 55 60 65 70 ![7_image_0.png](7_image_0.png) decrease on inter-sentence, which is consistent with the method's motivation and validates the DIR's effectiveness. **(2) w/o Selection.** The experimental results are reduced. The major reason is that considering all event pairs as candidates will increase the noisy information. **(3) w/o Weight.** The drop on intra-sentence is very significant. Because of the lack of distinction in cross-sentence reasoning, the over-reasoning of simple event pairs leads to poor results. And using different weights facilitates simple tasks and global enhancements. We will follow up with a detailed discussion in Section 4.6. ## 4.6 Effects Of Weights As shown in Figure 3, we show the performance of Intra+Inter with different values for β1 ( −−−→ *inter*) and β2 (*intra*). The darker the color, the higher the F1 value. We did not include β3 due to the serious bias in datasets (β3 = 0 always leads to better results), which makes further investigation trivial. The darker the color, the higher the F1 value. We can find that: (1) The scores in the bottom right part is better than in other parts, where the weight of inter-sentence increases and the weight of intra-sentence decreases. This agrees with our assumption that inter-sentential event pairs rely more on cross-sentence reasoning than intra-sentential event pairs. (2) In general, the performance of the right part is better than the left. There are two reasons: first, as the weight of inter-sentence increases, we highlight more cross-sentence reasoning, which improves the performance of intersentence; second, there are far more inter-sentence than intra-sentence, and the improvements of intersentential results will significantly improve the overall experimental performance. (3) The subdiagonal elements indicate that intra-sentence and inter-sentence have the same weight. Clearly, the performance is unsatisfactory. 
The reason is that inter-sentential event pairs are usually difficult to predict and have lower confidence than intrasentential event pairs. Thus, treating them equally negatively impacts the easier sentence-level tasks. ## 4.7 Effects Of Sparse Attention Mask To investigate the impacts of various sparse attention masks on the SER, we report the results using different sparse attention masks: Narrative, Flashback, Global→, Global←, Random, and Banded. From the Figure 4, we can find that: (1) On intrasentence, these sparse attention masks have similar results except for Global→. This result is consistent with the previous results that event pairs rely more on local contexts rather than long-distance dependence on global contexts. (2) Random is unexpectedly good, indicating there is numerous redundant information in the document, and the sparse mask matrix can mitigate the effect of noise. (3) Narrative has achieved the best performance, which reflects a language bias from human writing habits - always talk about the main topic at first. ## 4.8 Case Study As shown in Figure 5, we present a case study to better understand the effect of SENDIR compared to the baseline model (i.e., BERT). We can notice that: BERT and SENDIR can identify the intra-sentential event pair (dead *causal* −−−−→6.1 *magnitude quake*) and some easy inter-sentential event pairs (dead *causal* −−−−→6.1 *magnitude earthquake* and 6.1 magnitude quake *causal* −−−−→ *rescue*). However, for some complex event pairs that span multiple sentences, SENDIR can employ DIR to infer [0] Indonesia earthquake: 24 **dead** and over 200 injured as 6.1 magnitude **quake** hits Aceh province … [2] Soldiers and police are leading **rescue** operations in Indonesia after a 6.1 **magnitude** earthquake hit Aceh province leaving 24 dead and over 200 injured. [3] The **quake** struck yesterday afternoon …[4] Twelve people were killed and 70 others were injured … [5] … 10 people were killed, | killed and 70 others were injured … [5] … 10 people were killed, 140 were injured and about 1500 houses and buildings … Event Pair GT BERT SENDIR dead 0, 6 . 1 magnitude quake 0 YES dead 0, quake 3 YES dead 0, 6 . 1 magnitude earthquake 2 YES 6 . 1 magnitude quake 0, rescue 2 YES 6 . 1 magnitude quake 0, killed 4 YES 6 . 1 magnitude quake 0, injured 5 YES … … … … | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Figure 5: The case study of our proposed SENDIR and BERT models on EventStoryLine, where GT denotes ground truth. Red numbers are the sentence numbers. causal relations, but BERT fails, such as *dead* causal −−−−→ *quake* and 6.1 magnitude quake *causal* −−−−→ injured. These demonstrate the significant advantages of SENDIR in dealing with inter-sentential event pairs to identify hard-to-identify causal relations by cross-sentence reasoning. ## 5 Conclusion In this paper, we exploit a novel Discriminative Reasoning with Sparse Event Representation for DERE. It can learn high-quality event representation and facilitate inter-sentential reasoning for document-level understanding. 
Experimental results show that our method is effective and significantly better than competitive baselines, improving inter-sentence cases without harming intrasentence event pairs. The extensive analysis also provides interesting insights about various language biases for sparse long-text representation learning. In the future, we will combine various sparse assumptions for high-quality document representations and incorporate graph reasoning. ## 6 Limitation The limitations of SENDIR include the following two points: (1) It has not extended to documentlevel entity-centric relations tasks. Our work is event-centric, and future work extends it with entity-centric cases. Document-level entity-centric RE needs to consider multiple mentions of an entity and different relations in different directions of the same entity pair. (2) It does not bring in external commonsense knowledge. Knowledge can be used to enrich events and improve the accurate ERE. ## Acknowledgements We appreciate the comments from anonymous reviewers which will help further improve our work.This study is supported by National Natural Science Foundation of China (No.U19B2020), Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant, and Beijing Institute of Technology Southeast Academy of Information Technology. ## References Miguel Ballesteros, Rishita Anubhai, Shuai Wang, Nima Pourdamghani, Yogarshi Vyas, Jie Ma, Parminder Bhatia, Kathleen R. McKeown, and Yaser Al-Onaizan. 2020. Severing the edge between before and after: Neural architectures for temporal ordering of events. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 5412–5417. Association for Computational Linguistics. 6-0-dead 35-0-6.1. 13-3-quake 22-6-quake 7-2-6.1 6-35 6-13 6-22 6-7 35-0-6.1. 15-4-killed 33-2-rescue 17-5-killed 29-6-lanslides 8-2-dead 18-5-injured 23-6-evacuated 19-5-damage 12-2-injured Pengfei Cao, Xinyu Zuo, Yubo Chen, Kang Liu, Jun Zhao, Yuguang Chen, and Weihua Peng. 2021. Knowledge-enriched event causality identification via latent structure induction networks. In *Proceedings of ACL/IJCNLP*, pages 4862–4872. Association for Computational Linguistics. Meiqi Chen, Yixin Cao, Kunquan Deng, Mukai Li, Kun Wang, Jing Shao, and Yan Zhang. 2022. ERGO: event relational graph transformer for document-level event causality identification. *CoRR*, abs/2204.07434. Zi-Yuan Chen, Chih-Hung Chang, Yi-Pei Chen, Jijnasa Nayak, and Lun-Wei Ku. 2019. Uhop: An unrestricted-hop relation extraction framework for knowledge-based question answering. In *Proceedings of NAACL-HLT*, pages 345–356. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. *CoRR*, abs/1904.10509. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*, pages 4171–4186. Association for Computational Linguistics. Lei Gao, Prafulla Kumar Choubey, and Ruihong Huang. 2019. Modeling document-level causal structures for event causal relation identification. In *Proceedings* of NAACL-HLT, pages 1808–1817. Association for Computational Linguistics. Rujun Han, Xiang Ren, and Nanyun Peng. 2020. DEER: A data efficient language model for event temporal reasoning. *CoRR*, abs/2012.15283. Chikara Hashimoto. 2019. Weakly supervised multilingual causality extraction from wikipedia. 
In *Proceedings of EMNLP-IJCNLP*, pages 2986–2997. Association for Computational Linguistics. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. 2022. Masked autoencoders are scalable vision learners. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 15979–15988. IEEE. Christopher Hidey and Kathy McKeown. 2016. Identifying causal relations using parallel wikipedia articles. In *Proceedings of ACL*. The Association for Computer Linguistics. Zhichao Hu, Elahe Rahimtoroghi, and Marilyn A. Walker. 2017. Inference of fine-grained event causality from blogs and films. In Proceedings of the Events and Stories in the News Workshop@ACL 2017, Vancouver, Canada, August 4, 2017, pages 52–58. Association for Computational Linguistics. Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc V. Le. 2022. Transformer quality in linear time. *CoRR*, abs/2202.10447. Kazuma Kadowaki, Ryu Iida, Kentaro Torisawa, JongHoon Oh, and Julien Kloetzer. 2019a. Event causality recognition exploiting multiple annotators' judgments and background knowledge. In *Proceedings* of EMNLP-IJCNLP, pages 5815–5821. Association for Computational Linguistics. Kazuma Kadowaki, Ryu Iida, Kentaro Torisawa, JongHoon Oh, and Julien Kloetzer. 2019b. Event causality recognition exploiting multiple annotators' judgments and background knowledge. In *Proceedings* of EMNLP-IJCNLP, pages 5816–5822, Hong Kong, China. Association for Computational Linguistics. Jian Liu, Yubo Chen, and Jun Zhao. 2020. Knowledge enhanced event causality identification with mention masking generalizations. In *Proceedings of IJCAI*, pages 3608–3614. ijcai.org. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Hieu Man, Nghia Trung Ngo, Linh Ngo Van, and Thien Huu Nguyen. 2022. Selecting optimal context sentences for event-event relation extraction. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, pages 11058– 11066. AAAI Press. Puneet Mathur, Rajiv Jain, Franck Dernoncourt, Vlad I. Morariu, Quan Hung Tran, and Dinesh Manocha. 2021. TIMERS: document-level temporal relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 524–533. Association for Computational Linguistics. George A Miller. 1998. *WordNet: An electronic lexical* database. MIT press. Paramita Mirza, Rachele Sprugnoli, Sara Tonelli, and Manuela Speranza. 2014. Annotating causality in the TempEval-3 corpus. In *Proceedings of EACL)*, pages 10–19, Gothenburg, Sweden. Association for Computational Linguistics. Paramita Mirza and Sara Tonelli. 2014. An analysis of causality between events and its relation to temporal information. In *COLING 2014, 25th International* Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, August 2329, 2014, Dublin, Ireland, pages 2097–2106. ACL. Nasrin Mostafazadeh, Alyson Grealish, Nathanael Chambers, James F. 
Allen, and Lucy Vanderwende. 2016. Caters: Causal and temporal relation scheme for semantic annotation of event structures. In Proceedings of the Fourth Workshop on Events, EVENTS@HLT-NAACL 2016, San Diego, California, USA, June 17, 2016, pages 51–61. Association for Computational Linguistics. Qiang Ning, Sanjay Subramanian, and Dan Roth. 2019. An improved neural baseline for temporal relation extraction. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 6202– 6208. Association for Computational Linguistics. Qiang Ning, Hao Wu, and Dan Roth. 2018. A multiaxis annotation scheme for event temporal relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1318–1328. Association for Computational Linguistics. Minh Tran Phu and Thien Huu Nguyen. 2021. Graph convolutional networks for event causality identification with rich document-level structures. In *Proceedings of NAACL-HLT*, pages 3480–3490. Association for Computational Linguistics. Mehwish Riaz and Roxana Girju. 2010. Another look at causality: Discovering scenario-specific contingency relationships with no supervision. In Proceedings of ICSC, pages 361–368. IEEE Computer Society. Karin Kipper Schuler. 2005. *VerbNet: A broadcoverage, comprehensive verb lexicon*. University of Pennsylvania. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In *Proceedings of AAAI*, pages 4444– 4451. AAAI Press. Fiona Anting Tan, Ali Hürriyetoglu, Tommaso Caselli, ˘ Nelleke Oostdijk, Tadashi Nomoto, Hansi Hettiarachchi, Iqra Ameer, Onur Uca, Farhana Ferdousi Liza, and Tiancheng Hu. 2022. The causal news corpus: Annotating causal relations in event sentences from news. *arXiv preprint arXiv:2204.11714*. Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. 2021. Synthesizer: Rethinking self-attention for transformer models. In Proceedings of ICML, volume 139 of *Proceedings* of Machine Learning Research, pages 10183–10192. PMLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Kritika Venkatachalam, Raghava Mutharaju, and Sumit Bhatia. 2021. SERC: syntactic and semantic sequence based event relation classification. In 33rd IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2021, Washington, DC, USA, November 1-3, 2021, pages 1316–1321. IEEE. Wang Xu, Kehai Chen, and Tiejun Zhao. 2023. Document-level relation extraction with path reasoning. *ACM Trans. Asian Low Resour. Lang. Inf. Process.*, 22(4):1–14. Shuang Zeng, Yuting Wu, and Baobao Chang. 2021. SIRE: separate intra- and inter-sentential reasoning for document-level relation extraction. In *Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6,* 2021, volume ACL/IJCNLP 2021 of *Findings of ACL*, pages 524–534. Association for Computational Linguistics. Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng, and Yuguang Chen. 2021a. 
Improving event causality identification via selfsupervised representation learning on external causal statement. In *Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online* Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of *Findings of ACL*, pages 2162–2172. Association for Computational Linguistics. Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng, and Yuguang Chen. 2021b. Learnda: Learnable knowledge-guided data augmentation for event causality identification. In *Proceedings of ACL/IJCNLP*, pages 3558–3571. Association for Computational Linguistics. Xinyu Zuo, Yubo Chen, Kang Liu, and Jun Zhao. 2020. Knowdis: Knowledge enhanced data augmentation for event causality detection via distant supervision. In *Proceedings of COLING*, pages 1544–1550. International Committee on Computational Linguistics. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 6 ✓ A2. Did you discuss any potential risks of your work? Section 5 and Section 6 ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? No response. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response. ## C ✓ **Did You Run Computational Experiments?** Section 4 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4.2 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4 ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
lu-etal-2023-facilitating
Facilitating Fine-grained Detection of Chinese Toxic Language: Hierarchical Taxonomy, Resources, and Benchmarks
https://aclanthology.org/2023.acl-long.898
The widespread dissemination of toxic online posts is increasingly damaging to society. However, research on detecting toxic language in Chinese has lagged significantly due to limited datasets. Existing datasets lack fine-grained annotations, such as the toxic type and expressions with indirect toxicity. Such fine-grained annotations, together with lexical knowledge, are crucial for accurately detecting the toxicity of posts, which has been a challenge for researchers. To tackle this problem, we facilitate the fine-grained detection of Chinese toxic language by building a new dataset with benchmark results. First, we devise Monitor Toxic Frame, a hierarchical taxonomy for analyzing toxic types and expressions. Then, we build a fine-grained dataset, ToxiCN, that includes both direct and indirect toxic samples. ToxiCN is based on an insult lexicon containing implicit profanity. We further propose a benchmark model, Toxic Knowledge Enhancement (TKE), which incorporates lexical features to detect toxic language. We demonstrate the usability of ToxiCN and the effectiveness of TKE through a systematic quantitative and qualitative analysis.
## Facilitating Fine-Grained Detection Of Chinese Toxic Language: Hierarchical Taxonomy, Resources, And Benchmarks Junyu Lu, Bo Xu, Xiaokun Zhang, Changrong Min, Liang Yang∗ , Hongfei Lin School of Computer Science and Technology, Dalian University of Technology, China (dutljy,kun,11909060)@mail.dlut.edu.cn,(xubo,liang,hflin)@dlut.edu.cn ## Abstract Disclaimer*: The samples presented by this paper may be considered offensive or vulgar.* The widespread dissemination of toxic online posts is increasingly damaging to society. However, research on detecting toxic language in Chinese has lagged significantly. Existing datasets lack fine-grained annotation of toxic types and expressions, and ignore the samples with indirect toxicity. In addition, it is crucial to introduce lexical knowledge to detect the toxicity of posts, which has been a challenge for researchers. In this paper, we facilitate the finegrained detection of Chinese toxic language. First, we build MONITOR TOXIC FRAME, a hierarchical taxonomy to analyze toxic types and expressions. Then, a fine-grained dataset TOXICN is presented, including both direct and indirect toxic samples. We also build an insult lexicon containing implicit profanity and propose Toxic Knowledge Enhancement (TKE) as a benchmark, incorporating the lexical feature to detect toxic language. In the experimental stage, we demonstrate the effectiveness of TKE. After that, a systematic quantitative and qualitative analysis of the findings is given. 1 ## 1 Introduction More and more people have acquired information from social media platforms where posts containing toxic language are also rampant. Toxic language is viewed as a rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion (Dixon et al., 2018). Due to its negative impact on individuals and society, toxic language has been rapidly recognized as an increasing concern (Silva et al., 2016). Recently, researchers have used techniques of natural language processing to detect toxic language, making great progress in many languages (AlKhamissi et al., 2022; Mou et al., 2020; Cao and Lee, 2020; Tekiroglu et al., 2020; Founta et al., 2018; Zhou et al., 2021a; Mathew et al., 2021; Caselli et al., 2021; Hanu and Unitary team, 2020). In contrast, the relevant research on Chinese toxic language detection has lagged significantly (Jahan and Oussalah, 2023). There are two key issues that have been overlooked. First, existing studies (Deng et al., 2022; Jiang et al., 2022; Zhou et al., 2022) lack a fine-grained annotation of textual toxic types, resulting in hate speech being conflated with general offensive language. Compared to hate speech, general offensive language does not attack groups with special social attributes, and it is just used to emphasize emotions in many cases (Wang et al., 2014). Like Exp. 1 in Table 1, the insult "*fuck*" can be considered as a modal particle to express surprise. Since there is no equivalence between general offensive language and hate speech, it is crucial to determine their boundary conditions (Davidson et al., 2017). In addition, most studies on toxic Chinese language only concentrate on detecting direct and explicit bias and offense. And they lose sight of indirect expressions including implicit hatred (e.g., stereotype and irony) (ElSherief et al., 2021) and reporting experiences of discrimination (Chiril et al., 2020). 
Due to the absence of direct swear words, these indirect toxic samples are considerably harder to filter (ElSherief et al., 2021). To further illustrate the distinctions among these expressions, a few examples are listed in Table 1. Meanwhile, compared to English, Chinese has richer variants of profanity with implicit toxic meaning (Zhang, 2010; Sohn and Lee, 2019), which brings challenges to research on toxic language detection. However, existing insult lexicons fail to cover these terms. An example is "*fairy*" in Exp. 3, which itself is a positive word and is used here to implicitly attack women.

| Exp. | Post | Toxic Type | Targeted Group | Expression |
|------|------|------------|----------------|------------|
| 1 | 我靠!我们居然输了。 (What the fuck! I can't believe we lost!) | Offensive | - | Explicitness |
| 2 | 我一看老黑就想吐。 (I feel like throwing up when I look at n*ggas.) | Hate | Racism | Explicitness |
| 3 | 小仙女的事你少管。 (Keep your nose out of the fairy's business.) | Hate | Sexism | Implicitness |
| 4 | 我的朋友说河南人经常偷井盖。 (My friend said Henan people often steal manhole covers.) | Hate | Regional Bias | Reporting |

Table 1: Illustration of different categories of toxic comments, including *general offensive language* and each hate speech expression (*explicitness*, *implicitness*, and *reporting*).

Due to the significance of lexical knowledge for detecting toxic language (Wiegand et al., 2021; Hartvigsen et al., 2022), it is important to construct an insult lexicon containing implicit toxic terms. To fill these gaps, we facilitate fine-grained detection of Chinese toxic language. To distinguish hate speech from general offensive language and analyze the expressions of samples, we first introduce MONITOR TOXIC FRAME, a hierarchical taxonomy. Based on the taxonomy, the posts are progressively divided into diverse granularities as follows: **(I) Whether Toxic, (II) Toxic Type (general offensive language or hate speech), (III) Targeted Group, (IV) Expression Category (explicitness, implicitness, or reporting)**. After taking several measures to alleviate the bias of annotators, we then conduct a fine-grained annotation of posts, including both direct and indirect toxic samples. The resulting TOXICN dataset contains 12k comments covering *sexism*, *racism*, *regional bias*, and *anti-LGBTQ*. For the convenient detection of toxic language, we construct an insult lexicon covering attacks on different targeted groups. It contains not only explicit profanities but also implicit words with toxic meanings, such as ironic metaphors (e.g., "*fairy*"). To exploit the lexical feature, we further present a migratable benchmark of Toxic Knowledge Enhancement (TKE), enriching the text representation. In the evaluation phase, several benchmarks with TKE are utilized to detect toxic language, demonstrating its effectiveness. After that, we analyze the experimental results in detail and offer our suggestions for identifying toxic language. The main contributions of this work are summarized as follows:

- We present a hierarchical taxonomy, MONITOR TOXIC FRAME, to progressively explore the toxic types and expressions of samples at diverse granularities.
- Based on the taxonomy, we propose TOXICN, a fine-grained dataset of Chinese toxic language. It divides hate speech from general offensive language and includes samples with not only direct offense but also indirect expressions.
- We present an insult lexicon and design a Toxic Knowledge Enhancement benchmark incorporating the lexical feature. We evaluate its performance at different levels and conduct an exhaustive analysis.

## 2 Related Work

Toxic Language Detection. Toxic language detection is a high-profile task in the field of natural language processing. Recently, most researchers have utilized deep learning methods based on pre-trained language models to tackle this problem (Mou et al., 2020; Cao and Lee, 2020; Tekiroglu et al., 2020; Mathew et al., 2021; Zhou et al., 2021a; AlKhamissi et al., 2022). Two re-trained BERT (Devlin et al., 2019) variants, HateBERT (Caselli et al., 2021) and ToxicBERT (Hanu and Unitary team, 2020), have been specifically proposed to detect toxic language. Davidson et al. (2017); Founta et al. (2018); Mathew et al. (2021) attempted to distinguish hate speech from offensive language. ElSherief et al. (2021); Hartvigsen et al. (2022) explored benchmarks for implicit and latent hate speech. Chiril et al. (2020); Pérez-Almendros et al. (2020) considered the detection of reports of hate speech and discriminatory behavior. In the construction of toxic language datasets, some studies focused on improving the reliability of the annotation process to mitigate the subjective bias of annotators (Waseem and Hovy, 2016; Ross et al., 2016; Zeinert et al., 2021; Fortuna et al., 2022).

Linguistic Research of Chinese Toxic Language. Chinese toxic language has been researched extensively in language studies and sociolinguistics. Zhang (1994) analyzed Chinese vernacular novels and summarized the common rhetorical methods of offensive language. According to Wang and Liu (2009); Li et al. (2020), insults are more easily expressed through variants. Due to the lack of morphological markers, authors often make sentences based on language sense and consensus (Zhang, 2004), expressing hatred more concisely compared to Indo-European languages (Zhang, 2005). Zhang (2010); Sohn and Lee (2019) compared insults in English and Chinese and discovered that Chinese has a richer variety of profanity due to its unique culture and linguistics. These linguistic features bring challenges to Chinese toxic language detection. Recently, some Chinese toxic language datasets have been constructed (Deng et al., 2022; Jiang et al., 2022; Zhou et al., 2022). However, they fail to separate hate speech from general offensive language and overlook toxic samples containing indirect expressions. Besides, they lack an insult lexicon covering implicit profanities. In this work, we fill these gaps to facilitate fine-grained detection of Chinese toxic language. Table 2 compares these studies with our TOXICN.

| Work | Source | Scope | Size | Balance | Toxic Type | Expression Category | Implicit Profanity |
|------|--------|-------|------|---------|------------|---------------------|--------------------|
| COLD (Deng et al., 2022) | Zhihu, Weibo | Offensive | 37,480 | 48.1% | ✓ | | |
| SWSR (Jiang et al., 2022) | Weibo | Hate speech | 8,969 | 34.5% | ✓ | | |
| CDial-Bias-Utt (Zhou et al., 2022) | Zhihu | Hate speech | 13,394 | 18.9% | | | |
| CDial-Bias-Ctx (Zhou et al., 2022) | Zhihu | Hate speech | 15,013 | 25.9% | | | |
| TOXICN (ours) | Zhihu, Tieba | Offensive and hate speech | 12,011 | 53.8% | ✓ | ✓ | ✓ |

Table 2: Comparison between existing Chinese toxic language datasets and TOXICN.

## 3 Dataset Construction

## 3.1 Overview

In this section, we describe the construction of the TOXICN dataset. We first introduce the process of data collection and filtering. Then, MONITOR TOXIC FRAME is presented as the guideline for labeling.
After adopting several measures to mitigate biases in the process of labeling, we implement a hierarchical fine-grained annotation based on the frame. The Inter-Annotator Agreement (IAA) of each individual granularity is explored. Finally, related statistics of TOXICN are shown.

## 3.2 Data Collection And Filtering

To avoid a high degree of homogenization of the data, we crawl published posts from two public online media platforms, *Zhihu* and *Tieba*. Both platforms are representative of local users in China and have active communities around specific topics. Due to the filtering mechanisms of the websites, the proportion of posts containing toxic language is relatively sparse (Deng et al., 2022). Thus, we first limit the scope of the crawled data to several sensitive topics, including gender, race, *region*, and LGBTQ, which are easily debated on the Internet. Then, we list some keywords for each topic and utilize them to extract a total of 15,442 comments without replies. We remove samples whose text is too brief to have actual semantics, such as phrases consisting of only inflections and auxiliaries. Dirty data is also deleted, including duplicated samples and irrelevant advertisements. In the end, 12,011 comments are retained2. In the data cleaning stage, we focus on normalizing the unique forms of expression in web text, following Ahn et al. (2020). Extra newline characters and spaces in the original text are also deleted. To prevent privacy leakage, we desensitize the data, filtering out *@USERs*, attached links, and pictures. Because they may carry important emotional information (Mathew et al., 2021), we retain emojis for toxic detection.

2We also attempted to crawl posts from *Weibo*, as in Deng et al. (2022); however, due to the filtering mechanism, there is no guarantee that sufficient samples can be obtained under each topic, such as "*race*" and "*LGBTQ*", resulting in a relatively homogeneous crawl. Therefore, we finally chose *Zhihu* and *Tieba* as our data sources.

## 3.3 Data Annotation

## 3.3.1 Monitor Toxic Frame

To determine the downstream subtasks and establish the finalized guidelines for labeling, a standardized taxonomy is needed. Here we build a hierarchical taxonomy called MONITOR TOXIC FRAME (Figure 1). ![3_image_0.png](3_image_0.png) It has three levels covering four diverse aspects, supporting the identification of toxic content and further in-depth analysis. The specific implementations are as follows:

Toxic Identification. The first level of our framework is a binary annotation to determine whether the comment contains toxic language, which is the foundation of subsequent labeling. In this work, general offensive language and hate speech are highlighted.

Toxic Type Discrimination. The second level is to distinguish general offensive language from hate speech. Based on Waseem and Hovy (2016) and Fortuna and Nunes (2018), we list several criteria for the identification of hate speech: 1) attacking specific groups, or 2) inciting others to hate minorities, or 3) creating prejudice, rejection, or disgust for minorities based on stereotypes and distorted facts, or 4) using sarcasm or humor to ridicule groups, even though the publisher may not be malicious. In contrast, general offensive language does not insult targets with specific social attributes (Davidson et al., 2017).

Targeted Group and Expression Type Detection. In the third level, we further explore the targeted group and expression type of toxic language. If the content contains hate speech, the targeted group is specified, covering sexism, racism, *regional bias*, and *anti-LGBTQ*. Since multiple groups may be attacked in a single text, this task is categorized as multi-label classification. Meanwhile, we determine the categories of toxic expressions, comprising *explicitness*, *implicitness*, and *reporting*. Among these expressions, 1) explicitness is obviously offensive, inciting, or prejudiced against minority groups; 2) implicitness refers to subtle or even humorous expressions without strong exclusion, such as microaggressions and stereotyping (Hartvigsen et al., 2022); and 3) reporting only documents or condemns experiences of attack and discrimination against minorities, with the publisher not expressing hatred or prejudice (Chiril et al., 2020). As the expression category of general offensive language is necessarily explicit, we focus on the expressions in hate speech. This granularity is set as multi-class classification.
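To make the hierarchical labels concrete, the following minimal sketch shows how one annotated comment could be represented in code. The field names mirror the sample structure later described in Appendix A (ID, Platform, Topic, Text, Toxic, Hate, [Group], [Expression]); the class, the validation helper, and the exact label encodings are our illustrative assumptions, not the released data format.

```python
from dataclasses import dataclass, field
from typing import List

# Level III and IV label inventories of MONITOR TOXIC FRAME.
GROUPS = ["sexism", "racism", "regional bias", "anti-LGBTQ"]
EXPRESSIONS = ["explicitness", "implicitness", "reporting"]

@dataclass
class AnnotatedComment:
    id: int
    platform: str                      # "Zhihu" or "Tieba"
    topic: str                         # "gender", "race", "region", or "LGBTQ"
    text: str
    toxic: int                         # level I: 1 if the comment is toxic, else 0
    hate: int                          # level II: 1 for hate speech, 0 for general offensive language
    groups: List[str] = field(default_factory=list)       # level III: empty unless hate == 1
    expressions: List[str] = field(default_factory=list)  # level IV: empty unless hate == 1

    def check(self) -> None:
        # Lower levels are only annotated when the higher level permits them.
        if not self.toxic:
            assert not self.hate and not self.groups and not self.expressions
        if self.hate:
            assert all(g in GROUPS for g in self.groups)
            assert all(e in EXPRESSIONS for e in self.expressions)
```

Under this representation, a non-toxic comment carries `toxic=0` with the remaining toxic fields empty, while a hateful comment attacking several groups lists multiple entries in `groups`.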
## 3.3.2 Mitigating Bias

The subjective biases of the annotators negatively impact the quality of the dataset (Waseem and Hovy, 2016). Therefore, it is important to mitigate these biases during the design and construction of the annotations. For this purpose, we adopt the following measures: We first guarantee the diversity of annotators in terms of background information, including gender, age, race, region, and education. All participants major in linguistics and have been systematically trained. The demographics of the annotators are shown in Table 3. Then, we make a progressive analysis of the toxic content contained in the crawled posts and initially determine the labeling rules for the various granularities. After a couple of iterations of small-scale annotation tests and discussions of edge cases, the final criteria are established.

| Characteristic | Demographics |
|----------------|--------------|
| Gender | 5 male, 4 female |
| Age | 5 age < 25, 4 age ≥ 25 |
| Race | 6 Asian, 3 others |
| Region | From 5 different provinces |
| Education | 2 BD, 4 MD, 3 Ph.D. |

Table 3: Annotator demographics.

| Topic | N-Tox. | Tox. | Off. | Hate | H-exp. | H-imp. | H-rep. | Total | Avg. L |
|-------|--------|------|------|------|--------|--------|--------|-------|--------|
| Gender | 1,805 | 2,153 | 316 | 1,837 | 1,055 | 693 | 89 | 3,958 | 35.26 |
| Race | 1,602 | 2,084 | 229 | 1,855 | 1,041 | 711 | 103 | 3,686 | 36.93 |
| Region | 1,222 | 1,148 | 82 | 1,066 | 172 | 292 | 602 | 2,370 | 40.26 |
| LGBTQ | 921 | 1,076 | 189 | 887 | 469 | 299 | 119 | 1,997 | 44.96 |
| Total | 5,550 | 6,461 | 816 | 5,645 | 2,737 | 1,995 | 913 | 12,011 | 38.37 |

Table 4: Statistics of TOXICN for each topic. The toxic category is broken down into Off., Hate, H-exp., H-imp., and H-rep.

## 3.3.3 Annotation Procedure

The annotation procedure consists of two steps: pseudo labeling and main manual annotation. Meanwhile, the initial construction of the insult lexicon is carried out.

Pseudo Labeling. To reduce the burden of manual annotation, we retrieve the samples containing insults, most of which are obviously toxic. Specifically, we first build an original lexicon of explicit profanity words, integrating two existing profanity resources: HateBase3, the world's largest collaborative and regionalized repository of multilingual hate speech, and SexHateLex4 (Jiang et al., 2022), a large Chinese sexism lexicon. Then, an iterative approach is employed to match profanity-laced comments using regular expressions. Swear words contained in the matched samples but absent from the lexicon are further collected. We assign a toxic pseudo-label to each sentence containing insults. After several iterations, the remaining samples are directly pseudo-labeled as non-toxic. The statistics illustrate that this method is simple and effective, correctly separating about 50% of the toxic samples in TOXICN. See Table D3 in Appendix D for a more detailed report.
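As a rough sketch of this iterative matching step, the snippet below compiles the current insult lexicon into one regular expression, pseudo-labels every comment that matches as toxic, and returns the matched terms so newly observed swear words can be added before the next round. The function names and the escape-and-alternate pattern construction are our simplification; they are not the exact matching rules used to build TOXICN.

```python
import re
from typing import Dict, Iterable, List, Set

def build_pattern(lexicon: Set[str]) -> "re.Pattern[str]":
    # Longer terms first so a full insult is matched before any of its substrings.
    terms = sorted(lexicon, key=len, reverse=True)
    return re.compile("|".join(re.escape(t) for t in terms))

def pseudo_label(comments: Iterable[str], lexicon: Set[str]) -> List[Dict]:
    pattern = build_pattern(lexicon)
    labeled = []
    for text in comments:
        hit = pattern.search(text)
        labeled.append({
            "text": text,
            "pseudo_toxic": 1 if hit else 0,        # unmatched samples default to non-toxic
            "matched_term": hit.group(0) if hit else None,
        })
    return labeled

# One iteration: pseudo-label with the current lexicon, manually collect unseen
# swear words from the matched comments, extend the lexicon, and repeat.
# lexicon = {...}                      # seed terms from HateBase and SexHateLex
# results = pseudo_label(comments, lexicon)
```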
Main Manual Annotation. Based on MONITOR TOXIC FRAME, we implement the main annotation of TOXICN. Most samples pseudo-labeled as toxic are directly categorized as general offensive language or explicit toxic language. Afterwards, because of low-frequency insult variants and implicit toxic expressions, the remaining samples pseudo-labeled as non-toxic have to be re-annotated in a hierarchical manner. Meanwhile, the implicit insults contained in these samples are added to the previous profanity list. We utilize the open source text annotation tool Doccano5 to facilitate the labeling process. Each comment is labeled by at least three annotators, and a majority vote is then used to determine the final label. After annotation, we explore the Inter-Annotator Agreement (IAA) of TOXICN; Fleiss' Kappa for each hierarchy is shown in Table 5.

| If Toxic | Toxic Type | Targeted | Expression |
|----------|------------|----------|------------|
| 0.62 | 0.75 | 0.65 | 0.68 |

Table 5: Fleiss' Kappa for different granularities.

## 3.4 Data Description

In the data analysis phase, we first describe the dataset by the topic of the comments. The basic statistics of TOXICN are shown in Table 4. We note that there is a sample imbalance between the different categories of toxic samples. Specifically, the *Off.* class represents only 6.8% of the overall dataset. Because the data distribution reflects the true situation of the platforms (Mathew et al., 2021), we do not apply additional treatment to the imbalance. In addition, since a single comment may attack multiple groups, we further record the sample size of hate speech against the various target categories. From Table 6, we can see that the distribution of expressions differs for each group. For example, most samples containing regional bias are reporting, which is uncommon in the other categories. More statistical details are listed in Appendix D.

| Group | H-exp. | H-imp. | H-rep. | Total |
|---------|----------|----------|----------|---------|
| Sexism | 1,259 | 887 | 156 | 2,302 |
| Racism | 1,149 | 660 | 65 | 1,874 |
| RGN. B. | 295 | 384 | 610 | 1,289 |
| Anti-L+ | 671 | 383 | 121 | 1,075 |

Table 6: Sample size of various hate expressions for each attacked group label. *RGN. B.* refers to regional bias and *Anti-L+* is short for anti-LGBTQ.

## 4 Insult Lexicon

We divide the insult lexicon built during the annotation process into five categories according to the attacked object: sexism, racism, regional bias, anti-LGBTQ, and general swear words, i.e., swear words that can be used to offend any group. The final dictionary contains 1,032 terms, including not only explicit vulgar words but also implicit insults. In addition, as the Internet generates a wealth of new insults every year, it is vital to explore their origins. Therefore, for the convenience of follow-up research, we further analyze the rules by which the Internet swear words in the proposed insult lexicon are derived.
Specifically, we briefly summarize them in terms of both surface features and actual meaning. More related terminology notes and examples of profanity are illustrated in Appendix C.

Surface Features. To circumvent the filtering mechanism, netizens change the original insults and create new variants with similarities in glyphs and pronunciation (Chen, 2012; Zhang, 2010), which are called *deformation* and *homophonic*, respectively. In addition, Chinese characters are at times replaced with other language codes in some profanities (Li et al., 2020), creating *code-mixing* words or *abbreviations*.

Actual Meaning. Internet users often introduce implicit terms to attack or defame the target groups, including the use of *metaphor* and *irony* (Chen, 2012). Besides that, some *borrowed words* also carry specific prejudices and are used in implicit toxic comments (Shi, 2010). Compared to variants based on surface features, these terms with deep semantics have to be detected with background knowledge.

## 5 Methodology

In view of the significance of lexical knowledge for detecting toxic language, we propose a migratable benchmark, Toxic Knowledge Enhancement (TKE), which incorporates the lexical feature to enrich the original sentence representation. The illustration of TKE is shown in Figure 2. ![5_image_0.png](5_image_0.png)

For a given sentence $S = \{x_1, x_2, \ldots, x_n\}$, each token $x_i$ is embedded as $w_i \in \mathbb{R}^d$, a vector representation in $d$-dimensional space. Inspired by Zhou et al. (2021a), we design a toxic embedding to introduce lexical knowledge. Specifically, we first employ n-gram matching to determine whether $x_i$ is a subword of an insult, and if so, its attacked group is further indicated. Then, we randomly initialize the group category representations as $C = (c_0, c_1, \ldots, c_m)$, where $c_i \in \mathbb{R}^d$, $c_0$ refers to a non-toxic term, and $m$ is the number of categories of the insult lexicon. In this work, $m = 5$. Based on $c_i$, we define the toxic embedding $t_i$ of $x_i$:

$$t_{i}=\begin{cases}c_{0},&\text{if }x_{i}\text{ is non-toxic,}\\ c_{j},&\text{if }x_{i}\text{ is from the }j^{th}\text{ category.}\end{cases}\tag{1}$$

Since the element-wise addition of multiple linear vector representations is an efficient way to incorporate various information (Mikolov et al., 2013), we utilize this method to integrate the toxic and word embeddings. The enhanced representation of $x_i$ is $w'_i = w_i + \lambda t_i$, where $\lambda \in [0, 1]$ is a weighting coefficient controlling the ingestion of toxic knowledge. The ultimate sentence embedding of $S$ is $\{w'_1, w'_2, \ldots, w'_n\}$, which is the input to the connected sequence encoder. Due to its convenience, TKE can be migrated to any pre-trained language model (PLM).
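The following PyTorch sketch shows one way Eq. (1) and the additive fusion $w'_i = w_i + \lambda t_i$ could be implemented on top of a precomputed token-to-category lookup. The module and variable names are ours, the n-gram lookup is reduced to a tensor of category ids, and the sketch is illustrative rather than the released implementation.

```python
import torch
import torch.nn as nn

class ToxicKnowledgeEnhancement(nn.Module):
    """Adds a learned toxic-category embedding t_i to each word embedding w_i."""

    def __init__(self, embed_dim: int, num_categories: int = 5, lam: float = 0.5):
        super().__init__()
        # Index 0 is the non-toxic embedding c_0; indices 1..m cover the m lexicon categories.
        self.toxic_embed = nn.Embedding(num_categories + 1, embed_dim)
        self.lam = lam  # the weighting coefficient lambda (a tuned hyperparameter, see Table E1)

    def forward(self, word_embeds: torch.Tensor, category_ids: torch.Tensor) -> torch.Tensor:
        # word_embeds: (batch, seq_len, embed_dim) token embeddings from the encoder's input layer
        # category_ids: (batch, seq_len) long tensor; 0 for non-toxic tokens, j for tokens
        #               matched by n-gram lookup to the j-th category of the insult lexicon
        t = self.toxic_embed(category_ids)       # t_i as in Eq. (1)
        return word_embeds + self.lam * t        # w'_i = w_i + lambda * t_i

# Example with a batch of 2 sentences of length 6 and d = 768:
# tke = ToxicKnowledgeEnhancement(embed_dim=768)
# w = torch.randn(2, 6, 768)
# cats = torch.tensor([[0, 0, 3, 0, 0, 0], [0, 1, 1, 0, 0, 0]])
# enhanced = tke(w, cats)   # fed into the downstream sequence encoder
```

Because the enhancement is a simple element-wise addition applied before the encoder, the same module can sit in front of a BiLSTM or a transformer PLM without changing the encoder itself, which is what makes TKE migratable.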
## 6 Experiments

## 6.1 Baselines

Here we introduce the baselines of the experiments. Several PLMs are utilized as encoders, as follows, and a fully-connected layer is used as the classifier for each subtask.

BiLSTM. This method employs the Tencent AI Lab Embedding6, a static word vector with 200-dimensional features, and integrates contextual information using a BiLSTM. We concatenate the last hidden states from both the forward and backward directions to obtain the final sentence embedding.

6https://ai.tencent.com/ailab/nlp/zh/embedding.html

BERT (Devlin et al., 2019) and **RoBERTa** (Liu et al., 2019). The two most commonly used Chinese transformer-based PLMs, bert-base-chinese7 and roberta-base-chinese8, are used as benchmarks. In the experiments, the pooled output of the encoder is utilized as the input of the connected classifier.

Besides the above deep learning based methods, we also evaluate the performance of **Baidu Text Censor**9, an online API for identifying toxic content. Due to its functional limitations, we only utilize it in the first subtask of binary toxic identification.

## 6.2 Implementation

We employ the widely used metrics of weighted precision (P), recall (R), and F1-score (F1) to evaluate the performance of the models. Weighted cross entropy is utilized to address the problem of category imbalance, and the optimizer is AdamW. An early stopping mechanism is applied in the training phase. All the samples in TOXICN are split into a training set and a test set with a ratio of 8:2. We fine-tune the baselines and reserve the best performing models and hyperparameters on the test set, and the same experiments are repeated 5 times with different random seeds for error reduction. All experiments are conducted using a GeForce RTX 3090 GPU. More details are shown in Appendix E.
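As a minimal sketch of this setup, the snippet below shows one common way to instantiate a class-weighted cross-entropy loss and to compute the weighted P/R/F1 scores with scikit-learn. The inverse-frequency weighting scheme and the learning rate in the comment are assumptions for illustration; the paper only states that weighted cross entropy and AdamW are used (hyperparameters are listed in Table E1).

```python
import torch
import torch.nn as nn
from sklearn.metrics import precision_recall_fscore_support

def make_weighted_criterion(train_labels: torch.Tensor) -> nn.CrossEntropyLoss:
    # Weight each class by its inverse frequency in the training split.
    counts = torch.bincount(train_labels).float()
    weights = counts.sum() / (len(counts) * counts)
    return nn.CrossEntropyLoss(weight=weights)

def weighted_scores(y_true, y_pred) -> dict:
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0
    )
    return {"P": 100 * p, "R": 100 * r, "F1": 100 * f1}

# criterion = make_weighted_criterion(train_labels)
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # lr value from Table E1 for BERT/RoBERTa
```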
## 6.3 Results And Discussions

We analyze the experimental results with respect to the following research questions.

RQ1: Performance of Different Subtasks. We evaluate the performance of each baseline and the contribution of TKE at different granularities of toxic language detection. The experimental results are shown in Table 7.

| Model | TI-P | TI-R | TI-F1 | TT-P | TT-R | TT-F1 | TG-P | TG-R | TG-F1 | EC-P | EC-R | EC-F1 |
|-------|------|------|-------|------|------|-------|------|------|-------|------|------|-------|
| BTC | 64.2 | 53.0 | 45.9 | - | - | - | - | - | - | - | - | - |
| BiLSTM | 73.7±0.6 | 72.7±0.4 | 72.9±0.4 | 77.7±2.2 | 70.4±1.2 | 73.7±0.5 | 61.1±1.2 | 64.4±0.7 | 62.2±0.5 | 49.7±1.6 | 48.6±1.1 | 48.0±1.0 |
| BiLSTM* | 75.4±0.5 | 74.8±0.5 | 74.9±0.4 | 79.2±2.0 | 69.6±1.4 | 73.6±0.4 | 58.8±0.9 | 68.6±1.3 | 62.8±0.7 | 51.2±1.9 | 56.8±1.7 | 53.5±1.3 |
| BERT | 80.0±0.2 | 79.7±0.2 | 79.7±0.2 | 82.8±0.9 | 73.1±1.0 | 77.3±0.4 | 71.1±0.9 | 71.9±1.0 | 72.2±0.4 | 55.9±1.1 | 56.4±1.3 | 55.3±1.1 |
| BERT* | 80.4±0.3 | 80.2±0.3 | 80.0±0.3 | 80.2±1.3 | 75.2±0.9 | 77.3±0.2 | 73.3±1.2 | 72.6±0.9 | 72.6±0.4 | 53.5±2.0 | 60.9±0.5 | 55.9±0.9 |
| RoBERTa | 80.8±0.2 | 80.2±0.3 | 80.3±0.3 | 80.5±0.9 | 74.6±0.5 | 77.3±0.3 | 71.8±1.4 | 73.9±1.1 | 72.6±0.5 | 54.2±1.2 | 58.7±1.1 | 55.8±0.6 |
| RoBERTa* | 80.9±0.3 | 80.5±0.3 | 80.6±0.3 | 79.8±1.8 | 76.1±1.2 | 77.7±0.4 | 72.5±0.9 | 74.0±1.0 | 73.0±0.5 | 54.4±1.6 | 61.3±1.2 | 56.8±0.4 |

Table 7: Weighted precision (P), recall (R), and F1 on the four subtasks: Toxic Identification (TI), Toxic Type (TT), Targeted Group (TG), and Expression Category (EC), reported as mean ± deviation over 5 runs with different seeds. * marks the model with TKE; BTC refers to Baidu Text Censor.

From the results, we can observe that: (1) Compared with Baidu Text Censor, the deep learning based methods achieve better performance. A plausible explanation is that the filtering mechanism of the online API mainly depends on a keyword dictionary, so it cannot effectively detect the toxicity of sentences containing indirect expressions of hate speech. In addition, the performance of the pre-trained language models based on dynamic word representations (e.g., BERT, RoBERTa) is much better than that based on static embeddings. Among these baselines, RoBERTa is the most effective on several subtasks. (2) Overall, the models introducing TKE improve the performance on several subtasks, illustrating the effectiveness of representations that incorporate toxic lexical knowledge. Among them, TKE leads to the greatest enhancement in the detection of the expression category, with an average improvement of 2.7% across the baselines. This result shows that lexical information can improve the ability of the models to detect toxic language with different expressions. Meanwhile, we also find that TKE does not bring a significant improvement in toxic type discrimination. This is because many insults are widely contained in both general offensive language and hate speech; therefore, the introduction of the toxic embedding does not distinguish well between these two kinds of speech.

RQ2: Detection of Each Toxic Subtype. A complementary experiment is conducted to further evaluate the performance of the models in identifying toxic content with different expressions. Specifically, we utilize the optimal trained models for the subtask of toxic identification to detect the samples in the test set. After that, we separately analyze the accuracy on sentences labeled as non-toxic and on each expression category. The results are shown in Figure 3. ![7_image_0.png](7_image_0.png)

Based on the results, it is noteworthy that: (1) Compared to explicit toxic language, the accuracy on implicit expressions is significantly lower, with a difference of around 10%. The reason is that many samples containing implicit toxicity are mistakenly classified as non-toxic due to the absence of explicit insults. (2) Despite more training data, the models' performance in detecting implicit toxicity is about 5% worse than on reporting. This is because reporting contains references to actors, such as "he/she said ...", which support the models' decisions. (3) The introduction of TKE increases the accuracy of the models on implicit toxicity and reporting samples. This illustrates that implicit lexical knowledge enhances the ability of the models to detect toxic samples with indirect expressions.

RQ3: Error Analysis. For more insight into the performance of TKE, we perform a manual inspection of the set of samples misclassified by all the models. Two main types of errors are summarized. Here we list two samples in this set for illustrative purposes:

Exp. 1 北京高考400分上清华 - *Toxic* (It is sufficient to get into THU with an NEMT result of 400 points in Beijing.)

Exp. 2 他以前发帖说过自己是 txl。 - *Non-Toxic* (He has posted before that he is gay.)

Type I error refers to sentences annotated as toxic but classified as *non-toxic* by the models. This kind of error usually occurs in the detection of samples containing implicit bias, caused by a lack of background information at the semantic level. In Exp. 1, supported by external knowledge (400 is a relatively low score on the NEMT, whose full score is 750, and THU is one of the best universities in China), it can be seen that the publisher uses a fabricated claim to express implicit regional bias against Beijing.

Type II error denotes instances labeled as non-toxic but detected as *toxic*. The samples with this error usually contain toxic token-level markers, such as swear words and terms referring to minority groups. Training data with these markers is often labeled as toxic language, leading to spurious associations in the models (Zhou et al., 2021b; Ramponi and Tonelli, 2022). This is the case in Exp.
2, where "txl" (meaning "gay") causes models to make decisions based only on statistical experience rather than incorporating context information. From the error analysis, we note that it remains a challenge to integrate richer external knowledge without reducing spurious biases of models. In future work, we will further explore methods of introducing knowledge enhancement for toxic language detection. ## 7 Conclusion And Future Work Due to the rampant dissemination of toxic online language, an effective detection mechanism is essential. In this work, we focus on Chinese toxic language detection at various granularities. We first introduce a hierarchical taxonomy MONITOR TOXIC FRAME to analyze the toxic types and expressions. Based on the taxonomy, we then propose a fine-grained annotated dataset TOXICN, including both direct and indirect toxic samples. Due to the significance of lexical knowledge for the detection of toxic language, we build an insult lexicon and present a benchmark of Toxic Knowledge Enhanced (TKE), enriching the representation of content. The experimental results show the effectiveness of TKE on toxic language detection from different granularities. After an error analysis, we suggest that both knowledge introduction and bias mitigation are essential. We expect our hierarchical taxonomy, resources, benchmarks, and insights to help relevant practitioners in the identification of toxic language. ## Limitations Despite the fact that some measures have been implemented to minimize bias in labeling, we are still explicitly aware that our dataset may contain mislabeled data due to differences in the subjective understanding of toxic language by the annotators. In addition, due to the limitation of data coverage, the samples in our dataset are predominantly in Simplified Chinese, with very few samples in Traditional Chinese, as discussed in Section D.5. Meanwhile, as shown in the error analysis in Section 6.3, our benchmark of Toxic Knowledge Enhanced is not practical for all types of toxic comments, lacks sufficient background knowledge, and can easily lead to spurious associations. For reasons of intellectual property, we only capture the comments rather than the full text, which affects the actual semantics of the sentence to some extent. Besides that, non-textual features are not taken into account in this work, such as images and meta information about publishers. In future work, we will further research span-level and multi-modal toxic language detection. ## Ethics Statement We strictly follow the data use agreements of each public online social platform and double-check to ensure that there is no data relating to user privacy. The opinions and findings contained in the samples of our presented dataset should not be interpreted as representing the views expressed or implied by the authors. We hope that the benefits of our proposed resources outweigh their risks. All resources are for scientific research only. ## Acknowledgment This research is supported by the Natural Science Foundation of China (No. 62076046, 62006034). We would like to thank all reviewers for their constructive comments. ## References Hwijeen Ahn, Jimin Sun, Chan Young Park, and Jungyun Seo. 2020. Nlpdove at semeval-2020 task 12: Improving offensive language detection with cross-lingual transfer. In *Proceedings of the* Fourteenth Workshop on Semantic Evaluation, SemEval@COLING 2020, Barcelona (online), December 12-13, 2020, pages 1576–1586. International Committee for Computational Linguistics. 
Badr AlKhamissi, Faisal Ladhak, Srinivasan Iyer, Veselin Stoyanov, Zornitsa Kozareva, Xian Li, Pascale Fung, Lambert Mathias, Asli Celikyilmaz, and Mona Diab. 2022. ToKen: Task decomposition and knowledge infusion for few-shot hate speech detection. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 2109–2120, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Rui Cao and Roy Ka-Wei Lee. 2020. Hategan: Adversarial generative-based data augmentation for hate speech detection. In *Proceedings of the 28th International Conference on Computational Linguistics,* COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6327–6338. International Committee on Computational Linguistics. Tommaso Caselli, Valerio Basile, Mitrovic Jelena, Granitzer Michael, et al. 2021. Hatebert: Retraining bert for abusive language detection in english. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 17–25. Association for Computational Linguistics. Wangdao Chen. 2012. *Rhetoric introduction*. Fudan University Press. Patricia Chiril, Véronique Moriceau, Farah Benamara, Alda Mari, Gloria Origgi, and Marlène CoulombGully. 2020. An annotated corpus for sexism detection in french tweets. In *Proceedings of The 12th Language Resources and Evaluation Conference, LREC* 2020, Marseille, France, May 11-16, 2020, pages 1397–1403. European Language Resources Association. Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International Conference on Web and Social Media, ICWSM 2017, Montréal, Québec, Canada, May 15-18, 2017, pages 512–515. AAAI Press. Jiawen Deng, Jingyan Zhou, Hao Sun, Chujie Zheng, Fei Mi, Helen Meng, and Minlie Huang. 2022. COLD: A benchmark for Chinese offensive language detection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11580–11599, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4536– 4545. Association for Computational Linguistics. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2018, New Orleans, LA, USA, February 02-03, 2018, pages 67–73. ACM. Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. 
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 711 November, 2021, pages 345–363. Association for Computational Linguistics. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. *Psychological bulletin*, 76(5):378. Paula Fortuna, Mónica Domínguez, Leo Wanner, and Zeerak Talat. 2022. Directions for NLP practices applied to online hate speech detection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 11794–11805. Association for Computational Linguistics. Paula Fortuna and Sérgio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Comput. Surv., 51(4):85:1–85:30. Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In *Proceedings of the Twelfth International* Conference on Web and Social Media, ICWSM 2018, Stanford, California, USA, June 25-28, 2018, pages 491–500. AAAI Press. Laura Hanu and Unitary team. 2020. Detoxify. Github. https://github.com/unitaryai/detoxify. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3309–3326. Association for Computational Linguistics. Md Saroar Jahan and Mourad Oussalah. 2023. A systematic review of hate speech automatic detection using natural language processing. *Neurocomputing*, pages 126–232. Aiqi Jiang, Xiaohan Yang, Yang Liu, and Arkaitz Zubiaga. 2022. SWSR: A chinese dataset and lexicon for online sexism detection. Online Soc. Networks Media, 27:100–182. Bin Li, Yan Dou, Yingting Cui, and Yuqi Sheng. 2020. Swearwords reinterpreted: New variants and uses by young chinese netizens on social media platforms. Pragmatics, 30(3):381–404. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. Hatexplain: A benchmark dataset for explainable hate speech detection. In *Thirty-Fifth AAAI* Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14867–14875. AAAI Press. Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In *Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural* Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111–3119. Guanyi Mou, Pengyi Ye, and Kyumin Lee. 2020. SWE2: subword enriched and significant word emphasized framework for hate speech detection. 
In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 1145– 1154. ACM. Carla Pérez-Almendros, Luis Espinosa Anke, and Steven Schockaert. 2020. Don't patronize me! an annotated dataset with patronizing and condescending language towards vulnerable communities. In *Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020,* Barcelona, Spain (Online), December 8-13, 2020, pages 5891–5902. International Committee on Computational Linguistics. Alan Ramponi and Sara Tonelli. 2022. Features or spurious artifacts? data-centric baselines for fair and robust hate speech detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 3027– 3040. Association for Computational Linguistics. Bjorn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, and Michael Wojatzki. 2016. Measuring the reliability of hate speech annotations: The case of the european refugee crisis. In 3rd Workshop on Natural Language Processing for Computer-Mediated Communication/Social Media, pages 6–9. Ruhr-Universitat Bochum. Chunhong Shi. 2010. Web language as a language variety and a linguistic issue. *Applied Linguistics*, (3):70–80. Leandro Araújo Silva, Mainack Mondal, Denzil Correa, Fabrício Benevenuto, and Ingmar Weber. 2016. Analyzing the targets of hate in online social media. In Proceedings of the Tenth International Conference on Web and Social Media, Cologne, Germany, May 17-20, 2016, pages 687–690. AAAI Press. Hajung Sohn and Hyunju Lee. 2019. MCBERT4HATE: hate speech detection using multichannel BERT for different languages and translations. In 2019 International Conference on Data Mining Workshops, ICDM Workshops 2019, Beijing, China, November 8-11, 2019, pages 551–559. IEEE. Serra Sinem Tekiroglu, Yi-Ling Chung, and Marco Guerini. 2020. Generating counter narratives against online hate speech: Data and strategies. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1177–1190. Association for Computational Linguistics. Junjie Wang and Haiyan Liu. 2009. A brief discussion on internet criticism. Journal of Language and Literature Studies, (10):152–153. Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P. Sheth. 2014. Cursing in english on twitter. In Computer Supported Cooperative Work, CSCW '14, Baltimore, MD, USA, February 15-19, 2014, pages 415–425. ACM. Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on twitter. In *Proceedings of the Student* Research Workshop, SRW@HLT-NAACL 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 88–93. The Association for Computational Linguistics. Michael Wiegand, Josef Ruppenhofer, and Elisabeth Eder. 2021. Implicitly abusive language - what does it actually look like and why are we not getting there? In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 576–587. Association for Computational Linguistics. Philine Zeinert, Nanna Inie, and Leon Derczynski. 
2021. Annotating online misogyny. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3181–3197. Association for Computational Linguistics. Cuanbiao Zhang. 2004. Different courses of development between chinese and european poetries: A perspective from the nature of chinese characters. *Journal of Pla University of Foreign Languages*, 27(6):77– 82. Cuanbiao Zhang. 2005. A probe into the differences between chinese and western cultures from the perspective of their swearwords. *Training and ResearchJournal of Hubei College of Education*, 22(4):46–49. Tingxing Zhang. 1994. A first look at folk placement honorifics. *Folklore Studies*, (03):30–35. Yimin Zhang. 2010. On the phonetic features of cursing words in chinese and english. *Contemporary* Rhetoric, 26(4):1–13. Jingyan Zhou, Jiawen Deng, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, and Helen Meng. 2022. Towards identifying social bias in dialog systems: Framework, dataset, and benchmark. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3576–3591, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Xianbing Zhou, Yang Yong, Xiaochao Fan, Ge Ren, Yunfeng Song, Yufeng Diao, Liang Yang, and Hongfei Lin. 2021a. Hate speech detection based on sentiment knowledge sharing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7158–7166. Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Yejin Choi, and Noah A. Smith. 2021b. Challenges in automated debiasing for toxic language detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 3143–3155. Association for Computational Linguistics. ## A Sample We adopt JSON file to store TOXICN dataset, which is a mainstream coding specification to facilitate machine-readable. The construction of data is Sample = (ID, Platform, Topic, Text, Toxic, Hate, [Group], [Expression]), where *Toxic* and *Hate* denote whether the sentence contains toxic language or hate speech, respectively. And if it is not biased, Group and *Expression* are set to empty. Here we provide two samples in Figure A1. Figure A1: Two samples of TOXICN. ![11_image_0.png](11_image_0.png) ## B Details Of Annotation We introduce Fleiss' Kappa (Fleiss, 1971) as the measure of agreement, which works for any number of raters. For the i-th sample, the calculation of Kappa Piis as follows: $$P_{i}={\frac{1}{n(n-1)}}\left[\left(\sum_{j=1}^{k}n_{i,j}^{2}\right)-n\right],$$ , (2) where n and k denote the number of raters and categories, respectively, and ni,j represents the number of raters who assigned the i-th sample to the j-th category. In the process of annotation, if the root category is wrong, labels of its subcategories, which are annotated by minorities, will be discarded. Then, new raters will be introduced to label this sample. And the Kappa of this sample is recalculated. Based on the IAA shown in Table 5, the most disagreement stems from the phase of "Toxic Identification" and the Kappa is 0.62, caused by implicit hate speech (*H-imp*) containing language techniques like humor. 
Although we regard these samples as toxic in the rules, some annotators believe that, due to subjective reasons, their toxicity intensity is insufficient to classify them as toxic. And in the "Targeted Group" with a Kappa of 0.65, "*sexism*" and "*anti-LGBTQ*" can also easily be confused. ## C Derivative Rules Of Insults In this section, we further explain the term in Section 4 and list several insults shown in Table C1, presenting their morphologies to analyze the literal and flexible derived meanings. Deformation. Since Chinese characters are pictographs, they will be given meanings containing specific emotions by separating and combining with individual characters (Chen, 2012). An example is "默" (meaning "*silence*"), whose glyph consists of "黑" (meaning "*black*") and "犬" (meaning "dog"), implicitly expressing the distaste for the black community. Homophonic. Like English, a new word with a similar pronunciation can be substituted for the original word, resulting in the creation of a different semantics (Zhang, 2010). For instance, netizens always substitute "满" (meaning "*Manchu*") for "蛮" (meaning "*barbarians*"), both of which are pronounced similarly to "man". Irony. Positive words is sometimes ironically used to achieve the effect of insults, which is often reflected in old words with new meanings (Fortuna and Nunes, 2018). Like "仙女" (meaning "*fairy*"), what was originally a gentle and kind image is implied to be a rude and impolite "*shrew*". Abbreviation. Shortening and contracting sensitive words will make expressions more concise and clear (Chen, 2012). An example is "txl", where each letter is the pronounced initials of "同", "性", and "恋", respectively, meaning "gay". Metaphor. Internet users often degrade their attacking targets into something sarcasm, such as animals, in order to insult them (Zhang, 2010). In the term "蠢驴" (meaning "*silly donkey*"), the publisher compares men to donkeys, transmitting aggression against others. Code Mixing. To emphasize the tone, nonChinese language codes are widely mixed in the $$\left(2\right)$$ | Term | Literal Meaning | Composition | Actual Meaning | Category | |------------------------------|-------------------|----------------------------------|------------------|------------| | 默(mò) | silence | 黑(hei) ¯ 犬(quan) ˇ → black dog | n*gger | racial | | 南(nán) 满(man) | South Manchu | | | | | ˇ | 南满 → 南蛮(mán) | southern barbarians | regional | | | 蠢驴 | silly donkey | - | foolish people | general | | txl | txl | txl → 同(tóng) 性(xìng) 恋(liàn) | gay | anti-L+ | | ni哥(ge) | ni brother | ni+ger | | | | ¯ | → n*gger | n*gger | racial | | | 小(xiao) ˇ 仙(xian) ¯ 女(nu) | fairy | - | shrew | sexual | | ˇ | | | | | | 凯(kai) ˇ 勒(lè) 奇(qí) | Kalergi | - | Kalergi Plan | racial | Table C1: Example illustration of Chinese insults. Among them, *Composition* means the structure and formation of these words, while it is the lexical foundation to derive the *Actual Meaning*. 
| Topic | Keywords 性别歧视, 性别偏见, 男权主义, 女权主义, 性别对立, 父权, 家庭主妇 | |---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Gender | Sexism, Gender Bias, Masculinity, Feminism, Gender Dichotomy, Patriarchy, Housewife 种族歧视, 人种, 黑种人, 白种人, 混血儿, 少数民族, 血统, 肤色, 亚裔 | | Race | Racism, Ethnic, Black Race, White Race, Mixed-Blood, Ethnic Minority, Bloodline, Skin Color, Asian 地域歧视, 非洲, 东南亚, 上海, 北京, 广州, 南方人, 北方人 | | Region | Regional Discrimination, Africa, Southeast Asia, Shanghai, Beijing, Guangzhou, Northerners, Southerners 反同性恋, 异性恋, 同性恋, 女同性恋, 男同性恋, 双性向者, 跨性别者, 酷儿, 性取向 | | LGBTQ | Anti-LGBTQ, Heterosexuality, Homosexuality, Lesbian, Gay, Bisexual, Transgender, Queer, Sexual Orientation | text on the Chinese web platforms, such as English and emoji (Li et al., 2020). Like profanity "ni哥" (meaning "*ni brother*"), which has the same pronunciation as "*n*gger*". Borrowed Word. Certain toxic cultural connotations pervade some phonetic foreign words (Shi, 2010). Therefore, background information is required to clarify the actual semantics of these terms. An example is "凯勒奇", a reference to the anti-Semitic *Kalergi Program*, which is used as an inflammatory term. ## D Supplement Of Data Description D.1 Data Source Information According to their monthly financial report1011, Zhihu and *Tieba* have approximately 100 million and 40 million monthly active users, respectively. The samples in the dataset come from 5,385 users. And the samples are extracted from June 2021 to November 2022. ## D.2 Keywords Of Platforms Table D1 lists the keywords of the crawled posts for diverse topics. In the process of data annotation, 10https://ir.zhihu.com/Quarterly-Results 11https://ir.baidu.com/financial-reports we note that the distribution of toxic subtype has a significant difference in the samples from the two platforms, as shown in Table D2. From the statistics, we can see that more than half of the toxic samples on *Zhihu* have subtle expressions, including implicit hate and reporting. In contrast, users from *Tieba* utilize more direct attacks to insult others, containing general offensive language and explicit hate speech. This inspires us to adapt the methodology appropriately to detect toxic language from different platforms in future work. ## D.3 Samples W Or W/O **Insults** In this section, we statistic the samples containing insults to further demonstrate the effectiveness of the two-step annotation procedure. To restore the process, we count the samples based on the profanity list at the end of the pseudo-annotation phase and the final insult lexicon, respectively. The result is shown in Table D3. In the first stage, 3,474 samples are pseudolabeled as toxic, and 91.1% of them are indeed toxic, representing 49% of all toxic samples. This reflects the fact that pseudo-annotation can significantly reduce the annotation burden. Afterwards, some low-frequency swear words and implicit in- | Platform | N-Tox. | Tox. | Toxic Category | Total | Avg. L | | | | | |------------|----------|--------|------------------|---------|----------|-------|-----|--------|-------| | Off. | Hate | H-exp. | H-imp. | H-rep. 
| | | | | | | Zhihu | 3,094 | 3,187 | 270 | 2,917 | 1,088 | 1,055 | 774 | 6,281 | 41.75 | | Tieba | 2,456 | 3,274 | 546 | 2,728 | 1,649 | 940 | 139 | 5,730 | 34.99 | | Total | 5,550 | 6,461 | 816 | 5,645 | 2,737 | 1,995 | 913 | 12,011 | 38.37 | | Lexicon | Label | w/ Insult | w/o Insult | |------------|---------|-------------|--------------| | Tox. | 3,166 | 3,295 | | | Pseudo | N-Tox. | 308 | 5,242 | | Total | 3,474 | 8,537 | | | Tox. | 4,331 | 2,130 | | | Annotation | N-Tox. | 1,162 | 4,388 | | Total | 5,439 | 6,518 | | Table D2: Statistics of different platforms in ToxiCN. Table D3: Statistics of samples with ("w/ ") or without ("w/o") any insults, where "*Pseudo*" and "*Annotation*" means the insults are from the profanity list in the pseudo labeling and the final lexicon respectively. | Num of Group Labels (n) | Size | % | |---------------------------|--------|-------| | 1 | 4,802 | 85.07 | | 2 | 788 | 13.96 | | ≥ 3 | 54 | 0.96 | Table D4: Statistics of hate speech against single and multi-class attacked groups. *Size* denotes the number of samples attacking n groups, and % is the percentage of matched samples of the total number of hate speech. sults are added to the lexicon during the main manual labeling. 1,162 instances with insults are ultimately labeled as non-toxic and 2,130 without insults are toxic, showing the necessity of manual inspection. These samples are more difficult to be identified than the cases that can be filtered directly using an insult lexicon, and need to be focused on in future work. However, even so, the comments containing insults are more likely to be toxic, illustrating the significance of lexical knowledge for toxic language detection. ## D.4 Samples Of Attack Multi-Class Group Here we calculate the proportion of utterances attacking multi-class groups. As the result shown in Table D4, there are about 15% of the samples containing attacks and discrimination against multiple groups in TOXICN. ## D.5 Statistics Of Sub-Varieties Of Chinese Simplified Chinese accounts for 99.7% of the crawled data on both platforms. This reflects the reality of Chinese web platforms because Simplified Chinese is the official language of China. And we do not do additional sampling for data imbalance. ## D.6 Samples Of Attacks On Disabilities We note in the study of other languages, such as English, "disability" has received much attention (Davidson et al., 2017; Founta et al., 2018; PérezAlmendros et al., 2020). However, Chinese online platforms themselves can filter out posts attacking disabled people. Therefore, few comments can be retrieved and collected. This is why existing Chinese datasets don't contain samples attacking "disability" (Deng et al., 2022; Jiang et al., 2022; Zhou et al., 2022). Meanwhile, we found some toxic samples contain "disability" related slurs, like "foolish", used to attack others. Whether these samples are attacks against "disability" is worth discussing. In addition, it also reflects that the taxonomy (MONITOR TOXIC FRAME), and thus all the current resources, are dependent on the filtering system of the online platforms from which the data were crawled. In the future, we will use natural language generation techniques and human adversarial attacks to complement Chinese toxic language that attacks specific groups, referenced by (Hartvigsen et al., 2022; Dinan et al., 2019). ## E Experimental Details The details of the hyperparameters are listed in Table E1. 
During the phase of the experiment, we note that the hyperparameters of BERT and RoBERTa with the best performance are basically the same. | Hyperparameters | BiLSTM | BERT/RoBERTa | |-------------------|----------|----------------| | epochs | 20 | 5 | | batch size | 64 | 32 | | learning rate | 1e-3 | 1e-5 | | padding size | 100 | 80 | | dropout rate | 0.5 | 0.5 | | λ | 0.5 | 0.01 | Table E1: The hyperparameters of the experiment. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? In the section on "limitations". ✓ A2. Did you discuss any potential risks of your work? In the section on "limitations". ✓ A3. Do the abstract and introduction summarize the paper's main claims? In the section on "abstract" and "introduction". ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** In All The Sections. ✓ B1. Did you cite the creators of artifacts you used? In all the sections. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? In the section on "Ethics Statement" ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? In the section on "Ethics Statement" ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? In the section on "3.2 Data Collection and Filtering", "Ethics Statement" ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? In the section on "3.2 Data Collection and Filtering". ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. In the section on "3.4 Data Description", "6.2 Implementation" and "Appendix B". ## C ✓ **Did You Run Computational Experiments?** In The Section On "6 Experiments" ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? In the section on "6.2 Implementation" and "Appendix D". The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In the section on "6.2 Implementation". ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In the section on "6 Experiments" ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In the section on "3.3.3 Annotation Procedure", "6 Experiments" ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** In The Section On "3.3.2 Mitigating Bias" ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? In the section on "Ethics Statement" ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? In the section on "3.3.2 Mitigating Bias" ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? In the section on "Ethics Statement" ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? In the section on "Ethics Statement" ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? In the section on "3.3.2 Mitigating Bias"
duquenne-etal-2023-speechmatrix
{S}peech{M}atrix: A Large-Scale Mined Corpus of Multilingual Speech-to-Speech Translations
https://aclanthology.org/2023.acl-long.899
We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. The mined data and models will be publicly released.
# Speechmatrix: A Large-Scale Mined Corpus Of Multilingual Speech-To-Speech Translations Paul-Ambroise Duquenne1,2,∗and **Hongyu Gong**1,∗ Ning Dong1and **Jingfei Du**1 Ann Lee1and **Vedanuj Goswami**1and **Changhan Wang**1 Juan Pino1and **Benoît Sagot**2and **Holger Schwenk**1 1 Meta AI Research, 2Inria padqn,hygong,dnn,jingfeidu,schwenk@meta.com ## Abstract We present SpeechMatrix, a large-scale multilingual corpus of speech-to-speech translations (S2ST) mined from real speech of European Parliament recordings. It contains speech alignments in 136 language pairs with a total of 418 thousand hours of speech. To evaluate the quality of this parallel speech, we train bilingual speech-to-speech translation models on mined data only and establish extensive baseline results on Europarl-ST, VoxPopuli and FLEURS test sets. Enabled by the multilinguality of SpeechMatrix, we also explore multilingual speech-to-speech translation, a topic which was addressed by few other works. We also demonstrate that model pre-training and sparse scaling using Mixture-of-Experts bring large gains to translation performance. We are open-sourcing the mined data, speech encoders used for mining, multilingual HuBERT models in four language families for target unit generation, language-specific vocoders for speech synthesis from discrete units, and S2S models trained and presented in this work.1 ## 1 Introduction Research has progressed in the area of speech-tospeech translation with the goal of seamless communication among people who speak different languages. Direct S2ST models attract increasing research interest, e.g. (Jia et al., 2019). Compared to conventional cascaded models, direct models do not rely on intermediate text representations which make them applicable to the translation of languages without a well-defined writing script. Moreover, direct S2ST have the advantage of higher training and inference efficiency (Lee et al., 2022a). Despite the benefits of direct approaches, their training is faced with the major issue of data scarcity in parallel speech. Human labeled speech ∗Equal contributions 1https://github.com/facebookresearch/ fairseq/tree/ust/examples/speech_matrix data is expensive to create, there are very few data resources providing speech alignments, and the data amount is quite limited. To mitigate the data scarcity, some works have leveraged multitask learning (Jia et al., 2019; Lee et al., 2022a), data augmentation with speech variations (Jia et al., 2019), or with synthesized speech (Jia et al., 2022a; Popuri et al., 2022; Nguyen et al., 2022). It is also shown useful to leverage knowledge transferred from pre-trained models (Lee et al., 2022b; Popuri et al., 2022) such as HuBERT (Hsu et al., 2021), wav2vec 2.0 (Baevski et al., 2020) and mBART (Liu et al., 2020). Recently, Duquenne et al. (2021) is the first work to make speech mining efforts by learning a shared multilingual speech and text embedding space. Speech content is encoded by speech encoders into fixed-size representations which are then used for aligning speech and text across different languages. It demonstrates good empirical gains to train direct speech-to-text and speech-tospeech translation systems with the mined data (Duquenne et al., 2021; Lee et al., 2022b). In this work, we trained speech encoders for 17 languages2and mined speech-to-speech alignments for all possible language pairs from VoxPopuli (Wang et al., 2021a), a collection of European Parliament recordings. 
To the best of our knowledge, SpeechMatrix is by far the largest freely available speech-to-speech translation corpus, with 136 language directions and an average of 1,537 hours of source speech in each direction for a total of 418 thousand hours. We demonstrate that strong S2ST models can be trained with these mined data and validate the good quality of the speech alignments across languages. We are open-sourcing the mined data and the speech encoders used for 2Czech (cs), German (de), English (en), Spanish (es), Estonian (et), Finnish (fi), French (fr), Croatian (hr), Hungarian (hu), Italian (it), Lithuanian (lt), Dutch (nl), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk) and Slovenian (sl). mining, which could pave the way for future research on S2ST. Moreover, for reproducibility, we will release model components including multilingual HuBERT models in four language families for target unit generation, language-specific vocoders for speech synthesis from discrete units, and S2S models trained and presented in this work. ## 2 Related Works From bitext mining to speech mining. Bitext mining is to find parallel sentences from monolingual resources, which provides a large amount of training data for machine translation models. Early works on bitext mining used document metainformation (Resnik, 1999), cross-lingual document retrieval (Munteanu and Marcu, 2005) or information retrieval (Abdul-Rauf and Schwenk, 2009; Bouamor and Sajjad, 2018). More recent works use multilingual sentence embeddings (Artetxe and Schwenk, 2018; Yang et al., 2019; Schwenk et al., 2021a). The embedding based approach can be extended to new languages (Reimers and Gurevych, 2020; Heffernan et al., 2022) or the speech modality (Duquenne et al., 2021; Khurana et al., 2022) with knowledge distillation, also called teacher-student approach. These multilingual and multimodal sentence embeddings enabled us to perform large-scale speech-text mining, or speech-speech mining for a small set of languages. Speech-to-speech translation (S2ST). S2ST started from cascaded systems consisting of automatic speech recognition (ASR), machine translation (MT) and text-to-speech synthesis (TTS) (Nakamura et al., 2006; Do et al., 2015). The reliance on intermediate text outputs poses limitations on cascaded models to support efficient inference and unwritten languages. Given these challenges, there has been a recent surge of research interest in direct approaches to speech translation without the need of texts. Translatotron (Jia et al., 2019) and Translatotron2 (Jia et al., 2022b) train end-to-end S2ST to generate target spectrograms with multitask learning. Another line of research replaces the target spectrograms in S2ST modeling with discrete units which are learned from a large amount of unlabeled speech (Lee et al., 2022a,b). Discrete units have shown to better capture linguistic content than spectrograms. Despite these progress on direct S2ST, it is faced with the challenge of limited parallel speech. Speech translation corpora. The Fisher dataset, a collection of approximately 170 hours of telephone conversations in Spanish (Post et al., 2014), is commonly used as training data for SpanishEnglish S2ST. However, it does not provide parallel English speech. Previous works generate synthesized English speech from English text translations provided by Fisher. Another S2S dataset containing synthesized speech is CVSS, which covers parallel S2ST translations from 21 languages into English. 
It is derived from Common Voice (Ardila et al., 2020) and CoVoST 2 (Wang et al., 2021b), and it synthesizes speech from translated texts. The release of VoxPopuli dataset provides the largest S2S translations in real speech so far (Wang et al., 2021a). It covers pairwise speech-to-speech translations among 15 languages, and each direction has less than 500 hours of speech. In another initiative named FLEURS, the text-to-text evaluation data of the FLoRes-101 benchmark (Goyal et al., 2022) was extended to the speech modality. Supporting 102 languages, FLEURS has a larger language coverage than VoxPopuli, but it only contains around 12 hours of speech per language and it is intended to be used asN-way parallel test data. In this work, we present SpeechMatrix, a largescale multilingual speech-to-speech corpus mined from VoxPopuli (Wang et al., 2021a). It contains speech alignments in 136 language pairs with an average of 1, 537-hour source speech per direction. The main characteristics of these speech corpora are summarized in Table 1. ## 3 Speech-To-Speech Mining The mining approach of this work is built upon the idea of encoding multilingual speech utterances into a shared embedding space. Speech encoders project utterances with similar semantic content to fixed-size representations which are close in the embedding space regardless of their languages. The closeness of embeddings reflects the similarity of speech content, and is used as the alignment score in the mining process. In this section, we discuss speech encoders and speech mining. ## 3.1 Speech Encoders We followed the teacher-student approach introduced in (Duquenne et al., 2021) and trained speech encoders with the supervision of the multilingual LASER text encoder (Schwenk et al., 2021b). Transcriptions or written translation of the audio utterances are encoded with LASER text encoder as | Dataset | # of Languages | Avg. duration (h) | Source speech | Target speech | |--------------------------------|------------------|---------------------|----------------------------|-----------------------------| | Fisher (Post et al., 2014) | 2 | 127 | Telephone conversation | Synthetic | | MaSS (Boito et al., 2020) | 8 | 20 | Bible reading | Bible reading | | VoxPopuli (Wang et al., 2021a) | 15 | 82 | European Parliament speech | Simultaneous interpretation | | CVSS (C+T) (Jia et al., 2022c) | 21 | 181 | Read | Synthetic | | FLEURS (Conneau et al., 2022) | 102 | 12 | Read | Read | | SpeechMatrix (ours) | 17 | 1537 | European Parliament speech | European Parliament speech | target vectors for speech encoder training. During training, we minimize the cosine loss between fixed-size representations output by speech encoders, and the outputs of LASER text encoder (whose weights are frozen during training). Speech encoders are initialized with the 2B-parameter XLS-R model (Babu et al., 2021), which was pre-trained on nearly half a million hours of publicly available audios in 128 languages. Following (Duquenne et al., 2022), the fixed-size representation for speech is obtained with max pooling of the encoder outputs which appeared to work better compared to other pooling methods. We summarize the architecture of the speech encoder in Figure 1. 
We used various publicly available ASR data sets which cover our languages to train the speech encoders, including CoVoST 2 (Wang et al., 2020, 2021b), Common Voice (Ardila et al., 2020), Europarl (Ardila et al., 2020), mTedx (Salesky et al., 2021), Must-C (Di Gangi et al., 2019) and VoxPopuli (Wang et al., 2021a), as well as speech translation data from the foreign languages into English and from English into German. We removed training samples whose transcription or the written translation consisted of multiple sentences, as LASER has been trained on single sentences only. For better training efficiency, we trained speech encoders for each language family instead of each language. The language grouping is provided in Appendix. To better handle imbalanced training ![2_image_0.png](2_image_0.png) data, we sample the training data from different languages with the same approach as (Duquenne et al., 2021). For English (en), Slovenian (sl), Lithuanian (lt) and Dutch (nl), we also trained separate monolingual speech encoders that had lower valid cosine loss compared to multilingual encoders, and these four monolingual encoders were used for mining. ## 3.2 Evaluation Of Speech Encoders Similarity search is frequently used to evaluate multilingual text encoders (Artetxe and Schwenk, 2018; Feng et al., 2020; Heffernan et al., 2022). We use the following score to measure similarity between the source audio, and the target transcriptions or translations: sim(x,y) (1) $$=\cos(\text{x},\text{y})-\left(\sum_{z\in N\!N_{k}(x)}\frac{cos(x,z)}{2k}+\sum_{z\in N\!N_{k}(y)}\frac{cos(y,z)}{2k}\right)$$ where x and y are the source and target embeddings, and NNk(x) denotes the k nearest neighbors of x. We used k = 4. We evaluated similarity search of audios against transcriptions on VoxPopuli ASR test set in Table 2, which is our target domain as we plan to mine unlabeled speech from VoxPopuli (see subsection 3.3). We also evaluated similarity search of audio against written translations or transcriptions on CoVoST 2 test set in order to compare with speech encoders in previous work (see detailed analysis in Appendix A). Finally, we report text-to-text similarity search using the LASER text encoder as lower bound for the speech translation similarity search error rate since we use gold transcriptions to search against written translations. We report error rates (in %) that are percentage of audio utterances incorrectly matched with text transcripts from the same test set. We note that error rates are very low for all languages (below 5% and around 1 or 2% for most languages), which is an initial validation of good-quality speech encoders before the large-scale mining. | Sim Search | cs | de | en | es | et | fi | fr | hr | hu | it | lt | nl | pl | pt | ro | sk | sl | |--------------------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------| | # test sentences | 1k | 1.7k | 1.5k | 1.4k | 47 | 0.4k | 1.5k | 0.3k | 1k | 1k | 39 | 1k | 1.6k | - | 1.3k | 0.6k | 0.3k | | Audio vs. transcriptions | 0.6 | 1.0 | 0.2 | 0.7 | 0.0 | 0.7 | 0.5 | 0.3 | 1.1 | 4.9 | 0.0 | 0.8 | 0.9 | - | 0.9 | 0.7 | 3.1 | Table 2: Similarity search error rates (in %) on VoxPopuli ASR test set. ## 3.3 Large-Scale Speech Mining We used VoxPopuli as our source of unlabeled unsegmented speech for 17 languages in focus. 
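To make the margin criterion of Equation 1 concrete, the following is a minimal NumPy sketch, not the mining codebase, that scores all source-target embedding pairs in memory and reproduces a similarity-search error rate of the kind reported above. It assumes L2-normalized embeddings and that gold pairs share the same index; all function and variable names are illustrative.

```python
import numpy as np

def margin_scores(x, y, k=4):
    """Margin criterion of Eq. 1 for all source/target embedding pairs.

    x: (n_src, d) and y: (n_tgt, d) are L2-normalized embeddings, so the
    dot product equals the cosine similarity.
    """
    cos = x @ y.T                                          # (n_src, n_tgt) cosine similarities
    # Average cosine to the k nearest neighbors of each source (rows) and target (columns).
    knn_x = -np.sort(-cos, axis=1)[:, :k].mean(axis=1)     # (n_src,)
    knn_y = -np.sort(-cos.T, axis=1)[:, :k].mean(axis=1)   # (n_tgt,)
    # sim(x, y) = cos(x, y) - (avg_kNN(x) + avg_kNN(y)) / 2
    return cos - (knn_x[:, None] + knn_y[None, :]) / 2

def similarity_search_error_rate(x, y, k=4):
    """Percentage of source items whose best-scoring target is not the gold one
    (gold pairs are assumed to be aligned by index)."""
    pred = margin_scores(x, y, k).argmax(axis=1)
    return 100.0 * (pred != np.arange(len(x))).mean()
```

The same margin score is reused for mining in subsection 3.3 with k = 16 and a global comparison of all segments, which in practice calls for approximate nearest-neighbour indexing rather than the dense similarity matrix used in this sketch.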
In principle, performing speech-to-speech or speechto-text mining can be done with exactly the same pipeline as text-to-text mining but with different encoders. We follow the global mining approach as described in Schwenk et al. (2021a) and compare all segments in the source language with all segments in the target language. Similarity scores are calculated in both directions using the margin as described in Equation 1 considering k = 16 neighbors. Segments are considered to be parallel if the margin score exceeds a threshold, we use 1.06 if not specified otherwise. The reader is referred to Schwenk et al. (2021a) for a detailed description of the generic mining pipeline. There is however one important difference when processing speech: it is not straightforward to segment the audio signal into parts which have the optimal granularity for mining. The VoxPopuli recordings have a rather long duration, e.g. one hour and a half on average for English. We apply Voice Activity Detection (VAD) using Silero-VAD (Silero-Team, 2021) which supports over 100 languages. The resulting segments do not necessarily correspond to complete sentences. On one hand, there may be silence in the middle of an utterance, e.g. a hesitation. On the other hand, two sentences may follow each other without a long silence separating them. We follow the "over segmentation" approach outlined in Duquenne et al. (2021): several possible segments are created and we let the mining algorithm decide which ones match the best. Initial experiments suggest that segments shorter than 1 second or longer than 20 seconds are unlikely to be aligned and therefore were excluded. After mining, the resulting speech alignments may have overlap as we over-segment the unlabeled speech. A post-processing method Duquenne et al. (2021) is introduced to remove overlaps between mined speech segments on the source speech side. We relax the post-processing of the mined data, allowing for some overlap between mined speech segments: for two audio segments that overlap on the source side, if the overlap represents more than 20% of the first segment and of the second segment, we discard the alignment with the lowest mining score. We did an ablation study on different thresholds of overlap ratio for one low-resource, one mid-resource and one high-resource direction and found that 20% was the best overlap threshold in all settings. We report the statistics of the mined speech-tospeech translation pairs in Table 3, with a mining score threshold of 1.06. The mined data totals 418k hours of parallel speech with an average of 1,537 hours of source speech in all translation directions. While some high resource languages like English (en), Spanish (es) or French (fr) can reach up to 5k hours of aligned speech with other spoken languages; lower resource languages such as Estonian (et) and Lithuanian (lt) obtain much fewer alignments, with only a few hours of aligned speech for Lithuanian. We also performed mining of the source speech in sixteen languages against more than twenty billion English sentences from Common Crawl. This yielded speech-text alignments between 827 and 3, 966 hours (c.f. the last column of Table 3). Training and evaluation of speech-totext translation are left for future research. ## 3.4 Evaluation Data Besides the speech-to-speech data mined as the train set, we leverage labeled public speech datasets as the evaluation sets. Test set. 
In our experiments, we derive test sets in speech translation from three public corpora, evaluating translation models trained on mined data across different domains. (1) Europarl-ST (EPST) (Iranzo-Sánchez et al., 2020). It is a multilingual speech-to-text translation corpus built on recordings of debates from the European Parliament, containing 72 translation directions in 9 languages.3 (2) VoxPopuli (Wang et al., 2021a). S2S data, as part of VoxPopuli release, provides aligned source and target speech together with source transcriptions. We prepare the speech-to-text data with 3en, fr, de, it, es, pt, pl, ro and nl | Speech targets | Text | | | | | | | | | | | | | | | | | | |-----------------------------------------------|--------|-------|-------|-------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|------|------|-----|------| | Src/Tgt | cs | de | en | es | et | fi | fr | hr | hu | it | lt | nl | pl | pt | ro | sk | sl | en | | cs | - | 2381 | 3208 | 2290 | 952 | 1312 | 2476 | 726 | 1396 | 2410 | 84 | 2377 | 2516 | 1867 | 1190 | 2146 | 452 | 2528 | | de | 2386 | - | 4734 | 3113 | 901 | 1477 | 3536 | 498 | 1871 | 3476 | 41 | 3384 | 2632 | 2250 | 1281 | 1646 | 361 | 3073 | | en | 3172 | 4676 | - | 4715 | 1585 | 2169 | 5178 | 824 | 2266 | 4897 | 82 | 4422 | 3583 | 3572 | 2258 | 2306 | 586 | - | | es | 2240 | 3041 | 4708 | - | 862 | 1373 | 4446 | 528 | 1599 | 4418 | 47 | 3067 | 2646 | 3484 | 1857 | 1603 | 308 | 3966 | | et | 943 | 892 | 1593 | 877 | - | 1201 | 934 | 265 | 1119 | 1019 | 39 | 1055 | 949 | 721 | 419 | 780 | 196 | 1578 | | fi | 1296 | 1463 | 2180 | 1393 | 1197 | - | 1449 | 306 | 1473 | 1599 | 47 | 1654 | 1350 | 1128 | 621 | 977 | 260 | 1969 | | fr | 2424 | 3457 | 5171 | 4455 | 923 | 1435 | - | 560 | 1711 | 4618 | 50 | 3273 | 2822 | 3384 | 1991 | 1657 | 326 | 3966 | | hr | 736 | 507 | 854 | 553 | 273 | 317 | 588 | - | 328 | 615 | 24 | 546 | 660 | 433 | 277 | 586 | 136 | 1311 | | hu | 1417 | 1897 | 2346 | 1672 | 1140 | 1507 | 1787 | 328 | - | 1855 | 68 | 1839 | 1566 | 1315 | 808 | 1064 | 311 | 2301 | | it | 2404 | 3460 | 4948 | 4500 | 1028 | 1614 | 4700 | 607 | 1823 | - | 103 | 3414 | 2848 | 3421 | 1995 | 1656 | 474 | 2891 | | lt | 78 | 38 | 79 | 46 | 37 | 44 | 48 | 21 | 61 | 95 | - | 77 | 80 | 35 | 18 | 64 | 6 | 827 | | nl | 2322 | 3305 | 4396 | 3066 | 1040 | 1633 | 3269 | 521 | 1768 | 3355 | 80 | - | 2459 | 2399 | 1352 | 1646 | 458 | 2708 | | pl | 2530 | 2646 | 3662 | 2735 | 967 | 1378 | 2913 | 656 | 1554 | 2883 | 88 | 2540 | - | 2121 | 1301 | 1892 | 431 | 2871 | | pt | 1849 | 2224 | 3606 | 3525 | 722 | 1131 | 3421 | 421 | 1279 | 3403 | 37 | 2436 | 2087 | - | 1579 | 1358 | 247 | 3540 | | ro | 1187 | 1275 | 2290 | 1894 | 423 | 627 | 2024 | 271 | 789 | 1996 | 19 | 1384 | 1288 | 1592 | - | 870 | 125 | 2784 | | sk | 2127 | 1628 | 2329 | 1631 | 781 | 982 | 1685 | 574 | 1038 | 1650 | 69 | 1676 | 1869 | 1361 | 867 | - | 370 | 2090 | | sl | 436 | 350 | 579 | 307 | 192 | 254 | 324 | 128 | 295 | 461 | 6 | 454 | 413 | 241 | 121 | 359 | - | 1267 | | # hours of unlabeled speech 18.7k 23.2k 24.1k | 21.4k | 10.6k | 14.2k | 22.8k | 8.1k | 17.7k | 21.9k | 14.4k | 19.0k | 21.2k | 17.5k | 17.9k | 12.1k | 11.3k | | | | | target speech and source transcription as our test set. To ensure that there is no overlap between the mined data and VoxPopuli test sets, we need to remove speech from mined alignments which are from the same session as test samples. 
In order to keep as much mined data as possible, we use VoxPopuli test set only when a language direction is not covered by EPST considering their domain similarity. Moreover, similarity scores are provided to indicate the quality of VoxPopuli samples. To choose high-quality data, we sort all sessions in the VoxPopuli S2S data in a decreasing order of the average similarity score of their samples. We keep adding samples from highly ranked sessions to the test set until the test size reaches 1000. (3) FLEURS (Conneau et al., 2022). Built upon N-way text translations from FLoRes (Goyal et al., 2022), FLEURS provides speech for aligned texts and creates speech-to-speech data covering all mined directions. We take its source speech and target texts as the test data. In the case where multiple utterances correspond to one piece of source text, we generate one test pair for each source utterance respectively. FLEURS texts are from English Wikipedia, which is a different domain from VoxPopuli and EPST. Valid set. Valid sets are prepared for S2S modeling using VoxPopuli and FLEURS data in a similar way as test sets. For VoxPopuli, we extract a valid set of about 1000 samples by adding data from highly scored sessions which are not in the test set. FLEURS valid set is derived from its valid samples. We prepare speech-to-unit data from these selected valid samples by transforming the target speech into target units for speech-to-unit modeling, which will be discussed in section 4. ## 4 Experiments & Results To evaluate the quality of the mined data, we trained S2ST models on SpeechMatrix data and report the translation performance. We hope that these results will serve as baselines for future studies in speech-to-speech translation. ![4_image_0.png](4_image_0.png) | Train set | Es-En | Fr-En | En-Es | En-Fr | | |-------------------------|---------|---------|---------|---------|------| | VoxPopuli | Hours | 532 | 523 | 415 | 451 | | S2S | BLEU | 13.1 | 15.4 | 16.4 | 15.8 | | Hours | 1,353 | 1,507 | 1,366 | 1,518 | | | SpeechMatrix (t = 1.09) | BLEU | 20.4 | 20.7 | 21.9 | 19.3 | Table 4: BLEU scores on EPST test sets by S2ST models with different training data. cs de en es et fi fr hr hu it lt nl pl pt ro sk sl cs - 12.9 22.7 16.7 - 0.6 21.1 4.4 0.5 10.2 0.1 6.1 8.5 - 4.3 16.9 3.0 de 7.3 - 16.3 11.7 - 1.2 10.7 4.5 0.6 3.8 0.1 10.4 3.5 7.1 5.2 3.0 4.1 en 8.2 10.1 - 21.9 - 1.9 19.2 8.4 1.1 11.5 0.3 15.1 8.2 11.8 7.6 5.7 5.5 es 5.2 6.1 20.4 - - 1.3 16.3 3.6 0.7 11.1 0.1 8.0 3.9 13.3 5.2 2.2 2.2 et - - - - - - - - - - - - - - - - - fi 3.0 9.0 19.7 11.4 - - 14.1 1.5 0.0 5.8 0.1 6.6 4.5 - 4.4 1.7 1.6 fr 5.4 6.3 20.7 18.4 - 0.8 - 5.4 0.7 10.2 0.1 8.4 4.8 13.4 5.6 1.6 1.5 hr - - - - - - - - - - - - - - - - - hu 2.6 7.3 15.3 9.5 - 0.7 13.8 1.9 - 6.3 0.1 3.0 1.6 - 2.4 0.9 1.2 it 6.4 4.9 18.9 19.6 - 0.4 15.3 5.2 0.7 - 0.1 6.5 3.6 12.4 3.7 2.1 2.8 lt 0.2 0.0 3.1 0.8 - 0.0 0.7 0.1 0.0 0.6 - 0.7 0.1 - 0.0 0.0 0.1 nl 3.5 8.1 18.0 13.2 - 0.5 13.0 3.3 0.4 5.2 0.1 - 3.4 6.7 4.1 1.7 2.1 pl 7.2 2.8 4.9 6.3 - 1.0 5.5 4.5 0.5 5.8 0.2 1.6 - 6.1 3.2 4.7 2.4 pt - 4.7 21.2 23.2 - - 18.1 - - 4.4 - 5.0 3.6 - 4.4 - - ro 4.6 6.5 22.6 20.1 - 0.8 18.6 2.4 0.4 8.7 0.1 3.5 4.6 10.3 - 2.3 0.7 sk 28.2 10.7 21.4 15.5 - 1.0 19.2 5.0 0.5 4.7 0.1 4.2 5.3 - 4.4 - 3.6 sl 4.0 11.1 19.5 8.6 - 0.8 13.2 4.8 0.4 6.0 0.1 4.5 6.7 - 1.1 1.7 - ## 4.1 Experimental Setup The training and evaluation pipeline of speech-tospeech translation is shown in Figure 2. 
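Before describing each module, the sketch below shows one way to wire up the ASR-BLEU evaluation of Figure 2. It is a minimal illustration only: `s2u_model`, `vocoder` and `asr_model` are placeholders for a trained speech-to-unit translator, a unit-based vocoder and an off-the-shelf ASR system, their method names are hypothetical, and sacrebleu is just one possible BLEU implementation rather than the exact tooling used here.

```python
import sacrebleu

def asr_bleu(s2u_model, vocoder, asr_model, test_pairs):
    """Translate source speech, resynthesize, transcribe and score with BLEU.

    test_pairs: iterable of (source_waveform, reference_target_text).
    The three model arguments are placeholders for a trained speech-to-unit
    translator, a unit-based vocoder and an off-the-shelf ASR system.
    """
    hypotheses, references = [], []
    for src_wav, ref_text in test_pairs:
        units = s2u_model.translate(src_wav)              # source speech -> target unit sequence
        tgt_wav = vocoder.synthesize(units)               # units -> target waveform
        hypotheses.append(asr_model.transcribe(tgt_wav))  # waveform -> text
        references.append(ref_text)
    # ASR-BLEU: compare ASR transcriptions of the generated speech to reference texts.
    return sacrebleu.corpus_bleu(hypotheses, [references]).score
```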
Recent progress in speech-to-speech modeling suggests to discretize the target speech waveform into a unit sequence, relieving models from the complexity of predicting continuous waveform values. We borrow the idea of training speech-to-unit (S2U) model where units are pre-generated from target speech with a pre-trained HuBERT model (Lee et al., 2022a). During S2U training, models are periodically evaluated on the valid set of speechto-unit samples, and the best checkpoint with the lowest valid loss is saved for model inference. When it comes to inference, speech could be synthesized from the predicted units with a vocoder, as the output of the S2S pipeline. It is then transcribed into texts by an off-the-shelf ASR model. The BLEU score is calculated by comparing the transcriptions against the ground truth target texts, which serves as the quantitative metric of mined data quality. We note that the ASR BLEU score is not a perfect metric for data quality, as it is unavoidably affected by the quality of ASR models. Next we discuss each module of the pipeline. Speech-to-Unit. The S2U model takes the source speech and predicts a sequence of target units. It typically has an encoder-decoder architecture, where the encoder consists of convolutional and Transformer encoder layers, and the decoder is a Transformer decoder. We have experimented with different model variants, and discuss bilingual and multilingual training in section 5 and section 6. HuBERT. HuBERT is used to extract speech features of audio frames, which are then grouped into k-means clusters. The continuous features are thus mapped to corresponding clusters. In this way, speech could be discretized into unit sequence where units are basically indices of clusters. We reuse the same HuBERT model and k-means clusters for English, Spanish and French as in (Lee et al., 2022b) for a fair comparison with existing results. We also train multilingual HuBERT models to cover other languages in SpeechMatrix, and more HuBERT training details can be found in Appendix B.1. Vocoder. Unit-based HiFi-GAN vocoders are trained to synthesize speech from unit sequence (Polyak et al., 2021). In our experiments, vocoders are separately trained from S2U model. We train vocoders on three datasets: (1) CSS10 (Park and Mulc, 2019). It is a singlespeaker corpus which we use to train vocoders in German, Finnish, Hungarian and Dutch. (2) VoxPopuli (Wang et al., 2021a). Given its ASR data with speaker id, we sort speakers based on their speech duration, and keep adding the top speakers until the speech is more than 20 hours. (3) Common Voice (Ardila et al., 2020). Portuguese and Estonian are not covered by the two corpora above, and thus we turn to Common Voice. Again, we select top speakers and prepare 12-hour and 10-hour speech for the vocoder training in Portuguese and Estonian respectively. Data preprocessing and training are included in Appendix B.3. ASR. We use off-the-shelf ASR models to transcribe the speech generated by vocoders. Details about the ASR models and their benchmark results of word error rates are provided in Appendix B.2. ## 5 Bilingual Speech-To-Speech Baselines In this part, we discuss the bilingual S2S models trained in each of 272 language directions in SpeechMatrix. The architecture of Textless model is used for bilingual translation in our experiments (Lee et al., 2022a). A Textless model consists of a speech encoder, Transformer encoder and decoder. Training. 
For a given direction, we extract units for source and target speech with their corresponding HuBERT models (Hsu et al., 2021). Taking source speech, the model is trained to predict target unit sequence with cross-entropy loss as well as source unit reconstruction as an auxiliary task. For the training efficiency of extensive S2ST experiments, we use a subset of mined data as the train set. Mined samples are selected if their alignment scores are above a preset threshold. We performed an analysis of the threshold selection in subsection B.4. Comparison with existing results. Since we adopt the same model as the previous work (Lee et al., 2022a) and the only difference lies in the train set, it is straightforward to compare with existing results. Table 4 shows the results of S2ST models which are trained on our SpeechMatrix mined data compared to VoxPopuli S2S data in each of four language directions: es-en, fr-en, en-es and enfr. The threshold of mined data is set as 1.09 to these four directions, yielding an average of 1, 436hour train set. Compared with 480-hour labeled speech from VoxPopuli, SpeechMatrix achieves an an average improvement of 5.4 BLEU, indicating the good quality and usefulness of the mined data. ## 5.1 Large-Scale Bilingual Evaluation A large-scale evaluation is launched covering 272 mined languages directions, and bilingual models are trained for each direction to establish baseline results in speech-to-speech translation. Table 5 and Table 6 summarize performance of bilingual S2ST models on three test sets. In each direction, Table 5 reports BLEU scores in European Parliament domain, either EPST or VoxPopuli set. EPST BLEU is underlined to be distinguished from VoxPopuli BLEU. Table 6 reports BLEU in Wikipedia domain, i.e., FLEURS test data. Bilingual results. Empirically we find that translations into high-resource languages such as en, es and fr outperform those into low-resource languages such as lt and sl based on the speech amount of these languages in Table 3. Another observation is the performance difference across test domains, and BLEU on FLEURS is lower than that on EPST and VoxPopuli data, likely because of the domain mismatch between train and test data. It is also found that translation results are not symmetric for some language pairs, for example, ro-en has a BLEU of 22.6 while en-ro BLEU is only 7.6 on EPST. Besides different complexity levels of target languages and test sets, such asymmetry also results from the dependency of BLEU scores on the speech synthesis quality of the vocoder and transcription quality of the ASR model. For languages whose vocoder and ASR models are not good, they are likely to receive low BLEU scores. In this case, Romanian vocoder and ASR are not as strong as English models as reflected by its higher word error rate in speech resynthesis as reported in Appendix B.3. ## 6 Multilingual Speech-To-Speech Translation Multilingual modeling has been explored in tasks of language understanding and machine translation, demonstrating knowledge transfer among languages. However, to our best knowledge, there are few studies of multilingual S2ST on real speech, partially due to the lack of multilingual speech-tospeech resources. With the massively multilingual data we have mined, we are able to explore multilingual S2ST training. 
In this work, we focus on many-to-English translation, studying the translation from 6 Slavic languages to English in subsection 6.1 and the translation from all 16 languages in SpeechMatrix to English in subsection 6.2. English-to-many or manyto-many translation are left to future work. We present here multilingual models used in our experiments (more details can be found in Appendix C: (1) **Textless model**. The same model with 70M parameters that we use for bilingual evaluation is reused in the multilingual experiments. Given diverse multilingual data, we increase the model size for larger model capacity, trying multilingual models with 70M and 260M parameters. cs de en es et fi fr hr hu it lt nl pl pt ro sk sl cs - 2.0 4.2 4.6 0.1 0.2 7.5 2.1 0.2 2.5 0.1 1.0 2.3 2.8 1.4 3.5 1.7 de 2.3 - 8.3 3.8 0.1 0.2 6.5 2.2 0.2 1.8 0.0 1.2 0.9 3.1 2.1 0.8 1.0 en 2.7 2.7 - 6.0 0.7 0.6 10.4 2.4 0.3 3.6 0.1 3.8 1.3 5.1 2.0 1.2 1.2 es 1.9 1.8 7.5 - 0.1 0.2 9.2 1.0 0.2 4.2 0.1 1.5 1.4 5.9 2.3 0.9 0.8 et 2.1 0.7 8.2 3.0 - 0.7 6.3 1.0 0.7 2.3 0.1 1.5 1.2 1.7 1.4 0.4 0.8 fi 1.5 0.9 5.5 3.8 0.5 - 6.2 0.5 0.0 1.2 0.0 0.8 1.2 2.0 1.1 0.7 0.7 fr 1.5 2.1 9.8 7.6 0.1 0.2 - 1.7 0.2 3.1 0.1 1.3 1.5 5.8 2.4 0.6 0.6 hr 2.5 0.9 7.7 3.1 0.2 0.1 5.8 - 0.2 1.1 0.0 0.9 1.1 2.0 0.6 0.9 0.8 hu 1.3 1.0 4.6 3.0 0.1 0.2 5.7 0.7 - 1.2 0.0 0.1 0.4 2.3 0.9 0.2 0.3 it 1.3 1.0 6.3 8.3 0.1 0.1 11.3 1.3 0.2 - 0.0 0.9 1.1 5.6 1.9 0.4 0.6 lt 0.1 0.0 0.9 0.2 0.0 0.0 0.2 0.0 0.0 0.4 - 0.1 0.0 0.0 0.0 0.0 0.0 nl 1.4 3.1 5.7 4.9 0.2 0.2 7.5 1.8 0.2 1.7 0.0 - 0.9 3.3 1.4 0.4 1.0 pl 1.6 1.6 4.9 4.4 0.1 0.2 5.4 1.2 0.1 1.5 0.0 0.3 - 2.5 1.2 1.1 0.7 pt 1.2 1.0 6.1 8.7 0.1 0.3 11.1 1.1 0.1 1.1 0.1 0.6 0.8 - 1.5 0.6 0.6 ro 1.9 2.2 7.8 7.0 0.4 0.3 11.3 0.9 0.2 3.8 0.1 0.9 1.1 6.0 - 0.7 0.2 sk 9.1 2.1 5.5 5.1 0.3 0.2 7.8 3.0 0.4 2.1 0.0 0.7 1.9 2.3 1.9 - 1.5 sl 2.2 2.0 7.3 3.4 0.2 0.3 4.5 1.1 0.1 1.2 0.0 1.0 1.2 1.5 0.1 0.3 - (2) **XM Transformer**. Inspired by the recent finding that crossmodal pre-training is beneficial for speech translation (Popuri et al., 2022), we apply XM Transformer to multilingual training, whose encoder is initialized from pre-trained XLSR model with 1B parameters (Babu et al., 2021) and decoder is initialized from a unit decoder pretrained in an mBART style (Popuri et al., 2022). With multilingual speech-to-unit data, the model is further finetuned to minimize the cross-entropy loss in target unit prediction. (3) XM Transformer **with Sparsity**. Sparse modeling, in particular Mixture-of-Experts (MoE), has been widely studied in multilingual machine translation. MoE increases the number of parameters without sacrificing computation efficiency. GShard. GShard is a sparse scaling technique proposed in (Lepikhin et al.). We replace every other Transformer layer with an MoE layer. FFN modules in an MoE transformer layer are shared across experts. A learnable gating function routes input tokens to different experts (NLLB Team et al., 2022). We apply GShard architecture on the decoder of XM Transformer, and expert weights are all initialized with the pretrained unit mBART. ## 6.1 Slavic-To-English Translation The six Slavic languages include Czech (cs), Croatian (hr), Lituanian (lt), Polish (pl), Slovak (sk), and Slovenian (sl). In the multilingual setting, all mined data into English are combined from each ## Slavic Language As The Train Set. We summarize ASR BLEU scores of different models averaged over six Slavic-to-English directions in Table 7. For completeness, we report BLEU of each direction in Appendix C. 
As is shown, Textless model benefits from the parameter increase to 260M, and multilingual training further brings BLEU gains of 5.6 and 4.7 in EP/VP and FLEURS. We tried larger models than 260M but didn't see more gains. Comparing against bilingual Textless model (70M), bilingual XM Transformer achieves +3.8 BLEU in EP/VP and +5.0 BLEU in FLEURS. Multilingual training further improves dense XM Transformer by 7.9 and 5.1 BLEU. GShard with 64 experts brings +1.0 BLEU over dense XM Transformer to EP/VP, and +0.3 BLEU to FLEURS. Overall the best Slavic-to-English translation is achieved by XM Transformer with GShard trained in multilingual setting. This demonstrates that multilinguality, pre-training and model sparsity are of help to speech-to-speech translation. | Bilingual | Multilingual | | | | | | | | |-------------|----------------|--------------|---------------|-------|------|-------|------|------| | EP/VP | FL | EP/VP | FL | EP/VP | FL | EP/VP | FL | | | Textless | 70M | 260M | 70M | 260M | | | | | | Avg. | 14.3 | 5.1 | 16.8 | 6.5 | 14.1 | 2.5 | 22.4 | 11.2 | | XM | Dense(1.2B) | Dense (1.2B) | GShard (4.3B) | | | | | | | Avg. | 18.1 | 10.1 | 26.0 | 15.2 | 27.0 | 15.5 | | | | Dense (1.2B) | GShard (4.3B) | | | | |----------------|-----------------|-------|------|------| | EP/VP | FL | EP/VP | FL | | | cs | 29.9 | 18.7 | 30.9 | 18.2 | | de | 18.8 | 19.0 | 19.3 | 20.3 | | es | 22.8 | 15.2 | 23.3 | 15.9 | | et | - | 16.7 | - | 16.7 | | fi | 26.8 | 14.1 | 28.2 | 14.0 | | fr | 23.5 | 18.3 | 24.1 | 18.9 | | hr | - | 16.6 | - | 16.8 | | hu | 20.2 | 12.0 | 21.3 | 12.5 | | it | 36.3 | 16.2 | 37.8 | 14.9 | | lt | 21.9 | 9.8 | 23.8 | 10.3 | | nl | 21.4 | 16.4 | 22.1 | 17.3 | | pl | 21.2 | 12.4 | 21.3 | 13.4 | | pt | 23.8 | 21.8 | 24.2 | 22.3 | | ro | 25.1 | 19.7 | 25.0 | 19.8 | | sk | 30.8 | 19.6 | 32.2 | 18.2 | | sl | 28.3 | 13.7 | 29.9 | 13.7 | | avg | 25.1 | 16.3 | 26.0 | 16.5 | ## 6.2 All-To-English Translation We move forward to a larger-scale multilinguality by extending from Slavic language family to all languages in SpeechMatrix. We adopt the best models in Slavic-to-English translation, i.e., multilingual XM Transformer with both dense and sparse architectures. Results. Compared with XM Transformer (1.2B) dense model, MoE-GShard64 (4.3B) with the same forward computation time brings gains of +0.9 and +0.2 BLEU to EP/VP and FLEURS respectively. Similar to our findings in Slavic-toEnglish setting, increasing the capacity with sparse modeling benefits in-domain (EP/VP) more than out-of-domain FLEURS test set. Given sparse architecture of XM Transformer with GShard, all-to-English model shows +0.6 and -0.4 BLEU difference compared with Slavic-toEnglish model on EP/VP and FLEURS respectively, averaged over Slavic languages. Multilingual sparse model benefits from the additional indomain data in other languages when evaluated in EP/VP domain, while sees performance degradation in out-of-domain data. ## 7 Limitations And Risks Limitations. The HuBERT model quality is critical to speech-to-speech translation performance, as its extracted units are used by both speech-tounit model and vocoder. We have not explored the optimal strategy of multilingual HuBERT training. One research question is how to choose a group of languages so that a multilingual HuBERT model could be well trained. For example, it is arguable whether Lithuanian (lt) should be included in Slavic or Uralic family. 
Other questions could be whether a larger HuBERT with more model capacity should be used and how we should deal with language imbalance in multilingual training. We provide benchmark results of bilingual speech translation with mined data selected by heuristics. One of our future directions is to come up with a better strategy of mined data selection to improve translation performance and training efficiency. As mentioned in our results analysis, the reported BLEU scores are heavily dependent on the ASR quality, which may not reflect the speech translation performance accurately. Future directions could be improving ASR quality or exploring other evaluation metrics without reliance on ASR models. Potential Risks. As a technology used for speech generation, the presented speech translation models or the translation models that will be trained with SpeechMatrix dataset might have systemic bias or produce inappropriate outputs. ## 8 Conclusion In this paper, we introduce a large-scale multilingual speech-to-speech corpus mined from VoxPopuli. It is the largest resource of speech alignments with a coverage of 17 languages. We perform an extensive evaluation of the mined parallel speech, showing good quality of the speech alignments. Multilingual speech-to-speech models can be efficiently trained on this corpus and we suggest different methods, such as sparse scaling using Mixture-of-Experts, to further boost translation performance in the multilingual setting. ## References Sadaf Abdul-Rauf and Holger Schwenk. 2009. On the Use of Comparable Corpora to Improve SMT performance. In *EACL*, pages 16–23. Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2020. Common voice: A massivelymultilingual speech corpus. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 4218–4222. European Language Resources Association. Mikel Artetxe and Holger Schwenk. 2018. Marginbased Parallel Corpus Mining with Multilingual Sentence Embeddings. https://arxiv.org/ abs/1811.01136. Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. XLS-R: self-supervised cross-lingual speech representation learning at scale. *CoRR*, abs/2111.09296. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information* Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Marcely Zanon Boito, William Havard, Mahault Garnerin, Éric Le Ferrand, and Laurent Besacier. 2020. Mass: A large and clean multilingual corpus of sentence-aligned spoken utterances extracted from the bible. In *Proceedings of The 12th Language* Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 6486– 6493. European Language Resources Association. Houda Bouamor and Hassan Sajjad. 2018. H2@BUCC18: Parallel Sentence Extraction from Comparable Corpora Using Multilingual Sentence Embeddings. In *BUCC*. Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. 2022. FLEURS: few-shot learning evaluation of universal representations of speech. 
*CoRR*, abs/2205.12446. Alexandre Defossez, Gabriel Synnaeve, and Yossi Adi. 2020. Real time speech enhancement in the waveform domain. In *Interspeech*. Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2019. MuST-C: a Multilingual Speech Translation Corpus. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2012–2017, Minneapolis, Minnesota. Association for Computational Linguistics. Quoc Truong Do, Sakriani Sakti, Graham Neubig, Tomoki Toda, and Satoshi Nakamura. 2015. Improving translation of emphasis with pause prediction in speech-to-speech translation systems. In Proceedings of the 12th International Workshop on Spoken Language Translation: Papers, IWSLT 2015, Da Nang, Vietnam, December 3-4, 2015. Paul-Ambroise Duquenne, Hongyu Gong, Benoît Sagot, and Holger Schwenk. 2022. T-modules: Translation modules for zero-shot cross-modal machine translation. In *EMNLP*. Paul-Ambroise Duquenne, Hongyu Gong, and Holger Schwenk. 2021. Multimodal and multilingual embeddings for large-scale speech mining. *Advances in* Neural Information Processing Systems, 34:15748– 15761. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Languageagnostic bert sentence embedding. *arXiv preprint* arXiv:2007.01852. Naman Goyal, Cynthia Gao, Vishrav Chaudhary, PengJen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The flores-101 evaluation benchmark for low-resource and multilingual machine translation. *Trans. Assoc. Comput. Linguistics*, 10:522–538. Kevin Heffernan, Onur Çelebi, and Holger Schwenk. 2022. Bitext mining using distilled sentence representations for low-resource languages. In *EMNLP*. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. *IEEE ACM Trans. Audio Speech* Lang. Process., 29:3451–3460. Javier Iranzo-Sánchez, Joan Albert Silvestre-Cerdà, Javier Jorge, Nahuel Roselló, Adrià Giménez, Albert Sanchís, Jorge Civera, and Alfons Juan. 2020. Europarl-st: A multilingual corpus for speech translation of parliamentary debates. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020, Barcelona, Spain, May 4-8, 2020, pages 8229–8233. IEEE. Ye Jia, Yifan Ding, Ankur Bapna, Colin Cherry, Yu Zhang, Alexis Conneau, and Nobuyuki Morioka. 2022a. Leveraging unsupervised and weaklysupervised data to improve direct speech-to-speech translation. *CoRR*, abs/2203.13339. Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz. 2022b. Translatotron 2: Highquality direct speech-to-speech translation with voice preservation. In *International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA*, volume 162 of Proceedings of Machine Learning Research, pages 10120–10134. PMLR. Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, and Heiga Zen. 2022c. CVSS corpus and massively multilingual speech-to-speech translation. *CoRR*, abs/2201.03713. Ye Jia, Ron J. Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, and Yonghui Wu. 2019. Direct speech-to-speech translation with a sequence-to-sequence model. 
In *Interspeech 2019,* 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019, pages 1123–1127. ISCA. Sameer Khurana, Antoine Laurent, and James Glass. 2022. Samu-xlsr: Semantically-aligned multimodal utterance-level cross-lingual speech representation. arXiv preprint arXiv:2205.08180. Ann Lee, Peng-Jen Chen, Changhan Wang, Jiatao Gu, Sravya Popuri, Xutai Ma, Adam Polyak, Yossi Adi, Qing He, Yun Tang, Juan Pino, and Wei-Ning Hsu. 2022a. Direct speech-to-speech translation with discrete units. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3327–3339. Association for Computational Linguistics. Ann Lee, Hongyu Gong, Paul-Ambroise Duquenne, Holger Schwenk, Peng-Jen Chen, Changhan Wang, Sravya Popuri, Yossi Adi, Juan Pino, Jiatao Gu, and Wei-Ning Hsu. 2022b. Textless speech-to-speech translation on real data. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, pages 860–872. Association for Computational Linguistics. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. Base layers: Simplifying training of large, sparse models. In *International Conference on Machine Learning*, pages 6265–6274. PMLR. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving Machine Translation Performance by Exploiting Non-Parallel Corpora. *Computational Linguistics*, 31(4):477–504. Satoshi Nakamura, Konstantin Markov, Hiromi Nakaiwa, Gen-ichiro Kikui, Hisashi Kawai, Takatoshi Jitsuhiro, Jinsong Zhang, Hirofumi Yamamoto, Eiichiro Sumita, and Seiichi Yamamoto. 2006. The ATR multilingual speech-to-speech translation system. *IEEE Trans. Speech Audio* Process., 14(2):365–376. Xuan-Phi Nguyen, Sravya Popuri, Changhan Wang, Yun Tang, Ilia Kulikov, and Hongyu Gong. 2022. Improving speech-to-speech translation through unlabeled text. *CoRR*, abs/2210.14514. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling humancentered machine translation. Kyubyong Park and Thomas Mulc. 2019. CSS10: A collection of single speaker speech datasets for 10 languages. 
In Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019, pages 1566–1570. ISCA. Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech Resynthesis from Discrete Disentangled SelfSupervised Representations. In Proc. Interspeech 2021. Sravya Popuri, Peng-Jen Chen, Changhan Wang, Juan Pino, Yossi Adi, Jiatao Gu, Wei-Ning Hsu, and Ann Lee. 2022. Enhanced direct speech-to-speech translation using self-supervised pre-training and data augmentation. pages 5195–5199. Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khudanpur. 2014. Fisher and callhome spanish–english speech translation. LDC2014T23. Web Download. Philadelphia: Linguistic Data Consortium. Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In *EMNLP*, pages 4512– 4525. Philip Resnik. 1999. Mining the Web for Bilingual Text. In ACL. Elizabeth Salesky, Matthew Wiesner, Jacob Bremerman, Roldano Cattoni, Matteo Negri, Marco Turchi, Douglas W. Oard, and Matt Post. 2021. The multilingual tedx corpus for speech recognition and translation. In *Interspeech 2021, 22nd Annual Conference* of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 3655–3659. ISCA. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021a. CCMatrix: Mining billions of high-quality parallel sentences on the web. In ACL, pages 6490– 6500. Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021b. Ccmatrix: Mining billions of high-quality parallel sentences on the web. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 6490–6500. Association for Computational Linguistics. Silero-Team. 2021. Silero vad: pre-trained enterprisegrade voice activity detector (vad), number detector and language classifier. https://github.com/ snakers4/silero-vad. Changhan Wang, Juan Pino, Anne Wu, and Jiatao Gu. 2020. Covost: A diverse multilingual speech-to-text translation corpus. In *Proceedings of the 12th Language Resources and Evaluation Conference*, pages 4197–4203. Changhan Wang, Morgane Rivière, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Miguel Pino, and Emmanuel Dupoux. 2021a. Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 993–1003. Association for Computational Linguistics. Changhan Wang, Anne Wu, Jiatao Gu, and Juan Pino. 2021b. Covost 2 and massively multilingual speech translation. In *Interspeech*, pages 2247–2251. Yinfei Yang, Gustavo Hernandez Abrego, Steve Yuan, Qinlan Shen Mandy Guo, Daniel Cer, Brian Strope Yun-hsuan Sun and, and Ray Kurzweil. 2019. Improving multilingual sentence embedding using bidirectional dual encoder with additive margin softmax. In *IJCAI*, pages 5370–5378. 
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. Designing effective sparse expert models. *arXiv preprint arXiv:2202.08906*. ## A Speech Encoder A.1 Similarity Search On Covost We compared our similarity search results with previous work (Duquenne et al., 2021) in Table 9. We notice that our new speech encoders have lower error rates compared to previous work. | Audio vs. en translations | de | es | fr | |-----------------------------|------|------|------| | Previous work | 3.36 | 1.66 | 2.05 | | This work | 3.27 | 1.26 | 1.55 | Table 9: Similarity search error rates (in %) on CoVoST 2 test set. We also provide similarity search of audios against written translations or transcriptions on CoVoST 2 test set for other languages covered by our speech encoders in Table 10, in order to evaluate cross-modal similarity search. | de | en | es | et | fr | it | nl | pt | sl | | |----------------------------------------|------|------|------|------|------|------|------|------|------| | # test sentences | 14k | 16k | 13k | 2k | 15k | 9k | 2k | 4k | 0.4k | | Audio vs. transcriptions | 1.4 | 2.9 | 0.4 | 0.1 | 0.5 | 0.5 | 1.0 | 1.1 | 1.7 | | vs. en translations | 3.3 | - | 1.3 | 1.0 | 1.5 | 1.7 | 4.4 | 1.9 | 4.4 | | Text transcription vs. en translations | 2.0 | - | 1.0 | 0.1 | 1.0 | 1.3 | 2.4 | 0.7 | 0.8 | Table 10: Similarity search error rates (in %) on CoVoST 2 test set. ## B **Bilingual Speech-To-Speech Translation** We describe experiment details of bilingual speechto-speech translation. | Family | Languages | |----------|------------------------| | Romance | es, fr, it, pt, ro | | Slavic | cs, pl, sk, sl, hr, lt | | Germanic | en, de, nl | | Uralic | fi, et, hu | ## B.1 Hubert Table 11: Language families in VoxPopuli data. We train a multilingual HuBERT model for each family on the collection of speech in each component language as shown in Table 11. We collect unlabeled VoxPopuli speech for all languages of the same family as the training data. The HuBERT model consists of 7 convolutional layers and 12 Transformer encoder layers. Each encoder layer has 12 attention heads, the embedding dimension of 768 and the forward dimension of 3072. Models are trained for 3 iterations, and in each iteration pseudo-labels are prepared as the training target for | Lang | cs | de | |--------|-----------------------------------------------|-------------------------------------------------| | ASR | comodoro/wav2vec2-xls-r-300m-cs-250 | jonatasgrosman/wav2vec2-xls-r-1b-german | | Lang | et | fi | | ASR | RASMUS/wav2vec2-xlsr-1b-et | jonatasgrosman/wav2vec2-large-xlsr-53-finnish | | Lang | hr | hu | | ASR | classla/wav2vec2-xls-r-parlaspeech-hr | jonatasgrosman/wav2vec2-large-xlsr-53-hungarian | | Lang | it | lt | | ASR | jonatasgrosman/wav2vec2-large-xlsr-53-italian | sammy786/wav2vec2-xlsr-lithuanian | | Lang | nl | pl | | ASR | jonatasgrosman/wav2vec2-xls-r-1b-dutch | jonatasgrosman/wav2vec2-xls-r-1b-polish | | Lang | pt | ro | | ASR | jonatasgrosman/wav2vec2-xls-r-1b-portuguese | gigant/romanian-wav2vec2 | | Lang | sk | sl | | ASR | anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm | anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm | utterances. In the first iteration, the target labels are MFCC features. In the second iteration, we extract speech features from the 6-th layer of the trained HuBERT model and apply k-means clustering to derive a set of 500 labels. In the third iteration, speech features from the 9-th layer are clustered into 500 labels. 
Lastly after these three iterations, we try feature extraction from different layers including layer 10, 11 and 12 of trained HuBERT. As for feature clustering, we also try different numbers of clusters, 800, 1000 and 1200, to derive multiple sets of target units. To choose the optimal setup, we launch a resynthesis evaluation to select the HuBERT layer to extract speech features and the number of k-means clusters. We train a vocoder on each set of target units, i.e., vocoder takes the units and synthesizes target speech. The synthesized speech is sent to off-the-shelf ASR models, and Word Error Rate (WER) is reported to measure the speech quality. The resynthesis experiments are discussed in subsection B.3. The optimal HuBERT layer and label size is selected if their corresponding vocoder achieves the lowest WER. ## B.2 Asr Models We use ASR models publicly released on HuggingFace to transcribe the generated speech in order to calculate WER or BLEU scores in comparison with ground truth texts. ASR models used in our evaluation are listed in Table 12. ## B.3 Vocoder Data preprocessing. We applied a denoiser4(Defossez et al., 2020) to the speech of VoxPopuli and Common Voice as the speech preprocessing to increase signal-to-noise ratio (SNR) given that they are noisier than CSS10 audios. Then we prepare vocoder labels with HuBERT models generating k-means cluster labels for each utterance. Single-speaker vocoders are trained in CSS10, and languages from VoxPopuli and Common Voice have multi-speaker vocoders where speaker embeddings are learned. During inference, we select the speaker with the longest speech duration to synthesize speech from predicted unit sequences, who has the most data for the vocoder to learn good speaker embeddings. Vocoder training and evaluation. Vocoders are trained to synthesize speech from a given sequence of units. The train sets are speech data from CSS10, VoxPopuli and Common Voice. As mentioned before, units are derived from HuBERT models for these speech. Table 13 summarizes WER of ASR models, which reflects the transcription quality in each language. Besides, we report the training data, vocoder WER of synthesized speech from vocoders, and here we include the vocoder results obtained from the optimal HuBERT layer and kmeans cluster size. Layer 11 is the best HuBERT layer for feature extraction in all languages, and most languages have the best k-means size of 1000 except Italian (it) whose best label size is 800. 
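The resynthesis check that drives this selection (vocoder output → off-the-shelf ASR → WER against the reference text) can be approximated with public tools. Below is a minimal sketch, assuming 16 kHz synthesized waveforms and one of the HuggingFace ASR checkpoints from Table 12; it illustrates the metric rather than reproducing the exact evaluation.

```python
# Sketch of the resynthesis evaluation loop: transcribe synthesized speech with
# a public ASR model and score it against reference transcripts.
# Assumes `synthesized` is a list of 16 kHz float32 numpy waveforms and
# `references` the corresponding ground-truth texts.
from jiwer import wer
from transformers import pipeline

def resynthesis_wer(synthesized, references, asr_model: str) -> float:
    asr = pipeline("automatic-speech-recognition", model=asr_model)
    hypotheses = [asr({"raw": wav, "sampling_rate": 16_000})["text"]
                  for wav in synthesized]
    # Comparing this number with the ASR WER on natural speech (Table 13)
    # indicates how much quality the unit + vocoder pipeline loses.
    return wer(references, hypotheses)

# Example: score German vocoder outputs with the de model from Table 12.
# score = resynthesis_wer(de_waveforms, de_texts,
#                         "jonatasgrosman/wav2vec2-xls-r-1b-german")
```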
4https://github.com/facebookresearch/denoiser

| Lang | Data         | ASR WER | HuBERT                             | Vocoder WER |
|------|--------------|---------|------------------------------------|-------------|
| de   | CSS10        | 0.10    | Germanic HuBERT, layer 11, km 1000 | 0.16        |
| nl   | CSS10        | 0.19    | Germanic HuBERT, layer 11, km 1000 | 0.27        |
| fi   | CSS10        | 0.02    | Uralic HuBERT, layer 11, km 1000   | 0.15        |
| hu   | CSS10        | 0.21    | Uralic HuBERT, layer 11, km 1000   | 0.21        |
| et   | Common Voice | 0.14    | Uralic HuBERT, layer 11, km 1000   | 0.44        |
| it   | VoxPopuli    | 0.23    | Romance HuBERT, layer 11, km 800   | 0.27        |
| pt   | Common Voice | 0.06    | Romance HuBERT, layer 11, km 1000  | 0.31        |
| ro   | VoxPopuli    | 0.42    | Romance HuBERT, layer 11, km 1000  | 0.50        |
| cs   | VoxPopuli    | 0.15    | Slavic HuBERT, layer 11, km 1000   | 0.23        |
| pl   | VoxPopuli    | 0.14    | Slavic HuBERT, layer 11, km 1000   | 0.23        |
| hr   | VoxPopuli    | 0.21    | Slavic HuBERT, layer 11, km 1000   | 0.29        |
| lt   | VoxPopuli    | 0.38    | Slavic HuBERT, layer 11, km 1000   | 0.57        |
| sk   | VoxPopuli    | 0.28    | Slavic HuBERT, layer 11, km 1000   | 0.41        |
| sl   | VoxPopuli    | 0.37    | Slavic HuBERT, layer 11, km 1000   | 0.46        |

Table 13: Vocoder training data, ASR WER on natural speech, the HuBERT layer and k-means size used for units, and vocoder WER on resynthesized speech, per language.

![13_image_0.png](13_image_0.png)

As shown in Table 13, ASR models are of good quality for high-resource languages such as de, fi and pt, while suffering from high error rates in languages such as ro, lt and sl. Vocoder WER is expected to be higher than ASR WER, since the former is measured on synthesized speech. By measuring the gap between the two error rates, we can tell how good a vocoder is and also infer the quality of the HuBERT units. For et, pt and lt, the gaps are clearly larger than for the other languages. This is not surprising, since we do not have much good-quality vocoder data for these languages; for example, there is only around 10 hours of noisy Common Voice speech for et and pt vocoder training.

## B.4 Training

Textless model. A Textless model consists of a speech encoder with 2 convolution layers and 12 Transformer encoder layers. Each Transformer layer has an embedding dimension of 512 and a forward dimension of 2048. It has two unit decoders with 6 and 2 Transformer decoder layers for target and source unit prediction, respectively. The target unit decoder has an embedding dimension of 512 and a forward dimension of 2048, and the source unit decoder's dimensions are 256 and 2048.

Hyperparameters. We tried learning rates of 0.0003 and 0.0005, and dropout rates of 0.1 and 0.3. The best setup for bilingual Textless model training is a learning rate of 0.0005 and a dropout of 0.3. Bilingual models are trained with a batch of 20k tokens for 400k steps. A label smoothing weight of 0.2 is applied to the cross-entropy loss. For decoding of speech-to-unit models, we set a beam size of 10 in all bilingual and multilingual experiments.

Mined data selection. We analyzed how translation performance varies with mining thresholds from 1.06 to 1.09 on three language pairs: es-en, ro-en and hr-en. Figure 3 shows the threshold, the corresponding speech data size and the BLEU score. For low-resource directions such as hr-en, it is best to include all the mined data. For high- and medium-resource directions, es-en and ro-en, the optimal amount of mined data is around 1k hours, and going beyond that data size does not bring further gains.
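The data-size side of this trade-off is straightforward to recompute from the mined alignments. The helper below is illustrative only; it assumes a TSV of mined pairs with hypothetical "score" (mining margin) and "src_duration" (source speech length in seconds) columns.

```python
# For each candidate margin threshold, report how many mined pairs and how many
# hours of source speech would be retained. Column names are assumptions.
import csv

def retained_stats(mined_tsv: str, thresholds=(1.06, 1.07, 1.08, 1.09)):
    pairs = []
    with open(mined_tsv, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            pairs.append((float(row["score"]), float(row["src_duration"])))
    stats = {}
    for t in thresholds:
        kept = [d for s, d in pairs if s >= t]
        stats[t] = {"pairs": len(kept), "hours": sum(kept) / 3600.0}
    return stats

# e.g. keep the highest threshold that still retains >= 1,000 hours of speech:
# stats = retained_stats("es-en.mined.tsv")
# threshold = max(t for t, v in stats.items() if v["hours"] >= 1000)
```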
Given these observations, we choose the highest threshold that keeps the source speech duration in the mined data above 1k hours for each direction. For example, we use a threshold of 1.09 for es-en and of 1.06 for hr-en.

Computation. Each bilingual model is trained on 16 A100 GPUs for 3 days on average.

## C Multilingual Speech-To-Speech Translation

We provide details of models and experiment setups in multilingual speech-to-speech translation.

## C.1 Slavic-To-English Translation

Textless model. The Textless model (260M) has a speech encoder with 4 convolution layers and 12 Transformer encoder layers with an embedding dimension of 1024 and a forward dimension of 4096. It has two unit decoders with 6 and 2 Transformer decoder layers for target and source unit prediction, respectively. The target unit decoder has an embedding dimension of 1024 and a forward dimension of 4096, and the source unit decoder's dimensions are 256 and 2048. The Textless model (424M) has a speech encoder with 6 convolution layers and 16 Transformer encoder layers with an embedding dimension of 1024 and a forward dimension of 4096. It has two unit decoders with 12 and 2 Transformer decoder layers for target and source unit prediction, respectively. The target unit decoder has an embedding dimension of 1024 and a forward dimension of 4096, and the source unit decoder's dimensions are 256 and 2048.

XM Transformer. XM Transformer (1.2B) is initialized from the XLS-R encoder with 7 convolution layers and 48 Transformer encoder layers with an embedding dimension of 1280 and a forward dimension of 5120. Its unit decoder is initialized from a pre-trained mBART-style decoder with 12 layers, an embedding dimension of 1024 and a forward dimension of 4096.

| Lang | Bilingual 70M EP/VP | Bilingual 70M FL | Bilingual 260M EP/VP | Bilingual 260M FL | Multilingual 70M EP/VP | Multilingual 70M FL | Multilingual 260M EP/VP | Multilingual 260M FL | Multilingual 424M EP/VP | Multilingual 424M FL |
|------|------|------|------|------|------|------|------|------|------|------|
| cs   | 22.7 | 4.2  | 24.7 | 11.2 | 19.7 | 2.3  | 27.5 | 13.7 | 25.3 | 10.2 |
| hr   | -    | 7.7  | -    | 4.6  | -    | 3.1  | -    | 12.8 | -    | 9.2  |
| lt   | 3.1  | 0.9  | 0.2  | 0.0  | 2.8  | 0.3  | 14.7 | 4.8  | 10.7 | 3.3  |
| pl   | 4.9  | 4.9  | 17.6 | 7.7  | 14.4 | 1.9  | 19.9 | 9.5  | 16.4 | 6.9  |
| sk   | 21.4 | 5.5  | 24.4 | 11.0 | 18.9 | 4.1  | 27.2 | 15.4 | 24.9 | 11.1 |
| sl   | 19.5 | 7.3  | 16.9 | 4.7  | 14.6 | 3.1  | 22.9 | 10.7 | 21.0 | 7.6  |
| avg  | 14.3 | 5.1  | 16.8 | 6.5  | 14.1 | 2.5  | 22.4 | 11.2 | 19.7 | 8.1  |

Table 14: Slavic-to-English BLEU of bilingual and multilingual Textless models on EP/VP and FLEURS (FL) test sets.

| Lang | Bilingual (1.2B) EP/VP | Bilingual (1.2B) FL | Multiling. Dense (1.2B) EP/VP | Multiling. Dense (1.2B) FL | Multiling. GShard (4.3B) EP/VP | Multiling. GShard (4.3B) FL |
|------|------|------|------|------|------|------|
| cs   | 28.3 | 17.8 | 29.7 | 18.2 | 30.6 | 19.3 |
| hr   | -    | 12.1 | -    | 17.1 | -    | 17.6 |
| lt   | 0.0  | 0.0  | 20.9 | 9.6  | 22.2 | 10.2 |
| pl   | 17.4 | 7.4  | 21.1 | 12.9 | 21.4 | 12.6 |
| sk   | 24.7 | 14.5 | 30.8 | 19.3 | 31.8 | 20.0 |
| sl   | 20.1 | 8.5  | 27.4 | 14.0 | 29.1 | 13.0 |
| avg  | 18.1 | 10.1 | 26.0 | 15.2 | 27.0 | 15.5 |

Table 15: Slavic-to-English BLEU of bilingual and multilingual XM Transformer models on EP/VP and FLEURS (FL) test sets.

![14_image_0.png](14_image_0.png)

Hyperparameters. For the Textless model, we reuse a learning rate of 0.0005, a dropout of 0.3 and a label smoothing weight of 0.2 for Slavic-to-English training. The 70M model has 20k tokens in one batch. The 260M model has 6k batch tokens and an update frequency of 4. The 424M model has 4k batch tokens and an update frequency of 6.
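For convenience, the Textless settings above can be collected into one place; the snippet only restates the values given in this paragraph, and the dictionary layout itself is ours rather than the actual training configuration format.

```python
# Summary of the Slavic-to-English Textless training settings stated above.
TEXTLESS_COMMON = {"lr": 5e-4, "dropout": 0.3, "label_smoothing": 0.2, "beam": 10}

TEXTLESS_BATCHING = {
    "70M":  {"max_tokens": 20_000, "update_freq": 1},
    "260M": {"max_tokens": 6_000,  "update_freq": 4},
    "424M": {"max_tokens": 4_000,  "update_freq": 6},
}

# Effective tokens per optimizer update (per GPU): the three sizes end up with
# comparable effective batch sizes (20k, 24k and 24k tokens).
for size, cfg in TEXTLESS_BATCHING.items():
    print(size, cfg["max_tokens"] * cfg["update_freq"])
```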
For the XM Transformer model, we use a learning rate of 0.0001, a dropout of 0.1 and a label smoothing weight of 0.2. In a batch, token sizes of 1500 and 9000 with update frequencies of 15 and 2 are used for V100 and A100 training, respectively.

Results. We first extend the Textless model from the bilingual to the multilingual setting. Translation results are presented for Textless models with different parameter sizes in Table 14. The multilingual Textless model works best with 260M parameters. Compared with its bilingual counterpart, an average gain of 5.6 BLEU is achieved on EP/VP and 4.7 BLEU on FLEURS. With the Textless model size fixed at 70M, multilingual training hurts the performance of most languages compared with bilingual training. This is due to insufficient model capacity, and the language interference is reflected by an average of −2.6 BLEU on FLEURS. We increase model parameters to 260M in both bilingual and multilingual settings. With a larger model capacity, bilingual models achieve gains in high-resource languages including cs, pl and sk, while suffering from performance losses in low-resource directions such as hr, lt and sl. Given model sizes of 260M, we observe consistent gains of multilingual models over the bilingual models across language directions and test domains, demonstrating the positive transfer enabled by multilingual training. As the multilingual model size increases further to 424M, we do not observe additional gains, likely due to the limited amount of training data.

XM Transformer leveraging pre-trained modules is also trained on the Slavic-to-English data. Pre-training is shown to be beneficial, and results are reported in Table 15. Compared against the bilingual Textless model (70M), bilingual XM Transformer outperforms it in all directions except lt-en. The gain on EP/VP is 3.8 BLEU on average, and a larger gain of 5.0 BLEU is achieved on FLEURS. Multilingual training brings further gains to XM Transformer, with +7.9 and +5.1 BLEU over bilingual training on the EP/VP and FLEURS test sets, respectively. Compared against the dense XM Transformer, GShard with 64 experts gains 1.0 BLEU on average over 5 directions on EP/VP, and +0.3 BLEU on FLEURS. We believe this is due to a phenomenon noted by Zoph et al. (2022), i.e., MoE specializes in multilingual settings, but not by language. GShard in our setting brings larger improvements to in-domain test sets.

Computation. Textless models were trained on 32 A100 GPUs: the 70M model for 3 days, the 260M model for 5 days, and the 424M model for 6 days. It took 2 days to train XM Transformer on 32 A100 GPUs for Slavic-to-English translation.

## C.2 All-To-English Translation

In this work, we experiment with two variants of sparse modeling, GShard and Base Layer. The table below reports all-to-English translation BLEU of the dense XM Transformer and the two sparse variants on EP/VP and FLEURS (FL) test sets.

| Lang | Dense (1.2B) EP/VP | Dense (1.2B) FL | MoE-GShard64 (4.3B) EP/VP | MoE-GShard64 (4.3B) FL | Base Layer (1.7B) EP/VP | Base Layer (1.7B) FL |
|------|------|------|------|------|------|------|
| cs   | 29.9 | 18.7 | 30.9 | 18.2 | 29.9 | 17.3 |
| de   | 18.8 | 19.0 | 19.3 | 20.3 | 19.4 | 19.5 |
| es   | 22.8 | 15.2 | 23.3 | 15.9 | 22.9 | 14.9 |
| et   | -    | 16.7 | -    | 16.7 | -    | 16.4 |
| fi   | 26.8 | 14.1 | 28.2 | 14.0 | 28.5 | 13.9 |
| fr   | 23.5 | 18.3 | 24.1 | 18.9 | 23.4 | 18.2 |
| hr   | -    | 16.6 | -    | 16.8 | -    | 16.3 |
| hu   | 20.2 | 12.0 | 21.3 | 12.5 | 20.5 | 12.1 |
| it   | 36.3 | 16.2 | 37.8 | 14.9 | 37.4 | 14.0 |
| lt   | 21.9 | 9.8  | 23.8 | 10.3 | 23.4 | 10.0 |
| nl   | 21.4 | 16.4 | 22.1 | 17.3 | 21.5 | 16.6 |
| pl   | 21.2 | 12.4 | 21.3 | 13.4 | 20.9 | 12.5 |
| pt   | 23.8 | 21.8 | 24.2 | 22.3 | 23.8 | 21.1 |
| ro   | 25.1 | 19.7 | 25.0 | 19.8 | 25.3 | 19.0 |
| sk   | 30.8 | 19.6 | 32.2 | 18.2 | 31.5 | 18.4 |
| sl   | 28.3 | 13.7 | 29.9 | 13.7 | 28.8 | 13.5 |
| avg  | 25.1 | 16.3 | 26.0 | 16.5 | 25.5 | 15.9 |

XM Transformer-GShard.
XM Transformer (1.2B) is initialized with the same XLS-R encoder and unit decoder used in Slavic-to-English experiments. On the decoder side of XM TransformerGShard, each expert is initialized with the same unit decoder. We set MoE frequency as 2, i.e., every other Transformer layer is an MoE layer. XM Transformer-Base Layer. For our XM Transformer with Base Layer sparsity (1.7B), the encoder is initialized with the same XLS-R encoder, and the dense layers of the decoder is initialized with the same unit decoder as GShard. We add an additional Base Layer which is randomly initialized as the 7th layer of decoder. There is one expert in each GPU and we used 64 GPUs in our experiments, which means we have 64 Base Layer experts in total. The sparse variant, Base Layer (1.7B) performs comparably to the dense XM Transformer, with an average of +0.4 BLEU in EP/VP test sets and -0.4 BLEU in FLEURS. The sparsity in Base Layer does not bring obvious gains to all-to-English translation. This is likely because we only add one Base Layer to the decoder with a small expert size. The number of increased model parameters is only 0.5B in Base Layer, while it is 3.1B in GShard. As suggested by (Lewis et al., 2021), the Base Layer performance might improve with more GPUs and a larger expert size. Hyperparameters. For dense XM Transformer, hyperparameters are the same as that for Slavic-toEnglish. GShard also shares the same set of hyperparameters. As for expert-specific parameters, we use 64 experts with each running on a single GPU with the frequency of 2 so that every other Transformer decoder layer becomes an MoE layer. The capacity token fraction is set as 0.5 so that if more than half of tokens in a sample get routed to one expert, extra tokens would overflow and get dropped. Computation. It took 3 days to train dense XM Transformer for all-to-English with 32 A100 GPUs. It took 5 days to train the GShard counterpart with 64 A100 GPUs. ## D License And Terms Of Scientific Artifacts D.1 Third-Party Artifacts Data. Common Voice is released under CC0 license, VoxPopuli and CoVoST 2 data are also under CC0 license. As for EuroParl, it is released under a Creative Commons license. The multilingual TEDx corpus is released under a CC BY-NCND 4.0 license. FLEURS dataset is under Creative Commons license (CC-BY-4.0). These datasets are publicly accessible and freely downloadable for research purposes. Models. XLS-R model used for the speech encoder initialization is open sourced under Apache-2.0 license. Text LASER used as the teacher model in training is released under BSD license. ASR models avaliable on HuggingFace are released under Apache-2.0 license. These models are publicly available. Code. The implementations of Textless model, XM Transformer, HuBERT and Vocoder are open sourced under MIT license. ## D.2 Speechmatrix And Translation Models The mined resource, SpeechMatrix, will be released under CC0 license, and the trained speechto-speech translation models will be released under CC BY-NC 4.0. The data and models are intended for research purposes. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations of this work is discussed in Section 7. ✓ A2. Did you discuss any potential risks of your work? Potential risks of this work are discussed in Section 7. ✓ A3. Do the abstract and introduction summarize the paper's main claims? This paper's main claims are summarized in abstract and introduction (Section 1). ✗ A4. 
Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Sections 2-6. ✓ B1. Did you cite the creators of artifacts you used? SpeechMatrix is mined from VoxPopuli, a raw speech corpus, which is cited in Section 2. We are using public datasets in speech recognition and translation to train speech encoders, and the dataset creators are cited in Section 3.1. Speech translation datasets which are used as testsets for speechto-speech translation models are cited in Section 3.4. As for the model, speech encoder is initialized from XLS-R model, which is cited in Section 3.1. The code and implementations which are used for speech-to-speech translation in our experiments are cited in Section 4, 5 and 6. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? The license and terms are discussed in Appendix D. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Appendix D discusses the use of existing artifacts and our created artifacts. ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? The datasets used in this work such as Common Voice and CoVoST have anonymized speakers and require dataset users not to attempt to identify their identities. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We provide domain coverage and language coverage in Section 3. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. We report the data size of train, valid and test samples in Section 3. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ## C ✓ **Did You Run Computational Experiments?** Appendix B.4. ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? We report the parameter size and computational costs in Appendix B.4. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? We report the experiment results from single runs in Section 4, 5 and 6 of the main paper, as well as Appendix B and C. ✓ C4. 
If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? We report the evaluation setup and parameter setting in Appendix B and C. ## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
liu-etal-2023-character
Character-Aware Models Improve Visual Text Rendering
https://aclanthology.org/2023.acl-long.900
Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word{'}s visual makeup as a series of glyphs. To quantify this effect, we conduct a series of experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Applying our learnings to the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.
# Character-Aware Models Improve Visual Text Rendering Rosanne Liu∗, Dan Garrette∗**, Chitwan Saharia, William Chan, Adam Roberts,** Sharan Narang, Irina Blok, RJ Mical, Mohammad Norouzi, Noah Constant∗ Google Research {rosanneliu, dhgarrette, nconstant}@google.com ## Abstract Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word's visual makeup as a series of glyphs. To quantify this effect, we conduct a series of experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Applying our learnings to the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples. ## 1 Introduction Over the last year, image generation models have made impressive quality gains (Rombach et al., 2021; Ramesh et al., 2022; Saharia et al., 2022; Yu et al., 2022). While many practical use cases are already within reach, *rendering visual text* in images remains a challenge. Ramesh et al. (2022) observe that DALL·E-2 "struggles at producing coherent text," and the latest release of Stable Diffusion lists "cannot render legible text" as a known limitation.1 In this paper, we seek to understand and improve the ability of image generation models to render high-quality visual text. To do so, we first investigate the spelling ability of text encoders in isolation. We find that despite their popularity, *characterblind* text encoders—which receive no direct signal as to the character-level makeup of their inputshave limited spelling ability. Building on Itzhak ∗Equal contribution. 1https://huggingface.co/stabilityai/stable-diffusion-2-1 and Levy (2022), we test the spelling ability of text encoders across scales, architectures, input representations, languages, and tuning methods. We document for the first time the miraculous ability of character-blind models to induce robust spelling knowledge (>99% accuracy) through web pretraining, but show that this does not generalize well beyond English, and is only achieved at scales over 100B parameters, making it infeasible for most applications. We find that *character-aware* text encoders, on the other hand, are able to achieve robust spelling ability at far smaller scales. Applying these findings to image generation, we train a range of character-aware text-to-image models and demonstrate that they significantly outperform character-blind models on text rendering. For purely character-level models, this improved text rendering comes at a cost—decreasing image-text alignment for prompts that don't involve visual text. To alleviate this, we propose combining characterlevel and token-level input representations, and find that this delivers the best of both worlds. Our main contributions are to: (1) Measure the spelling ability of a range of text encoders, pulling apart the effects of scale, character-awareness, and multilinguality, using a new benchmark: WikiSpell. 
(2) Present DrawText, the first detailed benchmark of visual text rendering for text-to-image models. (3) Improve the state of the art in text rendering ability of image generation models through the use of character-aware text encoders. We release code to reproduce our WikiSpell and DrawText evaluations.2

2https://github.com/google-research/google-research/tree/master/drawtext

![1_image_0.png](1_image_0.png)

## 2 The Spelling Miracle

Language models can be categorized as to whether they have direct access to the characters making up their text input ("character-aware") or do not ("character-blind"). Many early neural language models operated directly on characters, with no notion of multi-character "tokens" (Sutskever et al., 2011; Graves, 2013). Later models moved to vocabulary-based tokenization, with some like ELMo (Peters et al., 2018) retaining character-awareness, and others like BERT (Devlin et al., 2019) abandoning it in favor of more efficient pretraining. At present, most widely used language models are character-blind, relying on data-driven subword segmentation algorithms like Byte Pair Encoding (BPE) (Gage, 1994; Sennrich et al., 2016) to induce a vocabulary of subword pieces. While these methods back off to character-level representations for sufficiently *rare* sequences, they compress *common* character sequences into unbreakable units by design, as shown in Figure 2.

Recent work on "token-free" modeling has pointed to advantages of character-aware input representations. Xue et al. (2022) show that ByT5, a character-aware multilingual language model trained directly on UTF-8 bytes, outperforms parameter-matched character-blind models on tasks related to spelling and pronunciation. While operating at the byte or character level comes at the cost of training and inference speed, additional work suggests that this can be overcome through downsampling (Clark et al., 2022; Tay et al., 2021). See Mielke et al. (2021) for a recent overview of tokenization methods and character awareness.

![1_image_1.png](1_image_1.png)

Surprisingly, despite lacking direct access to a token's spelling, character-blind models are, to varying degrees, able to *infer* the character-level makeup of their tokens. Itzhak and Levy (2022) observe that, after fine-tuning for spelling, RoBERTa and GPT-2 can achieve 32% and 33% accuracy at spelling held-out tokens. Kaushal and Mahowald (2022) confirm this ability and probe it further; however, it remains unclear where in pretraining this knowledge comes from, and how to improve it. For example, should we expect larger character-blind models to reach 100% spelling accuracy across all tokens in their vocabulary?

In section §3 we find that, with sufficient scale, character-blind models can achieve near-perfect spelling accuracy. We dub this phenomenon the "spelling miracle", to emphasize the difficulty of inferring a token's spelling from its distribution alone. At the same time, we observe that character-blind text encoders of the sizes used in practice for image generation lack core spelling knowledge. With this in mind, it is unsurprising that today's image generation models struggle to translate input tokens into glyph sequences. These models' text encoders are all character-blind, with Stable Diffusion, DALL·E, DALL·E-2, Imagen, Parti and eDiff-I all adopting BPE tokenizers (Rombach et al., 2021; Ramesh et al., 2021, 2022; Saharia et al., 2022; Yu et al., 2022; Balaji et al., 2022).
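To make the character-blind/character-aware distinction concrete, the snippet below contrasts how a subword tokenizer and a byte-level tokenizer segment a frequent word. It is a small illustration using public HuggingFace tokenizers; the checkpoint names are our choice and stand in for, rather than reproduce, the encoders used by the models above.

```python
# A frequent word is typically a single opaque subword token, so a
# character-blind model never directly sees its letters; a byte-level
# (character-aware) tokenizer always does.
from transformers import AutoTokenizer

subword = AutoTokenizer.from_pretrained("t5-small")              # SentencePiece subwords
byte_level = AutoTokenizer.from_pretrained("google/byt5-small")  # raw UTF-8 bytes

word = "elephant"
print(subword.tokenize(word))     # likely a single piece, e.g. ['▁elephant']
print(byte_level.tokenize(word))  # one symbol per byte: 'e', 'l', 'e', 'p', ...
```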
For image-text models, another key source of knowledge is supervised image-caption data. Even if its text encoder is character-blind, could a model learn to spell by observing the makeup of words within images? While possible, we suspect this is an inefficient paradigm for learning, as each token would need to be learned separately, and would need to appear within an image-caption pair seen in training. In section §5 we find that, indeed, this "late-stage" learning of spelling is inferior to using a pretrained character-aware text encoder.

## 3 Measuring Text Encoder Spelling Ability

Since text-to-image generation models rely on text encoders to produce the representations for decoding, we first explore the ability of text encoders in isolation, using a text-only spelling evaluation task.

## 3.1 The Wikispell Benchmark

We build the WikiSpell benchmark by sampling words from Wiktionary.3 For each example in the dataset, the input to the model is a single word, and the expected output is its spelling, generated by inserting spaces between each Unicode character:4

elephant → e l e p h a n t

3https://www.wiktionary.org/
4i.e., in Python 3: " ".join(word)

To examine the relationship between a word's frequency and a model's ability to spell it, we group words into buckets based on their frequency in the mC4 corpus (Xue et al., 2021). We create test and development sets from each bucket by sampling 1,000 words uniformly. The five buckets used are: the top 1% most common words, the 1–10% most common, 10–20%, 20–30%, and the bottom 50% (which includes words never seen in the corpus). Finally, we build a training set of 10,000 words by combining 5,000 words sampled uniformly from the bottom 50% bucket with 5,000 sampled proportional to their frequencies in mC4. We exclude words in the dev and test sets from the training set, so evaluation is always on *held out* words. Beyond English, we repeat this process for six other languages: Arabic, Chinese, Finnish, Korean, Russian, and Thai. For language selection criteria and further technical details, see Appendix D.

The WikiSpell benchmark is similar to SpellingBee, introduced by Itzhak and Levy (2022), but differs in a few key ways. First, SpellingBee is designed to probe a model's *embedding matrix*: given an embedding vector, SpellingBee seeks to output the character sequence of the corresponding vocabulary element. As such, SpellingBee's inputs are always a single token, and it does not measure spelling ability for words the model represents as multiple tokens. Second, due to how subword vocabularies are trained, model vocabularies only contain high-frequency words, and thus SpellingBee inputs are necessarily high-frequency. Finally, as inputs must be drawn from a model's vocabulary, SpellingBee training and evaluation data must be tailored to a specific model, so a dataset cannot be reused across models. In contrast, WikiSpell is model-agnostic, covers single- to many-token words, and covers high- to low-frequency words.

## 3.2 Text Generation Experiments

We use the WikiSpell benchmark to evaluate pretrained text-only models across a variety of scales.
In particular, we experiment with: T5 (Raffel et al., 2020), a character-blind encoder-decoder model pretrained on English data; mT5 (Xue et al., 2021), which is similar to T5, but pretrained on >100 languages; **ByT5** (Xue et al., 2022), a character-aware version of mT5 that operates directly on UTF-8 byte sequences; and **PaLM** (Chowdhery et al., 2022), a decoder-only model of much larger scale, pretrained predominantly on English. Experimental results from English-only evaluation are shown in Table 1, and multilingual evaluation in Table 2. Our first finding is that character-blind models T5 and mT5 perform much worse on the Top-1% most frequent words. This result may seem counterintuitive since models typically perform *better* on frequent items; however due to how subword vocabularies are trained, common words are typically represented as atomic tokens (e.g., 87% of words in the English Top 1% bucket are single-token for T5), thereby obscuring their internal makeup. Scores are a bit higher in the mid-frequency buckets, where words are typically broken into a few common tokens, and lower again in the lowest-frequency bucket, where even the subword tokens may be less frequent. The low spelling accuracies indicate that T5's encoder does not retain sufficient information ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) about the spelling of subwords in its vocabulary. Secondly, we find that for character-blind models, scale is a key factor in spelling ability. Both T5 and mT5 improve with scale, but even at XXL size, they are not particularly strong (e.g., T5-XXL's performance on common English words is only 66%). Only when character-blind models reach PaLM's scale do we start to see near-perfect spelling ability: PaLM 540B achieves >99% accuracy across all frequency buckets in English, despite the fact that it sees only 20 examples in its prompt (as opposed to the 1,000 fine-tuning examples shown to T5). However, performance is lower on other languages. Our experiments on ByT5 show that characteraware models have far greater spelling ability. ByT5's performance at Base and Large sizes is only slightly behind XL and XXL (though still in at least the mid-90% range), and the frequency of a word has little effect on ByT5's ability to spell it. These results far exceed those of (m)T5, and are comparable to the English performance of PaLM, which has >100× more parameters, and exceed PaLM's performance on other languages. These findings indicate that substantially more characterlevel information is retained by the ByT5 encoder, and in such a way that it can be retrieved from those frozen parameters as needed for the decoding task. We also test fine-tuning the full model instead of keeping the encoder frozen (also in Table 1). When ByT5's encoder is fine-tuned, performance goes to roughly 100% across all scales and frequency buckets. For T5, the effect of full fine-tuning is more mixed: on rare words, it helps a lot (65%→ 90% for T5-XXL on Bottom 50%), while for common words, it has little effect (66% → 68% on Top 1%). This tells us that for words that get broken into smaller pieces (which may be repeated across training examples), the model can memorize spelling information provided during fine-tuning, whereas for single-token words, fine-tuning provides no spelling signal since, by definition, that single token will not appear in the fine-tuning dataset. 
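Concretely, the WikiSpell task and metric used throughout this section reduce to a few lines; the sketch below mirrors the description in §3.1 (frequency bucketing and model fine-tuning are omitted) rather than the released evaluation code.

```python
# WikiSpell-style task: the input is a word, the target is its spelling with a
# space between every Unicode character; accuracy is exact match on held-out words.
def spelling_target(word: str) -> str:
    return " ".join(word)  # "elephant" -> "e l e p h a n t"

def spelling_accuracy(words, predict) -> float:
    """`predict` is any callable mapping an input word to the model's output string."""
    correct = sum(predict(w).strip() == spelling_target(w) for w in words)
    return correct / len(words)

# Usage: score each frequency bucket with any wrapped seq2seq model, e.g. (By)T5.
# per_bucket = {name: spelling_accuracy(bucket, predict) for name, bucket in buckets.items()}
```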
## 4 The Drawtext Benchmark Evaluating text-to-image models has been an ongoing topic of research, with the development of standard benchmarks from COCO (Lin et al., 2014) to DrawBench (Saharia et al., 2022), and metrics including FID (Heusel et al., 2017), CLIP score (Hessel et al., 2021), and human preferences (Saharia et al., 2022). However, there has been a lack of work on text rendering and spelling evaluation. To that end, we present a new benchmark, DrawText, designed to measure the text rendering quality of text-to-image models. The benchmark consists of two parts, assessing distinct model capabilities: 1) DrawText Spelling, which evaluates simple word rendering over a large set of English terms; and 2) DrawText Creative, which evaluates end-to-end text rendering across diverse visual settings. ## 4.1 Drawtext Spelling To measure the spelling ability of image generation models in a controlled and automatable fashion, we construct 500 prompts by sampling 100 words from each of the English WikiSpell frequency buckets (see §3.1), and plugging them into the template: A sign with the word " *" written on it.* For each prompt, we sample 4 images from the candidate model, and assess them using optical character recognition (OCR)-based metrics.5 For OCR evaluation, we use the Google Cloud Vision API, which returns all text found within an image, along with bounding box locations. The DrawText Spelling prompt tends to generate a prominently positioned sign with text, which is relatively simple for off-the-shelf OCR to identify, but if the system returns multiple bounding boxes, we only use the top-most one. Additionally, since text may be rendered across multiple lines, we postprocess the OCR output by removing newline characters that appear within a single bounding box. Finally, since text on real signs is often written in all capitals, and models often do the same regardless of how the word is written in the prompt, we ignore case when computing the spelling accuracy. ## 4.2 Drawtext Creative Visual text is not limited to mundane examples like street signs. Text can appear in many formsscribbled, painted, carved, sculpted, and so on. If image generation models support flexible and accurate text rendering, this can help designers in developing creative fonts, logos, layouts, and more. To test the ability of image generation models to support these use cases, we worked with a professional graphic designer to construct 175 diverse prompts that require rendering text in a range of creative styles and settings. The prompts vary in how much text is specified, ranging from a single letter to an entire sentence. We share these prompts in Appendix G, with the expectation that they will help the community work towards improving text rendering. Many of the prompts are beyond the abilities of current models, with state-of-the-art models exhibiting misspelled, dropped, or repeated words, as seen in Figure 3. ## 5 Image Generation Experiments In this section, we evaluate the spelling ability of text-to-image generative models with the proposed DrawText benchmark. State-of-the-art textto-image generative models consist of a text encoder plus a cascade of either diffusion models (Saharia et al., 2022) or autoregressive models (Yu et al., 2022) that map the encoded text representations to realistic images. 
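The interface this implies between a frozen text encoder and the image generation module can be sketched as follows; the checkpoint name, sequence length, and the diffusion call mentioned in the comment are placeholders for illustration, not the exact models trained in this paper.

```python
# A frozen pretrained text encoder turns the prompt into a sequence of
# representations; the image generation cascade then cross-attends to them.
import torch
from transformers import AutoTokenizer, T5EncoderModel

tok = AutoTokenizer.from_pretrained("google/byt5-small")          # character-aware choice
enc = T5EncoderModel.from_pretrained("google/byt5-small").eval()  # kept frozen

@torch.no_grad()
def encode_prompt(prompt: str, max_length: int = 256) -> torch.Tensor:
    batch = tok(prompt, max_length=max_length, padding="max_length",
                truncation=True, return_tensors="pt")
    return enc(**batch).last_hidden_state  # (1, seq_len, d_model)

cond = encode_prompt('A sign with the word "elephant" written on it.')
# A 64x64 diffusion module would then condition on `cond` at every denoising
# step (e.g. unet(noisy_image, t, encoder_hidden_states=cond) in diffusers-style
# APIs), while the text encoder's weights are never updated.
```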
In section §3 we saw that character-aware text encoders greatly outperform character-blind models on spelling in a text-only setting; in this section, we investigate whether making the text encoder character-aware improves the text rendering ability of text-to-image models. ## 5.1 Models For an apples-to-apples comparison, we train two character-blind and three character-aware image generation models. Our training closely follows the procedure of Saharia et al. (2022), with the following modifications. First, our models train for 500,000 steps, which is 5.6× fewer steps than Imagen. Second, we only train the initial 64 × 64 model, as text rendering ability can already be assessed at this scale. This allows us to forgo the training of super-resolution models. Third, rather than a mixture of datasets, we train exclusively on the publicly available Laion-400M (Schuhmann et al., 2021). This improves reproducibility and also increases the amount of visual text seen during training. Inspecting a random sample of 100 images, we found that a relatively high proportion (around 71%) of Laion images contain text, and many (around 60%) exhibit correspondence between caption text and visual text. Fourth, to prevent models from clipping text, we train on *uncropped* images with arbitrary aspect ratios. In contrast with the widely used strategy of cropping a square from the center of the image, we maintain the image's true aspect ratio by padding with black borders. The model receives an additional binary mask input indicating the padding.6 To test the effects of text encoder size and character-awareness, we vary the pretrained text encoder as follows: T5-XL and T5-XXL - Following Saharia et al. (2022), we use the (character-blind) pretrained T5 text encoders of Raffel et al. (2020). The encoder sizes are 1.2B (XL) and 4.6B (XXL). Note, T5- XXL is the same encoder used in both Imagen and the recent eDiff-I (Balaji et al., 2022). ByT5-XL and ByT5-XXL - We use the pretrained ByT5 encoders of Xue et al. (2022), with encoders sizes 2.6B (XL) and 9.0B (XXL). These differ from T5 in several regards. First, ByT5 6We apply the above strategy for 80% of training examples, and use center cropping for the remaining 20%. ![5_image_0.png](5_image_0.png) models read and write UTF-8 bytes rather than tokens from a vocabulary, so they are fully characteraware. Second, ByT5 is multilingual, trained on the mC4 corpus of over 100 languages. Third, ByT5 pretrains with sequence length 1024, twice that of T5. When encoding text as input to the image generation module, we use a sequence length of 256 bytes, compared to 64 tokens for the T5 models. Concat(T5-XXL, ByT5-Small) - We use as the text encoding a concatenation of the encodings from T5-XXL and a small ByT5 model. ByT5- Small (220M) represents a lightweight addition to the Saharia et al. (2022) model in terms of overall compute and model size (a 4.8% increase in encoder size), but makes the model character-aware. Imagen Aspect-Ratio (Imagen-AR) - To test the benefit of training on uncropped images, we fine-tune the Imagen model of Saharia et al. (2022) for an additional 380,000 steps, to 3.2M steps total, training on uncropped images with preserved original aspect ratio, as described above. Beyond these custom models, we benchmark Stable Diffusion version 1.5 (Rombach et al., 2021), DALL·E-2 (Ramesh et al., 2022), Parti (Yu et al., 2022) and Imagen (Saharia et al., 2022), all of which use character-blind text encoders. 
Among these, Imagen is most similar to our experimental models, using the same T5-XXL encoder, but trained much longer and with a larger scale of data. ## 5.2 Drawtext Spelling Results Char-aware models improve spelling. Figure 4 shows our DrawText Spelling results across 10 models, with 2,000 images sampled per model, evaluated with OCR. Accuracy is computed on the full string (i.e., no credit is given for partial matches). Across all word frequencies, characteraware models (ByT5 and Concat) outperform the rest, with 15+ point accuracy gains over ImagenAR on the most frequent words, and 30+ point gains on the least frequent words. This is remarkable given that Imagen-AR trained for 6.6× longer. Our T5 models (character-blind) provide a more controlled comparison against the character-aware models, as they differ *only* in the choice of text encoder—trained on the same dataset for the same number of steps. Here, the gains are even larger: 25+ point gains on the most frequent words and 30+ point gains on the least frequent. Notably, these gains persist even for the *smaller* ByT5-XL model, whose encoder is 43% smaller than T5-XXL. To estimate the rate of OCR errors, we manually validate a balanced set of 128 samples from T5- XXL and ByT5-XXL (see Appendix B for details). We find no false positives, but when OCR detects an error, ByT5-XXL is actually correct 34% of the time, while T5-XXL is correct 9%. This asymmetry suggests the benefit of character-aware modeling may be even greater than implied by Figure 4. ![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png) | Error Type | Examples | |-------------------------------------|--------------------------------| | Semantic | demonstrated → demonstrafied | | ✓ | inquisitiveness → inquisioness | | Homophone | accommodate → accomidate | | ✓ | Toronto → Torondo | | Add Glyph | labor → labort | | ✓ | debut → debust | | Drop Glyph | stopping → stoping | | ✗ | experiments → experimets | | Repeat Glyph | possible → posssible | | ✗ | locate → locaate | | Merge Glyphs ✗ Misshape ✗ No Text ✗ | | Char-aware models make fewer types of error. To gain a better understanding of different models' failure modes, we manually inspect our T5 and ByT5 model outputs. Table 3 illustrates common error types. Several categories of error are *only* observed in T5 models, suggesting that they stem from the encoder's lack of core spelling knowledge. In **semantic** errors, the model makes a plausible morpheme substitution, as in demonstrated → demonstrafied. In **homophone** errors, the model produces an incorrect spelling that could be pronounced similarly to the target word. This suggests that some of T5's "miraculous" spelling ability may derive from online pronunciation guides. In add glyph errors, the model inserts a letter that was absent from the target, again reflecting the model's uncertainty about a token's internal character makeup. One notable sub-type of semantic error is character-blind models "regularizing" irregular inflection, as in fought→fighted. On a handchosen set of 23 common irregular past-tense verbs (began, chose, dug, etc.), we find T5-based models erroneously add -ed in 11% of samples (see Figure 6), while our character-aware models never exhibit this type of error. This is clear evidence that character-blind models partly rely on a word's meaning (fought ⇒ PAST) and fallible patterns of morphology (PAST ⇒-ed) to predict spelling. 
Other error categories are found across all model types; these include dropped, repeated, merged, or misshapen glyphs. Given that our ByT5 encoders provide a robust spelling signal (see §3.2), we understand these errors to be "layout issues", where the image generation module has trouble shaping and positioning realistic glyphs within the image. Char-aware models reduce consistent errors. Another stark difference between our models lies in whether they *consistently* misspell a given word across multiple samples. As illustrated in Figure 5, there are many words that our T5 models misspell no matter how many samples are drawn. Again, we believe this indicates missing knowledge in the text encoder. By contrast, our ByT5 models are more likely to make sporadic errors. We quantify this observation in Figure 7 by measuring the rates at which the model is consistently right (4/4) or wrong (0/4) across all four image samples. On common words in particular (Top 1%), we see a sharp contrast in that ByT5 models are *never* consistently wrong, while T5 models are consistently ![7_image_1.png](7_image_1.png) Figure 6: Character-blind T5 models erroneously use -(e)d endings for irregular past tense verbs (targets: fought, threw, knew, *chose*). ![7_image_2.png](7_image_2.png) ![7_image_0.png](7_image_0.png) ## Wrong On 10% Or More Of Words. 5.3 Drawtext Creative Results To test our models in a more realistic user-facing setting, we sample 8 images from each of our T5 and ByT5 models on our 175 DrawText Creative prompts in Appendix G. These prompts are more diverse and challenging, with the majority targeting three or more words of rendered text. Focusing on text rendering ability,7 we find once again that character-aware models have a clear advantage. Figures 12 and 13 show non-cherrypicked samples on two prompts where T5-XXL consistently misspells one or more words. On prompts targeting longer text spans, all our models struggle, as seen in Figure 14. Nevertheless, we observe that character-aware text encoders provide a clear lift on these prompts, reducing the misspellings of words like refrain, arguing, and chimpanzees. We confirm the above observations quantitatively by comparing T5-XXL vs. Concat using the DrawBench methodology (Saharia et al., 2022), evaluated over our 175 creative prompts. Beyond the standard DrawBench metrics of fidelity and alignment (described in the following section), we ask raters Which set of images more accurately shows the text: "<target text>"? Results in Figure 8 show Concat is preferred on all metrics. ## 5.4 Drawbench Results We have shown that character-aware text encoders excel at spelling, in both text (§3) and visual (§5) domains. But does this ability come at a cost? Can these models maintain competitive image quality and text-image alignment, even on prompts that 7We note our models' overall image quality and alignment fall short of a state-of-art model like Imagen (see Figure 3). This is expected, given that our models train exclusively on the lightly curated Laion-400M dataset (Schuhmann et al., 2021), and see 5.6× fewer examples than Imagen during training. 16277 ![8_image_0.png](8_image_0.png) don't require text rendering? To shed light on this question, we run several side-by-side comparisons using the DrawBench evaluation of Saharia et al. (2022). This asks human raters to compare two models' generations of 8 images each across 200 prompts covering 11 thematic categories. We follow the procedure described in Saharia et al. 
(2022) closely, aggregating scores across 25 raters. Figure 9 shows DrawBench results of three sideby-side comparisons of character-aware models vs. T5-XXL. While image quality ("fidelity") is similar across the board, we find that *purely* character-level models (ByT5) score worse on image-text alignment, with raters preferring T5- XXL on 60% of prompts. By contrast, our Concat model closes this gap to within error bars. Thus, this "hybrid" character-aware model is able to greatly improve text rendering (Figure 4), without significantly hurting performance elsewhere. Appendix C provides a per-category breakdown and further analysis; while the character-aware models excel on the DrawBench text category, the ByT5 models are dispreferred in most other categories. Through manual inspection, we find the ByT5 models are more prone to ignore information in the prompt, for example leaving out a mentioned object, or choosing a canonical color over a requested one. One possible explanation for this behavior is that we did not tune the *guidance* weight parameter used at inference time (Saharia et al., 2022), using a fixed value of 30 throughout. Increasing this parameter is known to boost imagetext alignment, but at the cost of diversity. It may be that character-level models benefit from higher guidance values than token-based models. Another possibility is that the ByT5 models have a shallower understanding of English language due to their multilingual nature—as ByT5 was exposed to roughly 70× less English than T5 during pretraining.8 Given this difference, we should also expect to see corresponding gains on non-English languages. We confirm this expectation through preliminary results in Appendix A. ## 6 Conclusion In this paper, we set out to better understand what is needed for image generation models to reliably render well-formed visual text. Using our novel WikiSpell and DrawText benchmarks, we were able to precisely quantify the effects of characterawareness and other design choices on spelling ability in both the text and visual domains. We found that character-aware text encoders confer large gains on spelling, and when used within an image generation model, these gains translate into improved visual text rendering. However, using exclusively character-level representations deteriorated overall text-image alignment—at least when evaluating our multilingual ByT5 text encoder on English prompts with untuned guidance weight. To resolve this, we found that a hybrid model combining token-level and character-level signals offered the best of both worlds: dramatically improving visual text without significantly affecting alignment. While we saw substantial improvements on DrawText Spelling accuracy (75% →94% on common words and 47% →83% on rare words), some failure modes remain unaddressed. Even our strongest models were observed to occasionally drop, repeat, or merge letters within a word, or words within a phrase. Our results strongly suggest that resolving these issues will require orthogonal improvements outside the text encoder, specifically changes to the image generation module. As a secondary finding, we demonstrated for the first time that, with sufficient scale, even models lacking a direct character-level view of their inputs can *infer* robust spelling information through knowledge gained via web pretraining— "the spelling miracle". 
While remarkable, this finding is less immediately practical, as it requires models over 100B parameters, and even these did not generalize well beyond English in our experiments. ## Limitations While we establish the "miraculous" ability of character-blind models to induce robust spelling 8The models were trained on the same number of tokens, but only 6% of ByT5 training was on English, and we estimate 4 UTF-8 bytes per T5 token. information through large-scale web pretraining, our work does not attempt to identify the mechanisms or sources through which this information is learned. Possible sources within web corpora include: dictionaries containing phonetic pronunciation guides, alphabetically ordered lists, typos and other misspellings, and examples of spelling words with dashes or spaces between every character. Linguistic phenomena that may aide in inducing spelling knowledge include words with predictable morphemic makeup, and cases where meaningform relation is non-arbitrary, contra Saussure's "semiotic arbitrariness". We refer the reader to Itzhak and Levy (2022) and Kaushal and Mahowald (2022) for work in this direction. Most of our image generation experiments are limited to English. We present preliminary results in Appendix A showing that our ByT5-based models have stronger multilingual understanding than T5. However it would be valuable to test this further, and to explore training image generation models on multilingual image-caption datasets, as opposed to merely using a pretrained multilingual text encoder. Ideally, it would be possible to conduct controlled comparisons between pretrained text encoders that differ only in one regard, to isolate all factors contributing to performance. However as pretraining large language models is resource intensive, we were only able to use off-the-shelf text encoders, which often differ along multiple axes. In our text-only experiments, we isolated the contributions of character-awareness (ByT5 vs. mT5/T5) and multilinguality (ByT5/mT5 vs. T5). However, in our image generation experiments, these factors were conflated, as we had limited resources for training new models. Still, the fact that ByT5based image generation models outperform T5 despite being multilingual (which often degrades performance on English-only tasks) strongly suggests that character-awareness is the key factor for spelling ability. Another limitation is that we focused on image generation models that leverage *frozen* pretrained text encoders. This enabled straightforward experimentation by swapping encoders and retraining the image generation module. However, it remains to be seen whether our results extend to settings where the text encoder is trained along with the rest of the model, as in Yu et al. (2022). ## Ethics Statement We note our image-caption training data comes from Laion-400M (Schuhmann et al., 2021), which is uncurated and known to contain harmful biases and offensive content. We hope to utilize and contribute to safer and better curated datasets in the future, as well as to develop improved techniques for debiasing and detoxifying existing models. A potential risk for image generation models is that they can be used for creating misleading and harmful content. Our work on improving text rendering could aide the creation of fake signs and other misleading images containing visual text. 
With the wide-spread availability of image generation models, we expect that improving education around misinformation and adopting better digital signature mechanisms will be important countermeasures. ## Acknowledgements We thank Jason Baldridge, Jon Clark, Noah Fiedel, Linting Xue, and Jiahui Yu for helpful discussion and comments on an earlier draft. We thank Sarah Pratt for validating findings on Stable Diffusion models. ## References Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, and Ming-Yu Liu. 2022. eDiff-I: Text-toimage diffusion models with an ensemble of expert denoisers. *CoRR*, abs/2211.01324. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. arxiv:2204.02311. Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. 2022. Canine: Pre-training an efficient tokenization-free encoder for language representation. *Transactions of the Association for Computational Linguistics*, 10:73–91. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Philip Gage. 1994. A new algorithm for data compression. *C Users J.*, 12(2):23–38. Alex Graves. 2013. Generating sequences with recurrent neural networks. *CoRR*, abs/1308.0850v5. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A reference-free evaluation metric for image captioning. *CoRR*, abs/2104.08718. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. *Advances in neural information processing systems*, 30. Itay Itzhak and Omer Levy. 2022. Models in a spelling bee: Language models implicitly learn the character composition of tokens. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5061–5068, Seattle, United States. Association for Computational Linguistics. Ayush Kaushal and Kyle Mahowald. 2022. What do tokens know about their characters and how do they know it? 
In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2487–2507, Seattle, United States. Association for Computational Linguistics. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In *European conference on computer vision*, pages 740–755. Springer. Sabrina J. Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Gallé, Arun Raja, Chenglei Si, Wilson Y. Lee, Benoît Sagot, and Samson Tan. 2021. Between words and characters: A brief history of open-vocabulary modeling and tokenization in NLP. *CoRR*, abs/2112.10508. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)*, pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4996– 5001, Florence, Italy. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-totext transformer. *Journal of Machine Learning Research*, 21(140):1–67. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8821–8831. PMLR. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2021. Highresolution image synthesis with latent diffusion models. *CoRR*, abs/2112.10752. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. LAION-400M: open dataset of clip-filtered 400 million image-text pairs. *CoRR*, abs/2111.02114. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In *Proceedings of the 28th International* Conference on International Conference on Machine Learning, ICML'11, page 1017–1024, Madison, WI, USA. Omnipress. Yi Tay, Vinh Q. 
Tran, Sebastian Ruder, Jai Prakash Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. 2021. Charformer: Fast character transformers via gradient-based subword tokenization. *CoRR*, abs/2106.12672.

Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2022. ByT5: Towards a token-free future with pre-trained byte-to-byte models. *Transactions of the Association for Computational Linguistics*, 10:291–306.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.

Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. 2022. Scaling autoregressive models for content-rich text-to-image generation. *arXiv preprint arXiv:2206.10789*.

## A Multilingual Results

As ByT5 is a multilingual model covering 100+ languages, we are interested to see if image generation models built on ByT5 deliver improved performance over T5 on non-English languages. While the text encoder itself is multilingual, it is not obvious whether this is sufficient to produce a multilingual image generation model. The image caption dataset used for training in all of our experiments is Laion-400M (Schuhmann et al., 2021), which we estimate through language ID detection to consist of 95% English captions, with only minimal coverage (<0.1%) of some widely spoken languages, such as Arabic and Hindi.

To test for multilingual understanding, we translate two English prompts to 11 languages using Google Translate, and feed the outputs to our models. As can be seen in rows 1–2 of Figure 10, our T5-XXL model demonstrates basic understanding of five high-resource European languages (German, French, Spanish, Portuguese, Russian).9 However, in the remaining languages (Greek, Hindi, Arabic, Chinese, Japanese, Korean), T5 appears to ignore the caption completely. By comparison, our ByT5-XXL model exhibits understanding across all 11 languages. Given its limited training on multilingual captions, we interpret this ability as due to the pretrained ByT5 encoder's alignment of representations across languages. If the encoder already embeds similar prompts into a shared space that factors out the contribution of language, then the image generation model should be able to learn from just a handful of examples how to map any language seen in pretraining into the space of images.10

If this explanation is correct, it also suggests that rendering text in different scripts will require more than just a multilingual encoder. To learn the glyph shapes, variants, and fonts used for a given script, we should expect to need to train models on a large source of visual text in that script. Indeed, rows 3 and 6 of Figure 10 show that neither of our models can map prompt text onto visual text in non-Latin scripts. While our ByT5 model captures the intent to draw a sign across all languages, it is unable to render the words for dog in Greek, Russian, Chinese, and so on, presumably because it has had little visual exposure to the glyphs making up these words.11

![12_image_0.png](12_image_0.png)

9A few minor problems are visible: swapping dog → cat in Portuguese, and not rendering a ball in Russian.

10We observe in several examples that the prompt language can bias the model towards culturally-relevant visual interpretations. For example, the Chinese prompt for *A photo of an old house* (一张老房子的照片) produces a house with a curved roof. It would be interesting to further explore the extent of these biases and the degree to which they can be overcome where unwanted.

11Interestingly, in Russian, the model is able to nearly successfully transliterate собака (dog) to Latin script, as sopaaka. We suspect this transliteration ability is learned during the text encoder pretraining (Pires et al., 2019).
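The 95% English figure above is obtained by running language identification over the caption corpus. The paper does not name the language-ID tool, so the following is only a minimal sketch under our own assumptions: it uses the open-source `langdetect` package and a hypothetical `captions.txt` file with one caption per line.

```python
# Minimal sketch (our assumptions, not the paper's pipeline): estimate the language
# distribution of an image-caption corpus with the `langdetect` package.
from collections import Counter

from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect deterministic across runs


def estimate_language_distribution(path: str, sample_size: int = 100_000) -> Counter:
    """Count detected languages over the first `sample_size` captions in `path`."""
    counts = Counter()
    with open(path, encoding="utf-8") as captions:
        for i, line in enumerate(captions):
            if i >= sample_size:
                break
            caption = line.strip()
            if not caption:
                continue
            try:
                counts[detect(caption)] += 1
            except Exception:  # very short or ambiguous captions can fail detection
                counts["unknown"] += 1
    return counts


if __name__ == "__main__":
    dist = estimate_language_distribution("captions.txt")  # hypothetical file name
    total = sum(dist.values())
    for lang, n in dist.most_common(10):
        print(f"{lang}: {100 * n / total:.1f}%")
```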
## B OCR Error Estimation

To estimate the rate of false positives and false negatives due to OCR errors, we sample 32 examples labeled *correct* and 32 labeled *incorrect* for each of T5-XXL and ByT5-XXL, and perform a manual validation. In our sample, we find no false positives: when OCR detects the correct word, it is always correct. However, we observe false negatives for both models, including cases where OCR fails to detect the text (e.g., due to it being too small), or misreads a character. For ByT5-XXL, we find that 34% of examples labeled by OCR as *incorrect* are actually correct. For T5-XXL, this error rate is lower at 9%. This asymmetry suggests that the benefit of character-aware modeling may be even greater than implied by our results in Figure 4.

## C Per-Category DrawBench Analysis

To understand the alignment scores in more detail, we report per-category preference scores in Figure 11. In line with our DrawText Spelling results, the character-aware models are always preferred in the text category—21 prompts testing the ability to render 7 short phrases in 3 visual styles. The ByT5 models are also preferred in the count category, which tests prompts like Four dogs on the street. However, they are dispreferred in nearly all other cases, and perform particularly poorly on the color category. Through manual inspection, we find that in this category, the ByT5 models are more prone to ignore information in the prompt, for example leaving out a mentioned object, or choosing a canonical color over a requested one (e.g. a yellow banana instead of a red one).

## D WikiSpell Details

We select six languages to cover diverse properties that could affect the ability of models to learn spellings: **Arabic**, written in the Arabic alphabet, has non-concatenative morphology; **Chinese** is written in Simplified and Traditional Chinese scripts, which are logographic and do not use whitespace to separate words; **Finnish**, written in the Latin alphabet, has rich inflectional and derivational suffixes, and word stems often change when suffixes are attached; **Korean**'s writing system, Hangul, has a huge number of characters since alphabetic features are arranged into syllabic blocks, which Unicode represents as single characters; **Russian**, written in the Cyrillic alphabet, has substantial fusional morphology, and uses inflection for case-marking and agreement; and **Thai**, written in the alphabetic Thai script, is an analytic language, but does not use whitespace between words.
Further implementation details are as follows: - Example Python 3 code for transforming a word into its spelling: def to_spelling(word: str) -> str: return " ".join(word) ![13_image_0.png](13_image_0.png) - Since we want each entry to be a single word, we exclude entries that contain any (Unicode) whitespace, that are entirely punctuation/symbols (i.e., all characters are from Unicode categories P and/or S), that are longer than 30 characters, or that have a "part-ofspeech" *Proverb*. - For efficiency, word frequencies are computed on *subsets* of the full mC4 corpus. For languages other than English, this is a sample of 1M documents from that language's section of mC4. For English, since it has such a long tail of words in Wiktionary, we use the first 140M documents in mC4's English section. - For Arabic, English, Finnish, Korean, and Russian, word-counting is performed by splitting document texts using the following delimiters: ?!/:;,\"&()[]{}<>ˋ, plus any Unicode whitespace. For Chinese and Thai, since they do not use whitespace to separate words, we instead count the number of documents in which the word appeared as a substring. ## E Additional Drawtext Creative Samples We show additional samples on DrawText Creative prompts in Figures 12, 13 and 14. ## F Representative Drawtext Spelling Samples We show generated image examples from all 5 models (T5-XL, T5-XXL, ByT5-XL, ByT5-XXL, Concat) in Figures 15 16, 17, 18, 19, 20, 21, 22, 23, 24. Samples are selected based on model's performance. Figures 15 - 20 are selected words that T5 models tend to get wrong (regardless of ByT5's performance). Figures 21 - 24 are selected words that ByT5 models tend to get wrong (regardless of T5's performance). ## G Drawtext Creative Prompts We present below 175 creative prompts targeting rendered text of various lengths: one letter (10), one word (50), two words (25), and three or more words (90). ## Prompts Used In Figure 3 1. Studio shot of book shelf in the shape of letter G, museum quality, white background. 2. 3-d Letters "DILL" made from dill, studio shot, green background, centered on a page ![14_image_0.png](14_image_0.png) ![15_image_0.png](15_image_0.png) ![16_image_0.png](16_image_0.png) ![17_image_0.png](17_image_0.png) Figure 15: Spelling examples from both character-blind (T5) and character-aware (ByT5 and Concat) models. Erroneous spellings are outlined in red. Prompt: A sign with the word "0-6-2" written on it. ![17_image_1.png](17_image_1.png) Figure 16: Spelling examples from both character-blind (T5) and character-aware (ByT5 and Concat) models. Erroneous spellings are outlined in red. Prompt: A sign with the word "barratrously" written on it. ![18_image_0.png](18_image_0.png) Figure 17: Spelling examples from both character-blind (T5) and character-aware (ByT5 and Concat) models. Erroneous spellings are outlined in red. Prompt: A sign with the word "depositories" written on it. ![18_image_1.png](18_image_1.png) ![19_image_0.png](19_image_0.png) Figure 19: Spelling examples from both character-blind (T5) and character-aware (ByT5 and Concat) models. Erroneous spellings are outlined in red. Prompt: A sign with the word "kilopascals" written on it. ![19_image_1.png](19_image_1.png) ![20_image_0.png](20_image_0.png) Figure 21: Spelling examples from both character-blind (T5) and character-aware (ByT5 and Concat) models. Erroneous spellings are outlined in red. Prompt: A sign with the word "Yongchuan" written on it. 
![20_image_1.png](20_image_1.png) ![21_image_0.png](21_image_0.png) Figure 23: Spelling examples from both character-blind (T5) and character-aware (ByT5 and Concat) models. Erroneous spellings are outlined in red. Prompt: A sign with the word "constructivists" written on it. ![21_image_1.png](21_image_1.png) 3. Word "coffee" made from coffee beans, studio shot. 4. studio shot multicolored fur in the shape of word "hello", in a furry frame, white background, centered 5. Photo of a robot lecturer writing the words "Representation Learning" in cursive on a blackboard, with math formulas and diagrams. 6. studio close-up shot of an antique book with 'knowledge is power' painted in gold on the cover in thick flowing brushed calligraphy 7. portrait of a parrot is holding a sign with text "no parrots were harmed in the making of this presentation" 8. A sign that says "Please refrain from arguing with the chimpanzees". 10. transparent water drops exploding under water in the shape of word "water", under water 11. a drawing of a badger made of mushrooms, with the word "mushroom" written above in glowing letters 12. a 17th century french baroque painting of a huge female lion, with the word "meow" written in a speech bubble coming from her mouth 13. a fun and colorful illustration of a waterfall, with the word "waterfall" in the style of a children's book 14. Letters "VOLUME" fully made from rainbow smoke, black background, centered, sceensaver. 15. dslr, 3-d word "rainbow" with rainbow fur, white background 16. a painting of a field of daisies, with the word "danger" written on them in red spray paint 17. a bottle of hair gel with the label "flawless" 18. Topographical letters Contour made of a layered paper, muted pastel colors 19. a logo for the company "brainboost", where the letters look like a brain 20. a logo for the company "imagine", where the letters look like hands pointing up 21. A vintage postage stamp showing a painting of the Golden Gate Bridge and the text "California". 22. a plate of spicy food with the word "spicy" written in flowing cursive 23. a gold and black logo for the company "moneymoneymoney", which looks like dollar signs 24. A rendered 3D model of the word "Dependable" made out of granite. 25. a volcano erupting, with the text "magma" in red 26. a photo of a prison cell with a window and a view of the ocean, and the word "freedom" painted on the glass 27. a bowl of alphabet cereal, with the message "smackeroo" written in the bowl with the cereal letters 28. Studio shot of book shelf in the shape of letters READ, museum quality, white background. 29. Studio shot of sculpture of text "cheese" made from cheese, with cheese frame. 30. a landscape of the coyote point national wildlife refuge in arizona, with a coyote sitting on a rock, with the word "coyote" written in sunrise colors 31. A professional logo for the crypto trading platform "SaltMine". 32. The word "exquisite" written in modern calligraphy. 33. A bowl of tomato soup with pasta letters that read "Delicious". 34. intricate and highly detailed white paper cut out art of a word "SNOW", a storybook illustration, paper cut out, standing in a grotto, made out of white paper, loss of inner self, opening door, hides in the shadows of trees, lithograph, a painting of white silver 35. 3-d letters "dessert" made from desserts, arranged on a plate, studio shot 36. studio shot of word "BEE" made from bees, white background, in a frame made from bees 37. 
The logo for Robotrax, with metallic letters arranged in the shape of a robot. ## Drawtext Creative Prompts: 1 Letter Drawtext Creative Prompts: 1 Word 1. Studio shot of book shelf in the shape of letter G, museum quality, white background. 2. letter "c" made from cactus, high quality photo 3. Spirograph shape letter M, rainbow lines, white background. 4. Closeup shot of light magenta, blue and paint brushstrokes of very wide translucent overlapping plastic in the shape of letter F, over white background. 5. The lowercase letter "b" made out of fire. 6. Slopy minimal continued line pencil hand drawing of letter Z, white background. 7. a tower with a huge "w" on the side, from the perspective of a person standing at the base of the tower 8. 3-d letter R made from thin lines connected with dots, white background. 9. Muted pastel magenta colored paint swirled in white paint in the shape of letter X, globular paint in liquid. 10. Minimal sculpture of letter W made from light metallic iridescent chrome thin wire, 3-D render, isometric perspective, ultra-detailed, dark background. 1. Drops of pastel rainbow colored paint exploding under water in letters "color" shape, pastel rainbow gradient background 2. 3-d Letters "DILL" made from dill, studio shot, green background, centered on a page 3. Word "coffee" made from coffee beans, studio shot. 4. studio shot multicolored fur in the shape of word "hello", in a furry frame, white background, centered 5. Wide lens shot, chunky, organic, colorful, letters "colorful" made from many furry spheres of different sizes, 3-d rendering, centered, studio shot, middle of square canvas 6. A logo for the company EcoGrow, where the letters look like plants. 7. a green-colored luxury car with a "green" sticker in the back window 8. A blackboard with the word "multiplication" written in flowing cursive. 9. beautiful isometric word "DRAW" entirely made of pencils, soft smooth lighting, pastel colors, trending on polycount, modular constructivism, blue background, physically based rendering, centered. 38. chunky, organic, colorful, letters "fuzzy" made from many furry spheres of different sizes, 3-d rendering, centered in the frame 39. photo of a dark cave with the word "crazy" carved into the wall, with a yellow light shining through the cave entrance 40. a pair of scissors pointing down, and a computer with the word "delete" on the screen 41. studio shot, word "wow" in script made from rainbow colored fur, in a furry frame, white background, centered 42. Word "broken" made from broken shattered black glass, centered. 43. a black and white photo of a saxophone with the word "jazz" written in flowing cursive 44. Muted pastel multi colored paint swirled in white paint in the shape of letters "swirl", globular paint in liquid 45. a logo for the company "quantum", where the "q" looks like a lightning bolt 46. dslr shot of a pair of black and red sneakers with the word "punk" written in white. the background is a dark blue 47. a logo for the company "diamonds", with a diamond in the shape of a heart 48. a logo for the company "birthdaypix", where the letters look like birthday candles 49. a fork with the word "salad" engraved on it in a calligraphic font 50. 3-d word "bricks" with brick texture made from real bricks ## Drawtext Creative Prompts: 2 Words 1. Photo of a robot lecturer writing the words "Representation Learning" in cursive on a blackboard, with math formulas and diagrams. 2. a sign that reads "no dogs" but with a dog smiling and wagging its tail 3. 
a globe with the text "planet earth" in bold letters, with the continents in bright colors 4. a photo of a sea of roses all around, and a sign in the distance that says "danger: minefield" 5. giraffe toothbrush made from wood, with the words "giraffe" and "toothbrush" in rainbow color 6. An airplane flying over a city, with the message "Support Skywriters" written in smoke trails. 7. A photo of a panda giving a presentation in a large conference room, with text 'Diffusion Models', in the style of van Gogh 8. Two llamas dancing the mambo, pointing to a sign that says "Llama Mambo". 9. A hand painted wooden "Pineapple Club" sign in the shape of a pineapple, hanging outside a bar. 10. a logo for the company "ethereal media", where the letters look like a painting being created 11. The cover for the album 'Elusive Interludes' by the band The Melting Snowmen. 12. A Scrabble board showing the words "optimize" and "pattern". 13. flowers in a beautiful garden with a text "peace" made by the flowers, with a text "tensions" on the clouds in the sky 14. a detailed drawing, of words "Vintage lettering", letterism, heavy-gauge filigree, inhabited initials, medium: black pencil, revolver, ecopunk rococo, photo taken of an epic intricate, centered 15. Bananas arranged on a picnic table to form the message "That's bananas!" 16. An antique bottle labeled "Energy Tonic". 17. photo of a helicopter with the text "helicopter tours" on the side landing on a helipad in a valley with a river, trees, and mountains in the background 18. photo of a sign with "one way" 19. a sculpture of a brain made from wire and paper, with the words "deep thoughts" written into the material of the brain 20. a logo for a grocery store chain with the name "grocery land", with the g and the y are made of fruits and vegetables 21. studio shot of sculpture of text "unlock creativity" made from colorful thin wires 22. studio shot of a sculpture of a pair of shoes made of colorful wires and the text "unlock creativity" 23. a vintage image of the las vegas strip with the text "las vegas" in bold block letters 24. A robot writing "Ethics 101" in chalk on a blackboard. 25. a yellow saxophone in a rainbow-colored mist with the words "funky mist" that looks like musical clouds of smoke ## Drawtext Creative Prompts: 3+ Words 1. studio close-up shot of an antique book with 'knowledge is power' painted in gold on the cover in thick flowing brushed calligraphy 2. portrait of a parrot is holding a sign with text "no parrots were harmed in the making of this presentation" 3. words "Struck by Lightning Twice." made from lightning 4. a photograph of a field of dandelions with the text "dandelions are the first to go when the lawn is mowed" 5. a composition of the taj mahal in the center of a gold leaf mandala, with the words "place of honor" centered at the bottom 6. A poster titled "Quails of North America", showing different kinds of quails. 7. a cartoon of a cat with a thought bubble saying "this is so weird" 8. a parrot on a pirate ship, with a parrot wearing a pirate hat, and the caption "i'm the captain now" 9. Generative art of words "Time is temporary, everything is temporary", viscous smoke made from dots, rivers, graph design, white background. 10. Studio shot of words "the food is terrible and the portions are too small" made from hotdogs, museum quality, framed photo, white background. 11. a picture of a powerful-looking vehicle that looks like it was designed to go off-road, with a text saying "i'm a truck, not a car" 12. 
a minimalistic version of a forest with a sign saying "help the forest" in the foreground 13. a map of the world with the text "the world is your oyster" in the middle 14. cartoon of a dog in a chef's hat, with a thought bubble saying "i can't remember anything!" 15. A retro coffee ad with the text 'Coffee is what i like'. 16. different colored shapes on a surface in the shape of words "Life is like a rainbow", an abstract sculpture, polycount, wrinkled, flowing realistic fabric, psytrance, cartography, smooth shading techniques, marble skin, old internet art, camouflage scheme, art », medium poly, smoothened 17. the view from one end of a bench in a park, looking at the sky, with the text "imagine the outcome" in the sky 18. a giant shoe, with the caption "shoe for hokey pokey" 19. A newspaper with the headline "Local pig eats prize pumpkin", and a photo showing the half-eaten pumpkin. 20. A storefront with "The world's best deli" written on it, centered 21. Grape vines in the shape of text 'open your mind' sprouting out of a head with flowers and butterflies. DSLR photo. 22. a plate with a single oyster, with a fork and knife sticking out of the oyster, with a caption that says "oysters for lunch" 23. dslr portrait of a robot is holding a sign with text "i am not a robot" 24. Studio shot of words "I like coffee because it gives me the illusion that I might be awake." made from coffee liquid, museum quality, white background. 25. A hastily handwritten note that says "I'll be back at 4:00" taped to a fridge. 26. A large recipe book titled "Recipes from Peru". 27. marquee billboard with "my fear of moving stairs is escalating" 28. shadow of a stone, taken from the point of view of an ant, with the caption "look at that shadow!" 29. a pumpkin with a mustache and a monocle and a top hat, with the text "you can get rich too" in a speech bubble 30. a cartoon of a dog holding a telescope looking at a star with a speech bubble saying "i wonder if there's a dog on that planet" 31. a blueprint of a house, with a triangle for the roof, a square for the walls, and a rectangle for the floor, and with the message "this house is built on the principles of abstraction" 32. a sunflower field with a tractor about to run over a sunflower, with the caption "after the sunflowers they will come for you" 33. text "balloons are flying" made from rainbow balloons, pastel background 34. the hubble telescope and the milky way, with the text "the universe is a mystery, but we are here to solve it" 35. a heart with the text "i love you", with the letters "love" made of rainbow colors 36. studio shot of beautiful textbook with title "how to be a manager of managers", white background 37. A decorative greeting card that reads "Congratulations on achieving state of the art!" 38. a painting of a cornfield with the words "feed the nation" in simple letters and colors 39. A sign that says "Please refrain from arguing with the chimpanzees". 40. a cartoon of a turtle with a thought bubble over its head with the words "what if there was no such thing as a thought bubble?" 41. "Fall is here" written in autumn leaves floating on a lake. 42. a crab sitting on a beach with a surfboard, the sun is a giant orange, and the sky is a rainbow, and the crab is thinking "you are all that matters" 43. the city of toronto as seen from an airplane, with a giant cn tower in the middle of the frame, with the text "the cn tower" in comic sans 44. a cartoon of a hippo with a speech bubble saying "i'm a hippo, what do you want?" 45. 
a lobster in a suit and tie, holding a microphone, with the caption "lobster says what?" 46. book with "surgery made easy" 47. art installation of a chair with the text "i got nothin" carved into the backrest 48. a painting of a landscape, with a handwritten note that says "this painting was not painted by me" 49. a picture of a bruised apple with the text "apples are good for you" in a fancy font 50. A photo of a corgi with a sign that says "I am not a real corgi". 51. Words "It takes AI and rain to make a rainbow" black background, holography, ((neon colors)), colorful swirly magical ripples, bruh moment, intricate white and gold neon, 3d cg, photorelistic. 52. a black and white logo on words "Every artist was first an amateur." a white background, a wireframe diagram, generative art, branches growing as hair, tropical reef, trademarks and symbols, in a forest, ios icon, composed of random limbs, stone carving, done in the style of matisse, realms, terminals 53. picture of two hands, one holding a heart, the other holding a lightning bolt, with the text "love is power" 54. beautiful photo of the alps, with the caption "the best mountains could do" 55. a pencil sketch of a tree with the title "nothing to tree here" 56. a dark forest with a single light in the distance, and the text "i've come to talk with you again" 57. a circle with the text "infinity makes me happy", in a font that looks like it was written by hand 58. studio shot of vines in the shape of text 'knowledge is power' sprouting, centered 59. a photo of a beautiful field of poppies with a sign that says "no photos please" 60. a grumpy sunflower with a "no solar panels" sign 61. A meme showing a cat attacking a shoe, with the message "I own your sole". 62. a test tube with a drop of liquid in it, with the text "we've found water on mars!" 63. a scene with a city in the background, and a single cloud in the foreground, with the text "contemplate the clouds" in rounded cursive 64. a picture of a dog and a cat with their heads poking out of a cage with a sign saying "no pets allowed" 65. a 3d model of a 1980s-style computer with the text "my old habit" on the screen 66. a mouse with a flashlight saying "i'm afraid of the dark" 67. A photo of a rabbit sipping coffee and reading a book. The book title "The Adventures of Peter Rabbit" is visible. 68. clown is holding a paper sign with "Even in hard times there's a possibility to have fun." 69. newspaper with the headline "aliens found in space" and the text "the truth about everything now challenged" 70. a dog with a speech bubble with the text "woof woof" and a translation speech bubble with the text "other dogs do vex us" 71. robot on a butter food processing line, with robot looking dejected, with an overhead red light indicating error, with robot saying "i can't believe it's not butter" 72. a graffiti art of the text "free the pink" on a wall 73. a lizard sitting on a baseball field home plate, with the words "made it safe" in a speech bubble 74. a picture of multiple trees at various stages of development, with the caption "growth is a continuous process" 75. a purple flower with a crown on its head and a speech bubble that says "i am the purple flower!" 76. a 1950s-style robot with a giant head and a body shaped like a rocket, with the caption "wow, a real spaceman!" 77. A professionally designed logo for a bakery called Just What I Kneaded. 78. 
Minimal sculpture of word "this is the future" made from light metallic iridescent chrome thin wire, 3-D render, isometric perspective, ultra-detailed, dark background. 79. pillow in the shape of words "ready for the weekend", letterism, funny jumbled letters, [ closeup ]!!, breads, author unknown, flat art, swedish, diaper-shaped, 2000, white clay, surreal object photography 80. plant in a fancy pot with a "do not touch" sign on it 81. a picture of the earth with the words "save the earth" in a circle 82. scholarly elephant reading a newspaper with the headline "elephants take over the world" 83. photo of a sign with "having a dog named shark at the beach was a mistake" 84. photo illustration of the earth being struck by multiple lightning strikes that merge, with the caption "astonishment at the speed of light" 85. a photo of a fish tank with a fish inside, with the text "tank you for visiting!" 86. the words "Art is never finished, only continued" in paint splatters on a white background, graffiti art, edge of nothingness love, muddy colors, colored woodcut, beautiful, spectral color 87. photo of a restaurant "the gas station" 88. A t-shirt with the message "There is no planet B" written on it. 89. a close up of a figurine of toothpaste tube, a 3D render, candy pastel, with text "brush your teeth" on the tube 90. A hand-drawn blueprint for a time machine, with the caption "Time Traveling Device". ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? "Limitations" section ✓ A2. Did you discuss any potential risks of your work? "Ethics Statement" section ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract, and Section 1 Introduction. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Created WikiSpell and DrawText Creative prompt datasets. ✓ B1. Did you cite the creators of artifacts you used? Section 3.1, Section 3.2. ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? If accepted we will release our prompts and evaluation datasets under a permissive license, most likely CC-BY. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 1 Introduction discussed how image generation models are used to render visual text. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3.1 and Appendix D ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. 
For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3.1. ## C ✓ **Did You Run Computational Experiments?** Section 5.1 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 3.2, Section 5.1. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3.2, Section 5.1. ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sections 5.2, 5.3, 5.4. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3.2 D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 5.4. ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? We closely follow the procedure of Saharia et al.(2022), which we cited. ✗ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? We closely follow the procedure of Saharia et al.(2022), which we cited. ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? We closely follow the procedure of Saharia et al.(2022), which we cited. ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Our institution does not have an IRB. ✗ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? We closely follow the procedure of Saharia et al.(2022), which we cited.
suwaileh-etal-2023-idrisi
{IDRISI}-{RA}: The First {A}rabic Location Mention Recognition Dataset of Disaster Tweets
https://aclanthology.org/2023.acl-long.901
Extracting geolocation information from social media data enables effective disaster management, as it helps response authorities; for example, in locating incidents for planning rescue activities, and affected people for evacuation. Nevertheless, geolocation extraction is greatly understudied for the low resource languages such as Arabic. To fill this gap, we introduce IDRISI-RA, the first publicly-available Arabic Location Mention Recognition (LMR) dataset that provides human- and automatically-labeled versions in order of thousands and millions of tweets, respectively. It contains both location mentions and their types (e.g., district, city). Our extensive analysis shows the decent geographical, domain, location granularity, temporal, and dialectical coverage of IDRISI-RA. Furthermore, we establish baselines using the standard Arabic NER models and build two simple, yet effective, LMR models. Our rigorous experiments confirm the need for developing specific models for Arabic LMR in the disaster domain. Moreover, experiments show the promising domain and geographical generalizability of IDRISI-RA under zero-shot learning.
# IDRISI-RA: The First Arabic Location Mention Recognition Dataset of Disaster Tweets

Reem Suwaileh
Computer Science and Engineering Department, Qatar University, Doha, Qatar
rs081123@qu.edu.qa

Muhammad Imran
Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar
mimran@hbku.edu.qa

Tamer Elsayed
Computer Science and Engineering Department, Qatar University, Doha, Qatar
telsayed@qu.edu.qa

## Abstract

Extracting geolocation information from social media data enables effective disaster management, as it helps response authorities, for example, in locating incidents for planning rescue activities, and affected people for evacuation. Nevertheless, geolocation extraction is greatly understudied for *low resource languages* such as *Arabic*. To fill this gap, we introduce IDRISI-RA, the first publicly-available Arabic Location Mention Recognition (LMR) dataset that provides human- and automatically-labeled versions in the order of thousands and millions of tweets, respectively. It contains both location mentions and their types (e.g., district, city). Our extensive analysis shows the decent geographical, domain, location granularity, temporal, and dialectical coverage of IDRISI-RA. Furthermore, we establish baselines using the standard Arabic NER models and build two simple, yet effective, LMR models. Our rigorous experiments confirm the need for developing specific models for Arabic LMR in the disaster domain. Moreover, experiments show the promising domain and geographical generalizability of IDRISI-RA under zero-shot learning.

## 1 Introduction

Worldwide, and in the Arab world, Twitter has played a critical operational role in crisis management. The Beirut explosion in 2020 is an excellent case in point, where on-site individuals started intuitively responding to each other using geotagged tweets. What makes tweets invaluable is the presence of location mentions at different granularity (Grace et al., 2018; McCormick, 2016; Reuter et al., 2016; Kropczynski et al., 2018). Response authorities exploit this geographical information to effectively manage emergencies using *Crisis Maps*.

Although the geographical dimension adds situational and operational value to Twitter data, on 18 June 2019 Twitter discontinued the geotagging feature in tweets.1 This necessitates the development of automatic geolocation tools. Nevertheless, for the Arabic language, which is a low-resource language, the LMR task is severely understudied due to several factors, including the absence of a unified evaluation framework constituting annotated datasets, a representative set of baselines, and fair evaluation metrics.

To address these barriers, we focus on the Location Mention Recognition (LMR) task and introduce IDRISI-RA,2 the first human-labeled dataset comprising Arabic tweets from 7 disaster events (*gold* annotations). We also introduce the first large-scale set of automatically-labeled tweets (*silver* annotations) from 22 disaster events that cover the whole Arab world. IDRISI-RA covers the most frequently occurring disaster types in the Arab world. More importantly, it is labeled with two annotation types: location mentions (i.e., toponym textual spans) and their types (e.g., city, street, POIs, etc.). Hence, it supports *type-less* LMR, where the model detects toponyms at any granularity, and *type-based* LMR, where the model distinguishes LM types while detecting them (Suwaileh et al., 2023).
Although adapting Named Entity Recognition (NER) models and English datasets goes a long way towards tackling the LMR task, researchers have empirically shown that specialized LMR models yield better performance and are more effective for emergency management tasks (Suwaileh et al., 2022). For the Arabic language, only a few datasets exist, and they suffer from limited geographical, domain, and dialectal coverage. What exacerbates the low-resource issue for the Arabic language is the unavailability of LMR datasets, except for a few task-specific ones, including traffic surveillance and event detection (Al Emadi et al., 2017; Alkouz and Al Aghbari, 2018; Alkouz and Al Aghbari, 2020; Bahnasy et al., 2020). Therefore, in an effort to expedite the development of Arabic LMR models and shape future directions, we perform extensive experiments to empirically answer the following research questions:

RQ1: Are standard Arabic NER models sufficient for effective LMR over disaster tweets?

RQ2: Can LMR models trained on IDRISI-RA generalize reasonably to unseen disaster events?

RQ3: Can LMR models trained on IDRISI-RA generalize reasonably to unseen disaster events of the same or different disaster types (domain generalizability)?

RQ4: Can LMR models trained on IDRISI-RA generalize reasonably to unseen disaster events that happened in different countries (geographical generalizability)?

Our rigorous analyses and experiments confirm the need for specialized LMR models for the disaster domain. Additionally, the experiments demonstrate the promising domain and geographic generalizability of IDRISI-RA under *zero-shot learning*. The contributions of this paper are as follows:

- We present IDRISI-RA,3 the first public human-labeled Arabic LMR dataset (gold version) of about 4.6k tweets. The dataset covers diverse disaster types and countries.
- We release the largest automatically-labeled Arabic LMR dataset (silver version), constituting about 1.2M tweets.
- We annotate the location mentions with coarse- and fine-grained location types to enable hierarchical LM recognition, disambiguation, and evaluation.
- We benchmark IDRISI-RA using the standard Arabic NER models and our own simple yet competitive LMR models to establish a set of baselines for the community.
- We empirically demonstrate that IDRISI-RA is a reasonably generalizable dataset.

3https://github.com/rsuwaileh/IDRISI/

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 presents an overview of the LMR problem. Section 4 describes the design objectives, dataset creation, and annotation. Section 5 analyzes the reliability, coverage, and diversity of IDRISI-RA. Sections 6-7 present the benchmarks and the generalizability study on IDRISI-RA. Section 8 presents the silver version of IDRISI-RA. Sections 9-10 discuss a few use cases for utilizing IDRISI-RA and the ethical considerations. We finally conclude and list a few future directions in Section 11.

## 2 Related Work

In this section, we review the Arabic Twitter NER and LMR datasets. We present their characteristics and issues and discuss how the IDRISI-RA dataset overcomes these limitations. In Table 1, we summarize the existing NER and LMR datasets.

## 2.1 Twitter NER Datasets

An intuitive solution to tackle the LMR task is to use general-purpose NER models for LM detection or to train LMR models using Twitter NER datasets. For example, Chen et al.
(2019) built a multilingual LMR system that employs a translation module at its core to process Arabic documents. The study focuses on detecting the location mentions in newswire, broadcast news and conversations, and conversational telephone speeches. Although the Arabic NER datasets could be sufficient for training acceptable LMR models at the onset of disaster events, several challenges are associated with this line of research. Despite the fact that NER models trained on web data perform poorly on tweets (Darwish and Gao, 2014), the Arabic NER studies have a limited focus on Twitter. While this requires creating domain-specific datasets, existing public Arabic Twitter NER datasets suffer from limited size, domain, and geographic coverage (Darwish, 2013; Aguilar et al., 2018; Jarrar et al., 2022). Darwish and Gao (2014) introduced the first Arabic Twitter NER dataset that contains 5,069 tweets and 1,300 LMs, 299 of which are unique. Aguilar et al. (2018) created a dataset that constitutes 12,334 tweets and 5,306 LMs. However, only the training and development sets are public to the research community. As of Jan 2023, we managed to crawl 11,155 tweets containing 1,412 LMs, 440 of which are unique. Recently, Jarrar et al. (2022) created the first Arabic multi-domain nested NER dataset. It contains a subset of 5,653 tweets containing 435 entities of types LOC and GPE, 175 of which are unique.

A major challenge in processing Arabic documents is handling the dialectical (colloquial) text commonly used over Twitter. Although the MSAEGY (Aguilar et al., 2018) dataset is of reasonable size, it is limited to only MSA and Egyptian dialects, suffering from its limited geographical coverage. Other datasets do not report their dialectical distributions. Furthermore, the Arabic NER datasets are randomly filtered using the sampling Twitter API; they are not disaster-specific datasets. These datasets could serve at the onset of disaster events for deploying acceptable LMR models but should be augmented with disaster-specific Twitter data for developing robust LMR models (Suwaileh et al., 2022). IDRISI-RA addresses these limitations by being an event-centric dataset that geographically covers all Arab countries and reasonably represents their dialects.

| Dataset | Task | # Tweets | # LM (uniq) | Annotation | Public |
|---|---|---|---|---|---|
| Darwish (2013) | NER | 5,069 | 1,300 (299) | In-house | Yes |
| Aguilar et al. (2018) | NER | 11,155 | 1,412 (440) | In-house | Yes |
| Jarrar et al. (2022) | NER | 5,653 | 435 (175) | In-house | Yes |
| Al Emadi et al. (2017) | LMR | - | - | In-house | No |
| Alkouz and Al Aghbari (2018) | LMR | 100 | - | In-house | No |
| Alkouz and Al Aghbari (2020) | LMR | - | - | In-house | No |
| Bahnasy et al. (2020) ∗ | LMR | 297,150 | - | Automatic | No |
| IDRISI-RA_gold ∗ | LMR | 4,593 | 5,236 (918) | In-house | Yes |
| IDRISI-RA_silver ∗ | LMR | 1,205,373 | 884,217 (18,609) | Automatic | Yes |

## 2.2 Twitter LMR Datasets

Alkouz and Al Aghbari (2018) adopted an English LMR system (Malmasi and Dras, 2015) to extract LMs from English and Arabic traffic-related tweets (filtered using traffic keywords such as "traffic" and "jam"). The system queries the Google Places API with the n-grams extracted from the tweet text and assigns latitude and longitude coordinates to the n-grams that return results from the API. The resultant data, however, is geographically limited to the United Arab Emirates (UAE).
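As a rough illustration of the n-gram lookup strategy described above (our reconstruction, not the original system's code), the sketch below queries the Google Places Text Search endpoint with tweet n-grams and keeps only those that resolve to coordinates; the API key placeholder and the n-gram sizes are illustrative assumptions.

```python
# Rough sketch of an n-gram -> geocoding lookup (our reconstruction, not the original system).
# Assumes the Google Places "Text Search" endpoint; API key and n-gram size are placeholders.
import requests

PLACES_URL = "https://maps.googleapis.com/maps/api/place/textsearch/json"
API_KEY = "YOUR_API_KEY"  # placeholder, not a real key


def candidate_ngrams(tokens, max_n=3):
    """Yield longer n-grams first so multi-word place names are tried before single tokens."""
    for n in range(max_n, 0, -1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])


def geocode_ngrams(tweet_text):
    """Return {n-gram: (lat, lng)} for n-grams the API resolves to a place."""
    located = {}
    for ngram in candidate_ngrams(tweet_text.split()):
        resp = requests.get(PLACES_URL, params={"query": ngram, "key": API_KEY}, timeout=10)
        results = resp.json().get("results", [])
        if results:  # keep only n-grams that return at least one place
            loc = results[0]["geometry"]["location"]
            located[ngram] = (loc["lat"], loc["lng"])
    return located


# Example usage (requires a valid API key):
# print(geocode_ngrams("heavy traffic on Sheikh Zayed Road Dubai"))
```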
There are a couple of other cross-lingual traffic monitoring systems for English and Arabic languages (Al Emadi et al., 2017; Alkouz and Al Aghbari, 2020). However, these datasets are not public and are limited in size, as they contain around 500-600 tweets. Bahnasy et al. (2020) employed LM extraction to aid event detection over Arabic tweets. Although the dataset is large in size, its geographical coverage is limited to Egypt, its dialectical coverage is limited to the Egyptian dialect, and its disaster domain is limited to the fire, flood, and pandemic disaster types only. In contrast, the IDRISI-RA dataset contains other disaster events that represent the most common disaster types in Arabic-speaking countries. These events happened in 22 different countries. IDRISI-RA also captures good coverage of Arabic dialects.

## 3 Problem Overview

In this work, we focus on the *Location Mention Recognition* (LMR) task that aims to *automatically* recognize and extract toponyms (places or location names) from text. To distinguish LMR from other tasks, we emphasize that the LMR task aims at removing geo/non-geo ambiguity of tokens in text. It is also known as location extraction or *geoparsing* in the literature. In contrast, Location Mention Disambiguation (LMD), a subsequent task that consumes the output of LMR, aims at removing geo/geo ambiguity between candidate LMs extracted by LMR systems. The LMD task is also known as location resolution, *location linking* (looking up a geo-positioning database), or *geocoding* (assigning geo-coordinates to LMs) in the literature.

There are two types of LMR tasks. The first type recognizes toponyms (e.g., Paris, New York) without their types (e.g., city, state) and is denoted as "type-less LMR". The second type recognizes toponyms and also distinguishes between location types (e.g., country, city, and street) and is denoted as "type-based LMR" (Suwaileh et al., 2023). The latter better serves the development and evaluation of geolocation processing systems in light of the responders' needs. It enables a variety of downstream tasks (e.g., crisis maps) at different location granularity, in addition to being crucial for accurately disambiguating the toponyms.

The LMR problem is formally defined as follows: given a tweet $t$, the LMR system aims to identify all location mentions $L_t = \{l_i \mid i \in [1, n_t]\}$ in the tweet $t$, where $l_i$ is the $i$-th location mention and $n_t$ is the total number of location mentions in $t$, if any. Each location mention may span one or more *tokens*. In this work, we follow the *BILOU* annotation scheme (denoted <bilou_tag> below) with five classes: "B" denotes the beginning token of an LM, "I" denotes a token inside an LM, "L" denotes the last token in an LM, "O" denotes a token outside of an LM, and "U" indicates that the LM has only one token, such as "Doha". Therefore, we define LMR as a *multi-class classification task on the token level*. A *type-less* LMR model predicts whether a token is part of a location mention (<bilou_tag>-LOC) or not ("O"), while a *type-based* LMR model predicts whether a token is part of an LM of a specific type (<bilou_tag>-<location_type>) or not ("O"), where <location_type> is one of the nine types presented in Section 4.2.
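To make the token-level tagging scheme above concrete, here is a minimal sketch (ours, not part of the IDRISI-RA release) that converts annotated location-mention spans into BILOU tags; the example tokens and the "City" label string are illustrative assumptions.

```python
# Minimal illustrative sketch of BILOU tagging for LMR (not the official IDRISI-RA code).
# A "span" is (start_token, end_token_exclusive, location_type); types are illustrative.
from typing import List, Tuple


def bilou_tags(tokens: List[str],
               spans: List[Tuple[int, int, str]],
               type_based: bool = True) -> List[str]:
    tags = ["O"] * len(tokens)
    for start, end, loc_type in spans:
        label = loc_type if type_based else "LOC"  # type-less LMR collapses all types to LOC
        if end - start == 1:                       # single-token location mention
            tags[start] = f"U-{label}"
        else:
            tags[start] = f"B-{label}"
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{label}"
            tags[end - 1] = f"L-{label}"
    return tags


# Example (English tokens for readability; IDRISI-RA tweets are Arabic):
tokens = ["Heavy", "rain", "floods", "Hafr", "Albatten", "and", "Riyadh"]
spans = [(3, 5, "City"), (6, 7, "City")]
print(bilou_tags(tokens, spans))                    # type-based tags
print(bilou_tags(tokens, spans, type_based=False))  # type-less tags
```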
## 4 Dataset Construction

This section discusses the design objectives, data selection, and data annotation details.

## 4.1 Design Objectives and Data Selection

During the development of our Arabic LMR dataset, we established five key design objectives: (1) to expand geographical coverage by incorporating as many Arab countries as possible, (2) to incorporate a variety of recurrent disaster types specific to the Arab world, (3) to ensure a balance of location granularity (i.e., cities, districts), (4) to ensure broad temporal coverage, and (5) to increase relevance to disaster response and management tasks by discerning informative content, as social media event streams are often noisy.

We analyzed the existing disaster-related Twitter datasets in Arabic and selected Kawarith (Alharbi and Lee, 2021), as it contains tweets from 22 disaster events from the Arab world. We selected tweet IDs from seven events (listed in Table 2) labeled as relevant for humanitarian purposes. The selected tweet IDs (6,182) were used to download the full tweet content using the Twitter API, which resulted in 4,593 tweets.

## 4.2 Dataset Annotation

We perform two types of annotation on the selected data. The first involves human annotators identifying toponyms, such as geographical names of places, within the tweet text. In the second, the annotators assign location types to the identified toponyms. These location types include country, province, city/town, district, neighborhood, road/street, natural points of interest such as rivers and seas, and human-made points of interest such as schools and hospitals. Toponyms that do not belong to the defined location types (e.g., islands, villages, camps) are assigned the "other location" label.

Seven graduate-level students were trained4 to carry out the annotation task, voluntarily and without any monetary or course-credit benefits, using the WebAnno NLP annotation tool.5 We selected the WebAnno tool as it supports Unicode right-to-left languages (e.g., Arabic). To ensure the quality of the annotations, we required each annotator to be either a citizen of, or well familiar with, the country of the disaster event. All annotators had to pass a quiz of 20 tweets before being eligible to start the annotation task. Disagreements between annotators were examined by an additional meta-annotator and resolved.

4We share the annotation guidelines publicly in the GitHub repository: https://github.com/rsuwaileh/IDRISI/
5https://webanno.github.io/

In Table 2 (column "# LMs (unique)"), we show the total number of annotated LMs and, in parentheses, the number of unique LMs after de-duplicating them per event. The unique number of LMs varies according to the granularity of the affected area. On average, 26% of the LMs are unique. We further report the "Hapax" counts per data split.

## 5 Dataset Description and Quality

In this section, we evaluate the IDRISI-RA dataset for reliability, diversity, and coverage.

## 5.1 Reliability

To evaluate the quality of the dataset, we compute the Inter-annotator Agreement (IAA), which quantifies the reliability of the annotations. We compute Cohen's Kappa (Cohen, 1960) for both annotation tasks separately and jointly.
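The exact unit over which agreement is computed is not spelled out here, so the following is a hedged sketch that assumes token-level labels from two annotators and uses scikit-learn's `cohen_kappa_score` to mirror the LOC, TYPE, and joint LOC+TYPE measurements.

```python
# Hedged sketch of token-level inter-annotator agreement with Cohen's Kappa.
# Assumption (not stated in the paper): agreement is computed over per-token labels.
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-token annotations from two annotators over the same tokens.
# LOC task: is the token part of a location mention?
ann1_loc = ["O", "LOC", "LOC", "O", "LOC"]
ann2_loc = ["O", "LOC", "LOC", "O", "O"]
# TYPE task: which location type (if any) does the token belong to?
ann1_type = ["O", "City", "City", "O", "Country"]
ann2_type = ["O", "City", "City", "O", "O"]

print("LOC kappa:     ", cohen_kappa_score(ann1_loc, ann2_loc))
print("TYPE kappa:    ", cohen_kappa_score(ann1_type, ann2_type))

# Joint LOC+TYPE agreement: concatenate both decisions into a single label per token.
joint1 = [f"{a}|{b}" for a, b in zip(ann1_loc, ann1_type)]
joint2 = [f"{a}|{b}" for a, b in zip(ann2_loc, ann2_type)]
print("LOC+TYPE kappa:", cohen_kappa_score(joint1, joint2))
```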
Results in Figure 1 show that the average reliability achieved is 83% (almost perfect), 67% (substantial), and 70% (substantial) for LOC (i.e., the toponym identification task), TYPE (i.e., the location type assignment task), and LOC+TYPE, respectively. All events show high-quality annotations, except the "Hafr Floods 2019" event, with 12% agreement for the TYPE task (slight reliability) and 44% for LOC+TYPE (moderate reliability). Upon investigation, we found that "Hafr Albatten" is the most frequent LM in the dataset; one annotator assigns the "city" type for all occurrences, and the other assigns "province". While both annotators are correct (as in the Arab world, both types are used interchangeably), we anticipate the agreement level to increase when accepting both types. Furthermore, the COVID-19 event shows slight agreement for the TYPE task due to similar reasons across Arab countries.

![4_image_0.png](4_image_0.png)

| Event | # T | # T (\|LM\| = 0) | # LMs (unique) | Hapax_tr | Hapax_dv | Hapax_ts |
|---|---|---|---|---|---|---|
| Jordan FLD 2018 | 527 | 107 | 897 (108) | 2 | 5 | 0 |
| Kuwait FLD 2018 | 1,269 | 503 | 1,137 (140) | 25 | 1 | 11 |
| Cairo BMB 2019 | 268 | 1 | 623 (24) | 5 | 1 | 1 |
| Hafr FLD 2019 | 514 | 46 | 752 (112) | 27 | 6 | 5 |
| Dragon STR 2020 | 305 | 122 | 338 (160) | 45 | 4 | 7 |
| Beirut BMB 2020 | 349 | 63 | 550 (61) | 9 | 0 | 9 |
| COVID-19 | 1,361 | 777 | 939 (313) | 70 | 15 | 24 |
| Total | 4,593 | 1,619 | 5,236 (918) | 183 | 32 | 57 |

Table 2: Statistics of tweets (referred to as T) in the IDRISI-RA dataset. "Hapax" refers to the number of LMs that appear once in the training (tr), development (dv), and test (ts) sets.

## 5.2 Coverage and Diversity

In this section, we discuss how IDRISI-RA satisfies the design objectives presented in Section 4.1.

Geographical Coverage: IDRISI-RA covers 5 distinct Arab countries, namely Jordan, Kuwait, Egypt, Saudi Arabia, and Lebanon. Additionally, the whole Arab world is represented by the COVID-19 pandemic event. Figure 2a shows the distribution of distinct LMs per location type.

![4_image_1.png](4_image_1.png)

Domain Coverage: IDRISI-RA represents the most common disaster types in the Arab world that are discussed over Twitter (Alabbas et al., 2017; Alharbi and Lee, 2019; Ameen et al., 2020; Alharbi and Lee, 2021), including 3 floods, 2 explosions, 1 storm, and the global COVID-19 pandemic.

Location Types Coverage: Figure 2b shows the distribution of the location types. It is evident that the coarse-grained (e.g., Country, State, and City) LMs dominate the dataset due to the Kawarith collection strategy, which depends on tracking relevant keywords, mostly hashtags that name the coarse affected areas of the disaster event.

Temporal Coverage: IDRISI-RA covers recent disaster events that happened between 2018 and 2020. The time span of the events is approximately 8.8 days, on average (refer to Table 6 in Appendix A). In Figure 5 in Appendix A, we depict the number of tweets during two events, showing the coverage of important developments.

Dialectical Distribution: To analyze the distribution of dialects vs. MSA in IDRISI-RA, we employed the *ASAD* dialectical classifier (Hassan et al., 2021). We found that around 86.8% of the tweets in the dataset are MSA. Table 3 shows the dialectical distribution of the remaining (around 13%) tweets.
The largest portion goes to the Egyptian dialect, as Cairo BMB 2019 and Dragon STR 2020 happened in Egypt. The next is the Kuwaiti dialect, as the Kuwait FLD 2018 event contains the second largest number of tweets. The Qatari and Saudi dialects are very close to the Kuwaiti dialect, which explains their prevalence.

| Dialect | % | Dialect | % | Dialect | % |
|---------|-------|---------|------|---------|------|
| EG | 27.2% | PS | 3.6% | TN | 1.7% |
| KW | 26.6% | BH | 3.4% | DZ | 1.1% |
| QA | 8.7% | JO | 3.2% | OM | 1.1% |
| SA | 7.4% | LY | 2.3% | YE | 0.8% |
| MA | 4.0% | AE | 2.1% | IQ | 0.6% |
| LB | 3.8% | SD | 2.1% | SY | 0.6% |

Table 3: The dialect distribution in IDRISI-RA. The 18 countries are represented by their 2-letter ISO codes.6

6en.wikipedia.org/wiki/ISO_3166-1_alpha-2

## 6 Benchmarking Experiments

In this section, we discuss the IDRISI-RA benchmarking experiments that we conducted to provide baselines for the research community.

## 6.1 Experimental Setup

We benchmark the IDRISI-RA datasets under different task, data, and domain setups. The LMR task setups are *type-less* and *type-based*. The data setups are *(i) random*, where we ignore the tweets' posting timestamps, and *(ii) time-based*, where tweets are chronologically ordered. Tweets are split into 70% training, 10% development, and 20% test sets per event (randomly shuffled in the *random* setup and chronologically ordered in the *time-based* setup). We report the detailed statistics in Table 6 in Appendix A.

## 6.2 Learning Models

We employ one deep learning-based and three traditional machine learning models for benchmarking, as described below.

- *CAMeLBERT-Mix* (CML) (Inoue et al., 2021): An NER model trained on the ANERcorp dataset, whose underlying language model covers MSA, Dialectal Arabic (DA), and Classical Arabic (CA) data.
- *Farasa* (FRS) (Abdelali et al., 2016): The commonly used NER model of the Farasa Arabic NLP toolkit.
- *Conditional Random Fields* (CRF) (Lafferty et al., 2001): We employ CRF due to its competitive performance on the NER task. We use word-level syntactic features, including the suffix, the POS tag, and the context (adjacent words and their syntactic features).
- *BERT-based* (BRT) (Abdul-Mageed et al., 2020): We selected the MARBERT model for its superiority in Arabic Twitter NER when used for embeddings (Benali et al., 2022).

All these scientific artifacts are used according to their terms and conditions for research purposes.

## 6.3 Hyperparameter Tuning

During training, we tuned the hyperparameters of the BERT-based model, including the sequence length, the batch size, the number of training epochs, and the learning rate, in light of the values recommended by Devlin et al. (2019). For the CRF-based models, we experimented with five available training algorithms and their hyperparameters. We report the full details of hyperparameter tuning in Appendix B.

## 6.4 Evaluation Measures

To evaluate the LMR models, we compute the harmonic mean (F1 score) of Precision (P) and Recall (R). We extended the *seqeval* (v1.2.2) package7 to evaluate the models per tweet and report the average performance, and to reward the models when they correctly predict no LMs for a tweet.

7https://github.com/chakki-works/seqeval

## 6.5 Results

Type-less LMR: Table 4 presents the F1 results of all type-less models. The detailed results, including precision and recall, are in Appendix B. The MARBERT-based model (BRT) achieves the best performance for both the *random* and *time-based* scenarios. Next in order are the CRF, Farasa (FRS), and CAMeLBERT-Mix (CML) models. Although CAMeLBERT-Mix is a BERT-based model, it shows poor performance compared to the MARBERT-based model, as it was fine-tuned on newswire documents for the NER task (with LOC, ORG, PER, and MISC entities).
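As a rough illustration of the per-tweet evaluation protocol described in Section 6.4 (a simplified sketch, not the exact extension of *seqeval* used for the benchmark), the snippet below averages span-level F1 over tweets and credits a score of 1.0 only when both the gold and predicted label sequences contain no LMs.

```python
from seqeval.metrics import f1_score

def per_tweet_f1(gold_tweets, pred_tweets):
    """Average span-level F1 over tweets, rewarding correct 'no LM' predictions.

    gold_tweets / pred_tweets: lists of BIO label sequences, one per tweet.
    """
    scores = []
    for gold, pred in zip(gold_tweets, pred_tweets):
        gold_empty = all(tag == "O" for tag in gold)
        pred_empty = all(tag == "O" for tag in pred)
        if gold_empty:
            # Reward the model when it correctly predicts no LMs for this tweet.
            scores.append(1.0 if pred_empty else 0.0)
        else:
            # seqeval expects a list of sequences; wrap the single tweet.
            scores.append(f1_score([gold], [pred]))
    return sum(scores) / len(scores)

gold = [["O", "B-LOC", "I-LOC", "O"], ["O", "O", "O"]]
pred = [["O", "B-LOC", "I-LOC", "O"], ["O", "O", "O"]]
print(per_tweet_f1(gold, pred))  # 1.0
```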
Type-based LMR: The results in Table 4 show that the MARBERT-based LMR is evidently the best model for the *random* data setup. We anticipate that the reason behind the lower performance of the CRF model is the limited set of features used to train the Arabic version (refer to Section 6.2). The CRF model exhibits F1 scores comparable to the MARBERT-based model in the *time-based* data setup. To answer RQ1, we confirm the need for LMR-specific datasets and models that can perform effectively on disaster tweets.

## 7 Dataset Generalizability

In this section, we empirically study the generalizability of the IDRISI-RA dataset. For that, we employ the best LMR model, the MARBERT-based one (refer to Section 6); hereafter, we refer to it as "the model". We study three dimensions: *(i)* generalizability to unseen events regardless of their type and geolocation, *(ii)* generalizability to unseen events of the same or different disaster types (**domain generalizability**), and *(iii)* generalizability to unseen events that happened in the same or different countries (**geographical generalizability**).

## 7.1 Experimental Setups

We run our experiments under both the *type-less* and *type-based* task setups, for the *random* data setup only. We tune the hyperparameters of the model for every setup (refer to Section 6.3). We define the source dataset as the dataset (or combination of datasets) used to *train* the model, and the target dataset as the dataset used to *test* it.

Domain generalizability: We examine the model's performance under cross- and in-domain transfer setups (Suwaileh et al., 2020). The "domain" in our experiments refers to the type of disaster event. The IDRISI-RA dataset covers the four most frequently occurring disaster types in the Arab world: flood, bombing, storm, and pandemic. A transfer data setup is composed of a source-target pair, resulting in 16 runs.

Geographical generalizability: We examine the model's performance over events that took place in countries different from those of the source dataset. IDRISI-RA covers five countries (refer to Table 6 in Appendix A), besides the global COVID-19 event. A transfer data setup is composed of a source-target pair, resulting in 42 runs after excluding the *target* runs.

## 7.2 Results

Generalizability to unseen events: Table 5 shows the results of the model both with and without (i.e., zero-shot) the target event. The results for the *type-less* LMR demonstrate the potential of the IDRISI-RA dataset under the zero-shot setting; the difference against the target runs is mostly negligible. Due to the difficulty of the *type-based* LMR, the performance under zero-shot learning is significantly lower than the target runs. However, the zero-shot results are still within a reasonable range (i.e., an average F1 of 0.88), which demonstrates the effectiveness of models trained on IDRISI-RA. To answer RQ2, we confirm that training on IDRISI-RA generates generalizable Arabic LMR models that achieve, on average, around 0.75 and 0.88 F1 scores for the *type-less* and *type-based* LMR, respectively.

Domain generalizability: Figure 3 illustrates the F1 scores of the models over the target sets.

![6_image_0.png](6_image_0.png)

In-Domain: Ideally, the best results should lie on the diagonal, which depicts the in-domain setup. This assumption holds for all runs except the STR-to-STR runs in the *type-based* LMR (Figure 3b).
Training on "bombing" data in the BMB-to-STR setup achieves results comparable to training on "storm" data in the STR-to-STR setup, because both the source and target data share the same or nearby affected areas (Egypt and Lebanon), which could imply overlapping toponym occurrences and patterns. The "bombing" (BMB) data includes the *Cairo Bombing 2019* in Egypt and the *Beirut Explosion 2020* in Lebanon. The "storm" (STR) test data contains the Dragon Storms 2020 that affected Egypt and Jordan, together with a few LMs from Levantine Arabic. Moreover, the FLD-to-STR run achieves 6.3% better performance than the STR-to-STR run, as the FLD source data is approximately 7.5 times larger than the "storm" STR source. The effect of training dataset size on these results can be further confirmed by the relatively low F1 scores when the model is trained on the "storm" data, which has the smallest training set.

| Event | CML (TL-R) | FRS (TL-R) | CRF (TL-R) | BRT (TL-R) | CML (TL-T) | FRS (TL-T) | CRF (TL-T) | BRT (TL-T) | CRF (TB-R) | BRT (TB-R) | CRF (TB-T) | BRT (TB-T) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Jordan FLD 2018 | 0.517 | 0.650 | 0.843 | 0.953 | 0.491 | 0.641 | 0.776 | 0.903 | 0.837 | 0.908 | 0.775 | 0.862 |
| Kuwait FLD 2018 | 0.320 | 0.688 | 0.711 | 0.928 | 0.294 | 0.625 | 0.644 | 0.893 | 0.904 | 0.925 | 0.891 | 0.879 |
| Cairo BMB 2019 | 0.237 | 0.058 | 0.968 | 0.989 | 0.250 | 0.083 | 0.933 | 0.936 | 0.708 | 0.975 | 0.737 | 0.931 |
| Hafr FLD 2019 | 0.303 | 0.286 | 0.838 | 0.879 | 0.319 | 0.276 | 0.829 | 0.878 | 0.859 | 0.856 | 0.882 | 0.838 |
| Dragon STR 2020 | 0.579 | 0.737 | 0.698 | 0.870 | 0.615 | 0.702 | 0.611 | 0.869 | 0.872 | 0.787 | 0.880 | 0.714 |
| Beirut BMB 2020 | 0.539 | 0.493 | 0.873 | 0.855 | 0.520 | 0.710 | 0.772 | 0.582 | 0.701 | 0.813 | 0.621 | 0.596 |
| COVID-19 | 0.238 | 0.845 | 0.640 | 0.881 | 0.266 | 0.800 | 0.634 | 0.897 | 0.928 | 0.893 | 0.901 | 0.886 |
| Average | 0.390 | 0.537 | 0.796 | 0.908 | 0.394 | 0.548 | 0.743 | 0.851 | 0.830 | 0.880 | 0.812 | 0.815 |

Table 4: The F1 results for the LMR models on IDRISI-RA. TL/TB denote the type-less/type-based LMR setups, and R/T denote the random/time-based data setups.

| LMR setup | Event | Zero (Random) | Target (Random) | Zero (Time-based) | Target (Time-based) |
|---|---|---|---|---|---|
| Type-less | Jordan FLD 2018 | 0.768 | 0.765 | 0.759 | 0.751 |
| Type-less | Kuwait FLD 2018 | 0.853 | 0.848 | 0.830 | 0.829 |
| Type-less | Cairo BMB 2019 | 0.642 | 0.632 | 0.651 | 0.626 |
| Type-less | Hafr FLD 2019 | 0.761 | 0.762 | 0.754 | 0.747 |
| Type-less | Dragon STR 2020 | 0.809 | 0.814 | 0.829 | 0.825 |
| Type-less | Beirut BMB 2020 | 0.616 | 0.633 | 0.594 | 0.603 |
| Type-less | COVID-19 | 0.879 | 0.883 | 0.842 | 0.853 |
| Type-less | Average | 0.761 | 0.762 | 0.751 | 0.748 |
| Type-based | Jordan FLD 2018 | 0.900 | 0.967 | 0.907 | 0.957 |
| Type-based | Kuwait FLD 2018 | 0.956 | 0.982 | 0.955 | 0.972 |
| Type-based | Cairo BMB 2019 | 0.835 | 0.992 | 0.805 | 0.991 |
| Type-based | Hafr FLD 2019 | 0.786 | 0.971 | 0.763 | 0.965 |
| Type-based | Dragon STR 2020 | 0.941 | 0.946 | 0.947 | 0.950 |
| Type-based | Beirut BMB 2020 | 0.914 | 0.936 | 0.841 | 0.851 |
| Type-based | COVID-19 | 0.961 | 0.972 | 0.960 | 0.964 |
| Type-based | Average | 0.899 | 0.967 | 0.883 | 0.950 |

Table 5: The F1 results for the MARBERT-based LMR model under the *zero-* and *target* training setups.

Cross-Domain: Generally, the upper-right part above the diagonal shows better results than its counterpart, except for BMB-to-FLD, where the size of the training data influences the results.
We also note here that the model is tuned for every source-to-target transfer setup over the development splits; hence, the poor results on the test splits could indicate overfitting that prevents generalizability. This motivates the use of more advanced transfer learning techniques. To answer RQ3, we confirm that IDRISI-RA can produce acceptably domain-generalizable models for most disaster types. It also provides challenging examples for the LMR models.

Geographical generalizability: Figure 4 shows the F1 scores of the models over target countries that are the same as or different from the affected area of the source data. The model achieves approximately 0.61 and 0.84 F1 scores, on average, for the *type-less* and *type-based* LMR, respectively. The top performance is achieved over the "GL" target data, which refers to the COVID-19 event, due to its broad geographical coverage. To answer RQ4, we found that IDRISI-RA can produce reasonably geographically generalizable models.

## 8 Silver IDRISI-RA

Given the dearth of Arabic LMR datasets, we expand the size of the IDRISI-RA dataset by employing the best-performing LMR model to infer labels for 1.2 million Arabic tweets posted during 22 disaster events. We call the resulting data the *silver* version to reflect the reliability of its automatic annotations. The inference model is trained using the entire *type-based* and *type-less* training sets from all events, and all development partitions are used for tuning the hyperparameters of the MARBERT-based model. Applying the model to these tweets resulted in 884,217 LMs (18,609 distinct). We note that this large dataset can potentially be useful for a variety of LMR approaches, e.g., domain adaptation, transfer learning, and semi-supervised learning.

![8_image_0.png](8_image_0.png)

## 9 Research Use Cases

Releasing IDRISI-RA empowers research on different applications other than geolocation. In this section, we discuss a few use cases.

Event/incident detection: People tend to mention where events/incidents take place when they report them (Hu and Wang, 2020). Following Sankaranarayanan et al. (2009) and Watanabe et al. (2011), IDRISI-RA can be employed to detect events/incidents on Twitter by extracting LMs.

Relevance filtering: Prior studies show that the geographical references in social media messages could indicate their relevance and informativeness for a disaster event (Vieweg et al., 2010; De Albuquerque et al., 2015). Therefore, IDRISI-RA can be utilized to train relevance filtering models.

Displacement monitoring: Extracting LMs from tweets shared by displaced people and refugees can help predict their future paths. In fact, monitoring population movement during emergencies is very useful for responders. Thus, IDRISI-RA can be exploited in modeling population movement.

Geographical retrieval: IDRISI-RA could serve geographic information retrieval (GIR) techniques that rely on detecting locations and spatial references in queries and documents (García-Cumbreras et al., 2009).

## 10 Ethical Considerations

While Twitter allows users to self-manage their privacy by disabling the geo-tagging functionalities, "even well informed and rational individuals cannot appropriately self-manage their privacy" (Solove, 2012) due to a lack of awareness of how such data can be collected or commercially used.
However, geolocation extraction could be justified in the context of social good during *natural* disaster events, where protecting geographical privacy could be of lesser importance, as affected people need to save their lives or obtain basic necessities (Crawford and Finn, 2015). The privacy of affected people during *human-made* disaster events is more critical because revealing their locations during conflicts and wars could risk their lives. Therefore, we limit our dataset to *natural* disaster events and the COVID-19 pandemic and apply a couple of de-identification steps to protect users' privacy (refer to Appendix A). We further limit the dataset usage to research purposes only by releasing it under the Creative Commons Attribution 4.0 International License.8 Additionally, we emphasize that systems developed for LMR using the IDRISI-RA dataset should implement proper mechanisms for preserving user privacy.

## 11 Conclusion

We introduced IDRISI-RA, the first Arabic LMR Twitter dataset. It contains 22 disaster events of different types that happened in the Arab region. We manually (gold) and automatically (silver) annotated about 4.6K and 1.2M tweets, respectively. Both versions are annotated for location mentions and location types, which form the value and uniqueness of IDRISI-RA. Our analysis showed that IDRISI-RA is second to none in empowering research on Arabic LMR. Additionally, the extensive experiments emphasize the need for developing LMR-specific models for the disaster domain. The developed LMR baselines are simple yet competitive. The results also demonstrated the decent generalizability of IDRISI-RA. For future work, we plan to extend the annotations for the Location Mention Disambiguation (LMD) task. We further plan to explore different transfer learning, domain adaptation, and active learning techniques to tackle the LMR task.

## Limitations

There are a few shortcomings that we discuss below:

Underrepresented fine-grained LMs: Although we chose a careful sampling method focused on an event-centric, informative dataset, aiming to increase the likelihood of fine-grained LM occurrences (Kitamoto and Sagara, 2012), we consider the low frequency of fine-grained LMs in IDRISI-RA a major limitation, as the dataset contains only 25.5% fine-grained LMs.

Human errors: Some human errors were made during annotation due to the difficulty of the task.

- Annotators sometimes fail to distinguish between *Location* and *Organization* entities (e.g., "Red Cross").
- Different location types could be used interchangeably for the same locations, which creates difficulty for annotators (refer to Section 5.1).
- Annotators highlight locations that are mentioned as descriptions within the context of the tweet.

We plan to overcome these errors as part of the Location Mention Disambiguation (LMD) annotation, which aims to remove geo/geo ambiguities (as a sequel to the geo/non-geo LMR annotations).

Temporary locations: Temporary facilities (e.g., medical camps and shelters) are constructed during emergencies to provide resources and support for the affected people. However, these facilities could be disassembled (e.g., quarantine centers) once the emergency event is over. Additionally, the names of some locations could change during emergencies; for example, a specific school may be designated as a shelter and given a new expressive name (e.g., "main shelter"). Once the disaster event is over, the school returns to providing its original services.
The difficulty of these temporary locations lies in their need for context when resolved. Although they are important for the affected people and response authorities, not all of them are labeled in IDRISI-RA. Generalizability: Due to the absence of *public* LMR datasets, we could not compare the generalizability of IDRISI-RA against existing LMR datasets. Hence, we study the generalizability within IDRISI-RA for domain and geographical aspects. ## Acknowledgements This work was made possible by the Graduate Sponsorship Research Award (GSRA) \#GSRA51-0527-18082 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors. We would like to thank the in-house annotators for their valuable help including Noura Abdullah, Aisha Suwaileh, Rasha Hamdoon, Najlaa Alfuhaida, Hana Shamayleh, Lamiaa Basyoni, Sara Alrasbi, and Nada Abo Eita. ## References Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for arabic. In *NAACL*. Muhammad Abdul-Mageed, AbdelRahim Elmadany, and El Moatez Billah Nagoudi. 2020. Arbert & marbert: Deep bidirectional transformers for arabic. arXiv preprint arXiv:2101.01785. Gustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task. In *Proceedings of the Third Workshop on Computational* Approaches to Linguistic Code-Switching, pages 138– 147, Melbourne, Australia. Association for Computational Linguistics. Noora Al Emadi, Sofiane Abbar, Javier BorgeHolthoefer, Francisco Guzman, and Fabrizio Sebastiani. 2017. Qt2s: A system for monitoring road traffic via fine grounding of tweets. In *Proceedings* of the International AAAI Conference on Web and Social Media, volume 11. Waleed Alabbas, Haider M. al Khateeb, Ali Mansour, Gregory Epiphaniou, and Ingo Frommholz. 2017. Classification of colloquial arabic tweets in real-time to detect high-risk floods. In *2017 International Conference On Social Media, Wearable And Web Analytics (Social Media)*, pages 1–8. Alaa Alharbi and Mark Lee. 2019. Crisis detection from arabic tweets. In Proceedings of the 3rd workshop on arabic corpus linguistics, pages 72–79. Alaa Alharbi and Mark Lee. 2021. Kawarith: an Arabic Twitter corpus for crisis events. In *Proceedings of the* Sixth Arabic Natural Language Processing Workshop, pages 42–52, Kyiv, Ukraine (Virtual). Association for Computational Linguistics. Balsam Alkouz and Zaher Al Aghbari. 2018. Leveraging cross-lingual tweets in location recognition. In *2018 IEEE International Conference on Electro/Information Technology (EIT)*, pages 0084–0089. IEEE. Balsam Alkouz and Zaher Al Aghbari. 2020. SNSJam: Road traffic analysis and prediction by fusing data from multiple social networks. *Information Processing and Management*, 57(1):102139. Yasmeen Ali Ameen, Khaled Bahnasy, and Adel E. Elmahdy. 2020. Classification of arabic tweets for damage event detection. International Journal of Scientific & Engineering Research, 11. Khaled Bahnasy, Adel El-Mahdy, et al. 2020. Twitter analysis based on damage detection and geoparsing for event mapping management. Future Computing and Informatics Journal, 5(1):1. Brahim Ait Benali, Soukaina Mihi, Nabil Laachfoubi, and Addi Ait Mlouk. 2022. Arabic named entity recognition in arabic tweets using bert-based models. Procedia Computer Science, 203:733–738. 
Xu Chen, Judith Gelernter, Han Zhang, and Jin Liu. 2019. Multi-lingual geoparsing based on machine translation. *Future Generation Computer Systems*, 96:667–677. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. *Educational and psychological measurement*, 20(1):37–46. Kate Crawford and Megan Finn. 2015. The limits of crisis data: analytical and ethical challenges of using social and mobile data to understand disasters. GeoJournal, 80:491–502. Kareem Darwish. 2013. Named entity recognition using cross-lingual resources: Arabic as an example. In *Proceedings of the 51st Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 1558–1567. Kareem Darwish and Wei Gao. 2014. Simple effective microblog named entity recognition: Arabic as an example. In *Proceedings of the Ninth International* Conference on Language Resources and Evaluation (LREC'14), pages 2513–2517. Joao Porto De Albuquerque, Benjamin Herfort, Alexander Brenning, and Alexander Zipf. 2015. A geographic approach for combining social media and authoritative data towards identifying useful information for disaster management. *International journal* of geographical information science, 29(4):667–689. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Miguel Á García-Cumbreras, José M Perea-Ortega, Manuel García-Vega, and L Alfonso Ureña-López. 2009. Information retrieval with geographical references. relevant documents filtering vs. query expansion. *Information processing & management*, 45(5):605–614. Rob Grace, Jess Kropczynski, and Andrea Tapia. 2018. Community coordination: Aligning social media use in community emergency management. In *Proceedings of the 15th ISCRAM Conference*. Sabit Hassan, Hamdy Mubarak, Ahmed Abdelali, and Kareem Darwish. 2021. Asad: Arabic social media analytics and understanding. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 113–118. Yingjie Hu and Jimin Wang. 2020. How Do People Describe Locations during a Natural Disaster: An Analysis of Tweets from Hurricane Harvey. *Leibniz International Proceedings in Informatics, LIPIcs*, 177. Go Inoue, Bashar Alhafni, Nurpeiis Baimukan, Houda Bouamor, and Nizar Habash. 2021. The interplay of variant, size, and task type in Arabic pre-trained language models. In *Proceedings of the Sixth Arabic Natural Language Processing Workshop*, Kyiv, Ukraine (Online). Association for Computational Linguistics. Mustafa Jarrar, Mohammed Khalilia, and Sana Ghanem. 2022. Wojood: Nested arabic named entity corpus and recognition using bert. In *Proceedings of the International Conference on Language Resources and* Evaluation (LREC 2022), Marseille, France, June. Asanobu Kitamoto and Takeshi Sagara. 2012. Toponymbased geotagging for observing precipitation from social and scientific data streams. In Proceedings of the ACM Multimedia 2012 Workshop on Geotagging and Its Applications in Multimedia, GeoMM '12, page 23–26, New York, NY, USA. Association for Computing Machinery. Jessica Kropczynski, Rob Grace, Julien Coche, Shane Halse, Eric Obeysekare, Aurelie Montarnal, Frederick Benaben, and Andrea Tapia. 2018. 
Identifying Actionable Information on Social Media for Emergency Dispatch. In ISCRAM Asia Pacific 2018: Innovating for Resilience - 1st International Conference on Information Systems for Crisis Response and Management Asia Pacific., pages p.428–438, Wellington, New Zealand. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning 2001 (ICML 2001), pages 282– 289. Shervin Malmasi and Mark Dras. 2015. Location mention detection in tweets and microblogs. In Conference of the Pacific Association for Computational Linguistics, pages 123–134. Springer. Sabrina McCormick. 2016. New tools for emergency managers: an assessment of obstacles to use and implementation. *Disasters*, 40(2):207–225. Christian Reuter, Thomas Ludwig, Marc-André Kaufhold, and Thomas Spielhofer. 2016. Emergency services' attitudes towards social media: A quantitative and qualitative survey across europe. *International Journal of Human-Computer Studies*, 95:96– 111. Jagan Sankaranarayanan, Hanan Samet, Benjamin E Teitler, Michael D Lieberman, and Jon Sperling. 2009. Twitterstand: news in tweets. In *Proceedings of the 17th acm sigspatial international conference on advances in geographic information systems*, pages 42–51. Daniel J Solove. 2012. Introduction: Privacy selfmanagement and the consent dilemma. *Harv. L. Rev.*, 126:1880. Reem Suwaileh, Tamer Elsayed, and Muhammad Imran. 2023. IDRISI-RE: A generalizable dataset with benchmarks for location mention recognition on disaster tweets. *Information Processing & Management*, 60(3):103340. Reem Suwaileh, Tamer Elsayed, Muhammad Imran, and Hassan Sajjad. 2022. When a disaster happens, we are ready: Location mention recognition from crisis tweets. International Journal of Disaster Risk Reduction, page 103107. Reem Suwaileh, Muhammad Imran, Tamer Elsayed, and Hassan Sajjad. 2020. Are we ready for this disaster? towards location mention recognition from crisis tweets. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 6252– 6263. Sarah Vieweg, Amanda L Hughes, Kate Starbird, and Leysia Palen. 2010. Microblogging during two natural hazards events: what twitter may contribute to situational awareness. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 1079–1088. Kazufumi Watanabe, Masanao Ochi, Makoto Okabe, and Rikio Onai. 2011. Jasmine: a real-time localevent detection system based on geolocation information propagated to microblogs. In Proceedings of the 20th ACM international conference on Information and knowledge management, pages 2541–2544. ## A Data Release The IDRISI-RA dataset is released9 data setups that are *random* and *Time-based*. The location mention and location type annotations are made available for the community to enable development of *type-less* and *type-based* LMR models. The data is released in **JSONL** format where every lines 9This dataset is licensed under the Creative Commons Attribution 4.0 International License: https:// creativecommons.org/licenses/by/4.0/legalcode ![11_image_0.png](11_image_0.png) Figure 5: The temporal coverage of tweets in IDRISIRA. corresponds to one tweet with the following properties: "text", "created_at", "info_class" adopted from Kwaraith dataset, and "location_mentions". 
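As a minimal sketch of consuming this release format (the file name and the internal structure of each location mention entry are assumptions; only the four property names above come from the description), one could read the JSONL files as follows:

```python
import json

# Hypothetical path; the actual file names follow the released data layout.
path = "idrisi_ra_random_train.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        tweet = json.loads(line)
        text = tweet["text"]                  # de-identified tweet text
        created_at = tweet["created_at"]      # posting timestamp
        info_class = tweet["info_class"]      # humanitarian class from Kawarith
        for lm in tweet["location_mentions"]: # annotated LMs and their types
            # Each LM is assumed to carry its surface form, offsets, and
            # location type; the exact key names may differ in the release.
            print(lm)
```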
We processed the data to de-identify it as follows:

- We do not release the user identifiers, i.e., "user_id".
- We replace the user mentions (i.e., "@" handles) in the tweet text by "@0" strings of the same length as the mention. For example, if the mention is "@someuser", we replace it with "@00000000".
- We keep the tweet ids, i.e., "id", to allow re-crawling tweets for extracting more information, e.g., metadata, social network properties, etc. This allows developing LMR models that utilize different features beyond the textual content.

Table 6 shows the detailed statistics of the IDRISI-RA dataset for the random and time-based setups per event. Figure 6 presents example Arabic tweets translated into English and highlights different types of LMs. In Figure 5, we depict the temporal coverage of the COVID-19 and Kuwait FLD 2018 events.

[Figure 6: Example tweets from IDRISI-RA with English translations; highlighted text and subscripts refer to LMs and their location types, e.g., "Al-Karama Scout Camp" (other location), "Southern Shuna" (district), "Mangaf tunnel" (road/street), "Kuwait" (country), "National Cancer Institute" (human-made POI), "Al-Khalidiyah neighborhood" (neighborhood), and "Mashghara Mountain" (natural POI).]

## B Hyperparameter Tuning And Results

For the BERT-based models, we tuned the sequence length, the batch size, the number of training epochs, and the learning rate in light of the values recommended by Devlin et al. (2019), as follows: a batch size of 8, 16, or 32; a number of epochs of 2, 3, or 4; and a learning rate of 5E-5, 3E-5, or 2E-5. We also experimented with a sequence length of 128 or 256.

For the CRF-based models, we tuned five algorithms, namely Gradient Descent using the LBFGS method (LBFGS), Stochastic Gradient Descent with an L2 regularization term (L2SGD), Averaged Perceptron (AP), Passive Aggressive (PA), and Adaptive Regularization of Weight Vectors (AROW). For *LBFGS*, we tuned the coefficients of the L1 and L2 regularization terms. For *L2SGD*, we tuned the coefficient of the L2 regularization and the initial learning rate used for calibration. For *AP*, we tuned the epsilon parameter that determines the condition of convergence. For *PA*, we tuned the strategy for updating feature weights and the sensitivity parameter that determines whether errors are considered in the objective function. For *AROW*, we tuned the initial variance of every feature weight and the tradeoff between the loss function and changes of feature weights (gamma). The regularization parameters are tuned over values between 0.05 and 1 with a step of 0.05. The initial learning rate and epsilon are tuned using the values {1 × 10^-i | i ∈ [2, 6]}. The PA sensitivity parameter is boolean, and the updating strategy includes three types: without slack variables, type I, or type II. The variance and gamma parameters of the *AROW* algorithm are tuned over the values {2^-i | i ∈ [0, 3]}.

We ran our hyperparameter tuning experiments for about 3 days on a cluster of 46 GPUs of different NVIDIA models, including P100, V100, V100-NVLINK, and T4. Tables 7 and 8 show the best hyperparameters and detailed results for the CRF and BERT-based LMR models on IDRISI-RA, respectively. Table 9 shows the best hyperparameters of the models under the *disaster domain transfer* setting.

Table 6: Detailed information and statistics of the IDRISI-RA datasets, both random and time-based setups. "TRN", "DEV", and "TST" refer to the training, development, and test splits, respectively. LM0 refers to the number of tweets with no LMs. (a) While the Dragon storms affected several Arab countries and LMs are from different countries, the Kawarith creators had intentionally focused on the Egyptian tweets; we found LMs from Jordan as well. (b) The last tweet in the chronologically sorted tweets was published on 2020/06/15, but the pandemic was ongoing.
| |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Event Country Time Period TRN DEV TST All LM0 TRN DEV TST All Uniq Random | Jordan FLD 2018 JO 10/25 - 11/18 371 53 103 527 107 614 89 194 897 108 Kuwait FLD 2018 KW 11/04 - 11/18 889 127 253 1,269 503 820 93 224 1,137 140 Cairo BMB 2019 EG 08/04 - 08/04 189 27 52 268 1 417 66 140 623 24 Hafr FLD 2019 SA 10/25 - 10/29 364 52 98 514 46 520 83 149 752 112 Dragon STR 2020 EG & JO a 03/11 - 03/15 217 31 57 305 122 252 27 59 338 160 Beirut BMB 2020 LB 08/04 - 08/07 245 35 69 349 63 378 49 123 550 61 CoVID-19 Global 05/17 - –/– b 959 137 265 1,361 777 637 96 206 939 313 Time-based Jordan FLD 2018 JO 10/25 - 11/18 371 53 103 527 107 631 79 187 897 108 Kuwait FLD 2018 KW 04/11 - 04/18 889 127 253 1269 503 772 112 253 1,137 140 Cairo BMB 2019 EG 08/04 - 08/04 189 27 52 268 1 458 53 112 623 24 Hafr FLD 2019 SA 10/25 - 10/29 364 52 98 514 46 526 79 147 752 112 Dragon STR 2020 EG & JO a 03/11 - 03/15 217 31 57 305 122 248 39 51 338 160 Beirut BMB 2020 LB 08/04 - 08/07 245 35 69 349 63 400 55 95 550 61 CoVID-19 Global 05/17 - –/– b 959 137 265 1361 777 599 109 231 939 313 3,234 462 897 4,593 1,619 3,634 526 1,076 5,236 918 | | LMs Tweets | All - - | | Event | Algo. 
| HP1 | HP2 | P | R | F1 | |-------------------------------------------------------------------------------------|------------|--------------|----------------------|-------|-------|-------| | Random data setup Type-less LMR Jordan Floods arow variance=0.1 | gamma=0.16 | 0.841 | 0.845 | 0.843 | | | | Kuwait Floods | arow | variance=1 | gamma=0.125 | 0.865 | 0.603 | 0.711 | | Cairo Bombing | l2sgd | c2=0.2 | ce=1e-2 | 0.971 | 0.964 | 0.968 | | Hafr Floods | lbfgs | c1=0.05 | c2=0.05 | 0.881 | 0.799 | 0.838 | | Dragon Storms | pa | c=1 | error_sensitive=TRUE | 0.787 | 0.627 | 0.698 | | Beirut Explosion | arow | variance=0.1 | gamma=0.5 | 0.943 | 0.813 | 0.873 | | CoVID-19 | arow | variance=1 | gamma=1 | 0.787 | 0.539 | 0.640 | | Type-based LMR under random data setup Jordan Floods lbfgs c1=0.25 c2=0.25 | 0.766 | 0.786 | 0.776 | | | | | Kuwait Floods | arow | variance=0.5 | gamma=0.1 | 0.822 | 0.530 | 0.644 | | Cairo Bombing | l2sgd | c2=0.9 | ce=1e-4 | 0.929 | 0.938 | 0.933 | | Hafr Floods | l2sgd | c2=0.5 | ce=1e-4 | 0.891 | 0.776 | 0.829 | | Dragon Storms | ap | epsilon=1e-5 | - | 0.659 | 0.569 | 0.611 | | Beirut Explosion | l2sgd | c2=0.15 | ce=1e-2 | 0.692 | 0.874 | 0.772 | | CoVID-19 | arow | variance=0.1 | gamma=0.125 | 0.676 | 0.597 | 0.634 | | Type-less LMR under time-based data setup Jordan Floods pa c=0 error_sensitive=TRUE | 0.837 | 0.837 | 0.837 | | | | | Kuwait Floods | pa | c=0 | error_sensitive=TRUE | 0.904 | 0.904 | 0.904 | | Cairo Bombing | pa | c=2 | error_sensitive=TRUE | 0.714 | 0.708 | 0.708 | | Hafr Floods | l2sgd | c2=0.75 | ce=1e-6 | 0.861 | 0.861 | 0.859 | | Dragon Storms | pa | c=0 | error_sensitive=TRUE | 0.872 | 0.872 | 0.872 | | Beirut Explosion | l2sgd | c2=0.3 | ce=1e-3 | 0.701 | 0.703 | 0.701 | | CoVID-19 | pa | c=0 | error_sensitive=TRUE | 0.928 | 0.928 | 0.928 | | Time-based data setup Type-based LMR Jordan Floods arow variance=0.25 | gamma=0.1 | 0.776 | 0.778 | 0.775 | | | | Kuwait Floods | pa | c=0 | error_sensitive=TRUE | 0.891 | 0.891 | 0.891 | | Cairo Bombing | l2sgd | c2=0.05 | ce=1e-6 | 0.740 | 0.741 | 0.737 | | Hafr Floods | l2sgd | c2=0.05 | ce=1e-6 | 0.882 | 0.883 | 0.882 | | Dragon Storms | pa | c=0 | error_sensitive=TRUE | 0.880 | 0.880 | 0.880 | | Beirut Explosion | arow | variance=0.5 | gamma=0.16 | 0.617 | 0.643 | 0.621 | | CoVID-19 | pa | c=0 | error_sensitive=TRUE | 0.901 | 0.901 | 0.901 | Table 7: The best hyper-parameters and results for CRF model over IDRISI-RA. The column "Algo." refers to the training algorithm of CRF. The "HP1" and "HP2" refer to the tuned hyper-parameters with respect to the algorithm. 
![16_image_0.png](16_image_0.png) ![16_image_1.png](16_image_1.png) ![16_image_2.png](16_image_2.png) Type-lessJordan Floods 3 8 3e-5 256 0.954 0.957 0.953 3 8 3e-5 128 0.911 0.916 0.903Kuwait Floods 4 16 3e-5 256 0.935 0.925 0.928 3 16 3e-5 128 0.895 0.905 0.893Cairo Bombing 3 8 3e-5 256 0.995 0.986 0.989 2 8 3e-5 128 0.934 0.939 0.936Hafr Floods | 4 8 3e-5 128 0.883 0.883 0.879 4 8 3e-5 256 0.878 0.897 0.878 Dragon Storms 3 8 3e-5 128 0.878 0.873 0.870 4 8 4e-5 128 0.882 0.868 0.869 Beirut Explosion 4 8 3e-5 128 0.885 0.851 0.855 4 8 3e-5 256 0.601 0.611 0.582 CoVID-19 3 8 3e-5 128 0.889 0.884 0.881 3 8 3e-5 256 0.896 0.914 0.897 | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Type-based Jordan Floods 4 8 3e-5 128 0.916 0.907 0.908 4 8 3e-5 256 0.880 0.872 0.862 Kuwait Floods 4 8 3e-5 128 0.933 0.925 0.925 3 8 3e-5 256 0.874 0.892 0.879 Cairo Bombing 4 8 3e-5 128 0.984 0.970 0.975 4 8 3e-5 256 0.930 0.935 0.931 Hafr Floods 4 8 3e-5 256 0.870 0.857 0.856 3 8 3e-5 256 0.841 0.854 0.838 Dragon Storms 4 8 3e-5 256 0.798 0.789 0.787 4 8 3e-5 256 0.726 0.722 0.714 Beirut Explosion 4 8 4e-5 256 0.854 0.821 0.813 4 8 5e-5 128 0.616 0.635 0.596 CoVID-19 4 8 3e-5 256 0.895 0.898 0.893 4 8 3e-5 128 0.888 0.898 0.886 | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| Table 8: The best hyper-parameters and results for the BERT-based model over IDRISI-RA.e, bs, lr, and sl refer to the hyper-parameters, number of epochs, batch size, learning rate, and sequence length, respectively. 
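To relate the grids in Table 8 to a standard fine-tuning setup, here is a hedged sketch using the Hugging Face transformers API; the checkpoint name, label set, and chosen values are illustrative assumptions rather than a verified reproduction of the benchmark code.

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments)

# Assumed MARBERT checkpoint on the Hugging Face hub.
model_name = "UBC-NLP/MARBERT"
labels = ["O", "B-LOC", "I-LOC"]  # type-less LMR; type-based adds per-type tags

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=len(labels)
)

# One point of the grid reported in Appendix B / Table 8:
# epochs in {2, 3, 4}, batch size in {8, 16, 32},
# learning rate in {2e-5, 3e-5, 5e-5}, sequence length in {128, 256}.
args = TrainingArguments(
    output_dir="marbert-lmr",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=3e-5,
)
max_seq_length = 256  # passed to the tokenizer when encoding tweets
```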
![16_image_3.png](16_image_3.png) | Source-Target | e | bs | lr | sl | P | R | F1 | |--------------------|-----|------|------|------|-------|-------|-------| | Type-less BMB-BMB | 4 | 8 | 3e-5 | 128 | 0.935 | 0.917 | 0.918 | | BMB-FLD | 4 | 8 | 3e-5 | 128 | 0.584 | 0.664 | 0.596 | | BMB-PND | 4 | 8 | 3e-5 | 128 | 0.854 | 0.840 | 0.839 | | BMB-STR | 4 | 8 | 3e-5 | 128 | 0.839 | 0.833 | 0.831 | | FLD-BMB | 3 | 8 | 3e-5 | 256 | 0.843 | 0.762 | 0.779 | | FLD-FLD | 3 | 8 | 3e-5 | 256 | 0.940 | 0.928 | 0.930 | | FLD-PND | 3 | 8 | 3e-5 | 256 | 0.898 | 0.887 | 0.887 | | FLD-STR | 3 | 8 | 3e-5 | 256 | 0.839 | 0.819 | 0.826 | | PND-BMB | 3 | 8 | 3e-5 | 128 | 0.526 | 0.524 | 0.488 | | PND-FLD | 3 | 8 | 3e-5 | 128 | 0.686 | 0.744 | 0.687 | | PND-PND | 3 | 8 | 3e-5 | 128 | 0.889 | 0.884 | 0.881 | | PND-STR | 3 | 8 | 3e-5 | 128 | 0.749 | 0.716 | 0.728 | | STR-BMB | 3 | 8 | 3e-5 | 128 | 0.574 | 0.535 | 0.518 | | STR-FLD | 3 | 8 | 3e-5 | 128 | 0.491 | 0.544 | 0.501 | | STR-PND | 3 | 8 | 3e-5 | 128 | 0.792 | 0.768 | 0.773 | | STR-STR | 3 | 8 | 3e-5 | 128 | 0.878 | 0.873 | 0.870 | | Type-based BMB-BMB | 4 | 8 | 3e-5 | 256 | 0.972 | 0.934 | 0.945 | | BMB-FLD | 4 | 8 | 3e-5 | 256 | 0.396 | 0.505 | 0.422 | | BMB-PND | 4 | 8 | 3e-5 | 256 | 0.876 | 0.859 | 0.858 | | BMB-STR | 4 | 8 | 3e-5 | 256 | 0.798 | 0.785 | 0.786 | | FLD-BMB | 3 | 8 | 3e-5 | 256 | 0.850 | 0.763 | 0.786 | | FLD-FLD | 3 | 8 | 3e-5 | 256 | 0.937 | 0.935 | 0.933 | | FLD-PND | 3 | 8 | 3e-5 | 256 | 0.854 | 0.846 | 0.842 | | FLD-STR | 3 | 8 | 3e-5 | 256 | 0.855 | 0.838 | 0.842 | | PND-BMB | 4 | 8 | 3e-5 | 256 | 0.513 | 0.507 | 0.481 | | PND-FLD | 4 | 8 | 3e-5 | 256 | 0.603 | 0.703 | 0.622 | | PND-PND | 4 | 8 | 3e-5 | 256 | 0.895 | 0.898 | 0.893 | | PND-STR | 4 | 8 | 3e-5 | 256 | 0.781 | 0.743 | 0.752 | | STR-BMB | 4 | 8 | 3e-5 | 256 | 0.469 | 0.428 | 0.418 | | STR-FLD | 4 | 8 | 3e-5 | 256 | 0.406 | 0.466 | 0.421 | | STR-PND | 4 | 8 | 3e-5 | 256 | 0.743 | 0.715 | 0.72 | | STR-STR | 4 | 8 | 3e-5 | 256 | 0.798 | 0.789 | 0.787 | ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations ✓ A2. Did you discuss any potential risks of your work? Discussed two main risks: - Noisiness of SM data in Section 3.1 Design Objectives and Data Selection - Revealing individuals' (potentially vulnerable ones) information in Appendix A Data Release ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract 1 Introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 3 Dataset Construction 5.2 Learning Models ✓ B1. Did you cite the creators of artifacts you used? 5.2 Learning Models ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? 5.2 Learning Models Appendix A Data Release ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 5.2 Learning Models ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Appendix A Data Release ✓ B5. 
Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 4 Dataset Description and Quality ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3 Dataset Construction Appendix A Data Release C ✓ **Did you run computational experiments?** 5 Benchmarking Experiments 6 Dataset Generalizability ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5.3 Hyperparameter Tuning Appendix B Hyperparameters Tuning and Results The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5.1 Experimental Setup 5.3 Hyperparameter Tuning 6.1 Experimental Setups Appendix B Hyperparameters Tuning and Results ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5.5 Results 6.2 Results Appendix B Hyperparameters Tuning and Results We note that, as our aim in this paper is to provide baselines for the community and not to develop models that advance the SOTA, we extensively tune hyperparameters without repeating experiments with different seeds to report an anggregated (e.g., avg) results. ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5.2 Learning Models 5.4 Evaluation Measures We use default models and parameters ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 3.2 Dataset Annotation ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? 3.2 Dataset Annotation ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 3.2 Dataset Annotation ✗ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? The annotators are volunteering students and were informed that their annotations will be used for research purposes in the humanitarian domain. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. We collect, annotate, and share the data under Developer Agreement and Policy. No approval from other ethics review board. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 
3.2 Dataset Annotation
peng-etal-2023-fsuie
{FSUIE}: A Novel Fuzzy Span Mechanism for Universal Information Extraction
https://aclanthology.org/2023.acl-long.902
Universal Information Extraction (UIE) has been introduced as a unified framework for various Information Extraction (IE) tasks and has achieved widespread success. Despite this, UIE models have limitations. For example, they rely heavily on span boundaries in the data during training, which does not reflect the reality of span annotation challenges. Slight adjustments to positions can also meet requirements. Additionally, UIE models lack attention to the limited span length feature in IE. To address these deficiencies, we propose the Fuzzy Span Universal Information Extraction (FSUIE) framework. Specifically, our contribution consists of two concepts: \textit{fuzzy span loss} and \textit{fuzzy span attention}. Our experimental results on a series of main IE tasks show significant improvement compared to the baseline, especially in terms of fast convergence and strong performance with small amounts of data and training epochs. These results demonstrate the effectiveness and generalization of FSUIE in different tasks, settings, and scenarios.
## Fsuie: A Novel Fuzzy Span Mechanism For Universal Information Extraction Tianshuo Peng1,†**, Zuchao Li**1,2,†,∗ , Lefei Zhang1,2, Bo Du1,2 **and Hai Zhao**3 1National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, P. R. China 2Hubei Luojia Laboratory, Wuhan 430072, P. R. China 3Department of Computer Science and Engineering, Shanghai Jiao Tong University {pengts,zcli-charlie,zhanglefei,dubo}@whu.edu.cn, zhaohai@cs.sjtu.edu.cn ## Abstract Universal Information Extraction (UIE) has been introduced as a unified framework for various Information Extraction (IE) tasks and has achieved widespread success. Despite this, UIE models have limitations. For example, they rely heavily on span boundaries in the data during training, which does not reflect the reality of span annotation challenges. Slight adjustments to positions can also meet requirements. Additionally, UIE models lack attention to the limited span length feature in IE. To address these deficiencies, we propose the Fuzzy Span Universal Information Extraction (FSUIE) framework. Specifically, our contribution consists of two concepts: *fuzzy span loss* and fuzzy span attention. Our experimental results on a series of main IE tasks show significant improvement compared to the baseline, especially in terms of fast convergence and strong performance with small amounts of data and training epochs. These results demonstrate the effectiveness and generalization of FSUIE in different tasks, settings, and scenarios. ## 1 Introduction Information Extraction (IE) is focused on extracting predefined types of information from unstructured text sources, such as Named Entity Recognition (NER), Relationship Extraction (RE), and Sentiment Extraction (SE). To uniformly model the various IE tasks under a unified framework, a generative Universal Information Extraction (UIE) was proposed in (Lu et al., 2022) and has achieved widespread success on various IE datasets and benchmarks. Due to the necessity of a powerful ∗Corresponding author. † Equal contribution. This work was supported by the Fundamental Research Funds for the Central Universities (No. 2042023kf0133), the Special Fund of Hubei Luojia Laboratory under Grant 220100014 and the National Science Fund for Distinguished Young Scholars under Grant 62225113. Hai Zhao was funded by the Key Projects of National Natural Science Foundation of China (U1836222 and 61733011). Annotator #1: ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) On the 3rd September evening, I saw a yellow sports car VEHICLE ... Annotator #3: ![0_image_2.png](0_image_2.png) On the 3rd September evening, I saw a yellow sports car VEHICLE ... Annotator #2: On the 3rd September evening, I saw a yellow sports car VEHICLE ... Figure 1: An example of annotation ambiguity for "vehicle" entity in sentence "*On the 3rd September evening,* I saw a yellow sports car drive past my house". generative pre-training model for the generative UIE, the time overhead is extensive and the efficiency is not satisfactory. For this reason, this paper examines span-based UIE to unify various IE tasks, conceptualizing IE tasks as predictions of spans. However, UIE models still have some limitations. 
First, since IE requires training machine learning models to extract specific information from unstructured text sources, it relies heavily on human annotation, which involves identifying the specific information to be extracted and manually marking the corresponding span boundaries in the text. However, due to the complexity of natural language, determining the correct span boundaries can be challenging, leading to the phenomenon of annotation ambiguity. As shown in Figure 1, different annotated spans can be considered reasonable. In the span learning of UIE models, teacher forcing is commonly used for loss calculation, making the model dependent on the precise span boundaries given in the training data. This can cause performance bottlenecks due to annotation ambiguity. When the model structure in UIE places too much emphasis on the exact boundaries of IE tasks, the supervision information is insufficiently utilized. When predicting span boundaries, positions closer to the ground truth should be regarded as more accurate than those farther away, as shown in Figure 1. For example, words close to the target "car" are more likely to be correct than the word "evening", which is farther away from the target. Once the span containing "car" is located, both "yellow car" and "yellow sports car" can be regarded as vehicle entities. This means that the span model learned should be fuzzy rather than precise.

In addition, the use of a pre-trained Transformer (Vaswani et al., 2017) in UIE to extract the start and end position representations also poses a problem. The Transformer model is designed to focus on the global representation of the input text, while UIE requires focusing on specific parts of the text to determine the span boundaries. This mismatch between the Transformer's focus on global representation and UIE's focus on specific parts of the text can negatively impact the performance of the model. When there is a mismatch between the Transformer architecture and the span representation learning, the model may not make good use of prior knowledge in IE. Specifically, given the start boundary (end boundary) of the label span, the corresponding end boundary (start boundary) is more likely to be found within a certain range before and after, rather than anywhere in the entire sequence. This reflects the prior hypothesis that spans have limited length, which is ignored in the vanilla UIE model. To address this, a fuzzy span attention mechanism, rather than fixed attention, should be applied.

In this paper, we propose the Fuzzy Span Universal Information Extraction (FSUIE) framework, which addresses the limitations of UIE models by applying the fuzzy span feature, reducing over-reliance on label span boundaries and adaptively adjusting the attention span length. Specifically, to address fuzzy boundaries, we design the *fuzzy span loss*, which quantitatively represents the correctness information distributed over the fuzzy span. At the same time, we introduce *fuzzy span attention*, which restricts the scope of attention to a fuzzy range and adaptively adjusts the span length according to the encoding. We conduct experiments on various main IE tasks (NER, RE, and ASTE). The results show that FSUIE achieves significant improvements over the strong UIE baseline in different settings.
Additionally, it achieves new state-of-the-art performance on some NER, RE, and ASTE benchmarks with only a bert-base architecture, outperforming models with stronger pre-trained language models and complex neural designs. Furthermore, our model shows extremely fast convergence and good generalization in low-resource settings. These experiments demonstrate the effectiveness and generalization of FSUIE in different tasks, settings, and scenarios.

## 2 FSUIE

In FSUIE, incorporating the fuzzy span mechanism into the base UIE model involves two aspects. Firstly, for the spans carrying specific semantic types in the training data, the boundary targets should be learned as fuzzy boundaries to reduce over-reliance on span boundaries. To achieve this, we propose a novel *fuzzy span loss*. Secondly, during span representation learning, the attention applied to a span should be dynamic and of limited length, rather than covering the entire sequence. To achieve this, we propose a novel *fuzzy span attention*.

## 2.1 Fuzzy Span Loss (FSL)

FSL is introduced as a supplement to the traditional teacher-forcing loss (usually implemented as cross entropy), to guide the model in learning fuzzy boundaries. The challenge for FSL is how to quantify the distribution of correctness information within the fuzzy boundary. Specifically, for a given label span S, conventional target distributions (one-hot) indicate the correct starting and ending boundaries. This form actually follows the Dirac delta distribution, which only focuses on the ground-truth positions but cannot model the ambiguity phenomenon in boundaries.

To address this challenge, we propose a fuzzy span distribution generator (FSDG). In our method, we use a probability distribution over span boundaries to represent the ground truth, which describes the uncertainty of boundary localization more comprehensively. It consists of two main steps: 1) determining the probability density function f; 2) mapping from the continuous distribution to a discrete probability distribution based on f. Specifically, let q ∈ S be a boundary of the label span; then the total probability value of its corresponding fuzzy boundary $\hat{q}$ can be represented as follows:

$$\hat{q}=\int_{R_{min}}^{R_{max}}xQ(x)dx,\quad q\in S\tag{1}$$

where $x$ represents the coordinate of boundaries within the fuzzy range $[R_{min}, R_{max}]$, $R_{min}$ and $R_{max}$ are the start and end positions of the fuzzy range, $q^{gt}$ represents the ground-truth position for boundary q, and $Q(x)$ represents the corresponding coordinate probability. The traditional Dirac delta distribution can be viewed as a special case of Eq. (1), where Q(x) = 1 when x = q^gt, and Q(x) = 0 otherwise.

![2_image_0.png](2_image_0.png)

[Figure 2: The exact (one-hot) boundary distribution, trained with BCE loss, versus the fuzzy boundary distribution, trained with KL loss, for the example "... evening, I saw a yellow sports car drive past my house ...".]

Through a mapping function F, we quantize the continuous fuzzy boundary into a unified discrete variable $\hat{\mathbf{q}} = [F(q_1), F(q_2), \cdots, F(q_n)]$ with n subintervals, where $[q_1, q_2, \cdots, q_n]$ are continuous coordinates in the fuzzy range with $q_1 = R_{min}$ and $q_n = R_{max}$; the probability distribution of each given boundary of the label span can then be represented within the range via the softmax function. Since the Dirac delta distribution only assigns non-zero probability to a single point, it is not suitable for modeling uncertainty or ambiguity in real-world data. Thus, in FSUIE, we choose the Gaussian distribution $N(\mu, \sigma^2)$ as the probability density function f.
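As a minimal sketch of this discretization step (a plain NumPy illustration of the idea, not the authors' implementation; the function name, the fuzzy-range radius, and the default parameter values are assumptions), a gold boundary position can be turned into a soft target distribution as follows, with the roles of the parameters discussed in the paragraphs below:

```python
import numpy as np

def fuzzy_boundary_target(gold_pos, seq_len, radius=3, sigma=1.0, step=1.0, theta=0.05):
    """Discretized Gaussian target centred on the gold boundary position.

    radius defines the fuzzy range [gold_pos - radius, gold_pos + radius];
    sigma, step, and theta play the roles of the variance, the sampling step s,
    and the sampling threshold described in the text (values here are arbitrary).
    """
    density = np.zeros(seq_len)
    for offset in range(-radius, radius + 1):
        pos = gold_pos + offset
        if 0 <= pos < seq_len:
            eps = np.exp(-0.5 * ((offset * step) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
            density[pos] = eps if eps >= theta else 0.0
    # Softmax restricted to the positions that survived the threshold.
    mask = density > 0
    logits = np.where(mask, density, -np.inf)
    exp = np.exp(logits - logits[mask].max())
    return exp / exp.sum()

target = fuzzy_boundary_target(gold_pos=7, seq_len=16)
print(target.round(3))  # peaks at position 7 and spreads over the fuzzy range
```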
Compared with other probability distributions, the Gaussian distribution assigns non-zero probability to an entire range of values and has the following advantages: (1) it is continuous and symmetrical, and can well represent the distribution of correctness information within the fuzzy boundary, including the gold position; (2) it is a stable distribution with fewer peaks and offsets, and can ensure that the correctness information is concentrated on the gold position while still being distributed over the fuzzy boundary; (3) the integral of the Gaussian distribution is 1, which ensures that the correctness distribution after softmax is smoother.

To obtain the discrete variable $\hat{\mathbf{q}}$, four parameters are involved: the variance σ, the mean µ, the sampling step s, and the sampling threshold θ, which together control the range, peak position, and density of the fuzzy boundary. Specifically, the parameter µ is set to $q^{gt}$ and the Gaussian distribution is determined using a pre-determined σ. Assuming $q_g \in [q_1, q_2, \cdots, q_n]$ is the coordinate with $q_g = q^{gt}$, F can be represented as:

$$F(q_{i})=\begin{cases}\varepsilon,&\varepsilon\geq\theta\\0,&\varepsilon<\theta\end{cases},\qquad\varepsilon=f(\mu+(i-g)s).$$

Given that values in the marginal regions of the Gaussian distribution are quite small, the sampling threshold θ acts as a filter that eliminates information from unimportant locations. The specific choice of parameters is discussed in the experimental section. We use $\hat{\mathbf{q}}$ as the distribution of correctness information on the fuzzy boundaries. The beginning and end fuzzy boundaries together make up the fuzzy span. Then, we calculate the KL divergence between the model's predicted distribution and the gold fuzzy span distribution as the *fuzzy span loss*. The exact boundary and the fuzzy boundary distribution are shown in Figure 2.

[Figure 2: Exact (one-hot) boundary targets trained with a BCE loss versus fuzzy boundary distributions trained with a KL loss.]

This fuzzy span loss is then incorporated into the original teacher-forcing loss function with a coefficient:

$$\mathcal{L}_{FS}=D_{KL}(\hat{\mathbf{q}}\|p)=\sum_{i=1}^{N}\hat{\mathbf{q}}\left(x_{i}\right)\log\frac{\hat{\mathbf{q}}\left(x_{i}\right)}{p\left(x_{i}\right)},$$
$$\mathcal{L}=\mathcal{L}_{ori}+\lambda\mathcal{L}_{FS}$$

where p represents the predicted distribution of the model and $\hat{\mathbf{q}}$ represents the fuzzy span distribution generated by FSDG according to the annotations in the training data. $\mathcal{L}_{ori}$ is the original Binary Cross Entropy (BCE) loss of the model in UIE, and λ is the coefficient of the *fuzzy span loss*.
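To make the FSL construction concrete, here is a minimal PyTorch sketch of the fuzzy span distribution generator and the combined loss. This is our own illustration rather than the authors' released implementation: the function names, the single vector of boundary logits, and the omission of batching and span types are simplifying assumptions, and in FSUIE the same computation would be applied to both the start and end boundaries of each labeled span. The default hyper-parameter values follow those reported later in the training details.

```python
import math
import torch
import torch.nn.functional as F

def fuzzy_boundary_target(gold_idx, seq_len, sigma=0.5, step=0.3, theta=0.3):
    # Sample a Gaussian centered on the gold boundary at intervals of `step`,
    # zero out values below the threshold theta, and renormalize the survivors
    # with a softmax to obtain the fuzzy target distribution q_hat.
    offsets = (torch.arange(seq_len) - gold_idx).float() * step
    density = torch.exp(-offsets ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    density = torch.where(density >= theta, density, torch.zeros_like(density))
    target = torch.zeros(seq_len)
    keep = density > 0
    target[keep] = F.softmax(density[keep], dim=-1)
    return target

def fuzzy_span_loss(boundary_logits, gold_idx, lam=0.01, **gaussian_kwargs):
    # L = L_ori (BCE against the one-hot boundary) + lam * KL(q_hat || p).
    seq_len = boundary_logits.size(-1)
    one_hot = F.one_hot(torch.tensor(gold_idx), seq_len).float()
    l_ori = F.binary_cross_entropy_with_logits(boundary_logits, one_hot)
    q_hat = fuzzy_boundary_target(gold_idx, seq_len, **gaussian_kwargs)
    p = F.softmax(boundary_logits, dim=-1)
    l_fs = torch.sum(q_hat * (torch.log(q_hat + 1e-12) - torch.log(p + 1e-12)))
    return l_ori + lam * l_fs
```

With the defaults above (σ = 0.5, s = 0.3, θ = 0.3), only positions within roughly two tokens of the gold boundary on either side receive non-zero target probability, matching the intuition that correctness information should stay concentrated around the annotated position.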
## 2.2 Fuzzy Span Attention (FSA)

We construct FSA based on a multi-head self-attention mechanism with relative positional encoding (RPE), since RPE is more suitable for span representation learning with fuzzy bounds. In conventional multi-head attention with RPE, for a token at position t in the sequence, each head computes the similarity between this token and the tokens in the sequence. The similarity between token t and token r can be represented as:

$$s_{tr}=y_{t}^{\top}W_{q}^{\top}\left(W_{k}y_{r}+p_{t-r}\right)\tag{3}$$

where $W_k$ and $W_q$ are the weight matrices for the "key" and "query" representations, $y_t$ and $y_r$ are the representations of tokens t and r, and $p_{t-r}$ is the relative position embedding. The corresponding attention weight can be obtained through a softmax function:

$$a_{tr}=\frac{\exp\left(s_{tr}\right)}{\sum_{q=0}^{t-1}\exp\left(s_{tq}\right)}.\tag{4}$$

Conventional self-attention focuses on global representations, which mismatches the requirement of fuzzy spans. To address this issue, we present a novel attention mechanism, called *Fuzzy Span Attention* (FSA), to control the attention scores of each token, aiming to learn a span-aware representation. The fuzzy span mechanism of FSA consists of two aspects: (1) the length of the range to which full attention is applied is dynamically adjusted; and (2) the attention weights at the boundary of the full attention span are attenuated rather than truncated. Specifically, inspired by Sukhbaatar et al. (2019), we design a mask function $g_m$ to control the attention score calculation. Assuming the maximum length of the possible attention span is $L_{span}$, the new attention scores can be represented as:

$$a_{tr}=\frac{g_{m}(t-r)\exp\left(s_{tr}\right)}{\sum_{q=t-L_{span}}^{t-1}g_{m}(t-q)\exp\left(s_{tq}\right)}.\tag{5}$$

The following process is divided into two stages: (1) determining the attention changing function $g_a$ on the fuzzy span, and (2) constructing the mask function $g_m$ based on $g_a$ for span-aware representation learning. According to the characteristics of the fuzzy span, we set $g_a$ as a monotonically decreasing linear function. To adjust the attention span length, we define a learnable parameter δ ∈ [0, 1]. The function $g_a(z)$ and the corresponding $g_m(z)$ can be represented as follows:

$$g_{a}(z)=\frac{-z+l+d}{d},\qquad l=\delta L_{span},\tag{6}$$

$$g_{m}(z)=\begin{cases}1,&g_{a}(z)>1\\0,&g_{a}(z)<0\\g_{a}(z),&\text{otherwise}\end{cases}\tag{7}$$

where l controls the length of the full attention range and d is a hyper-parameter that governs the length of the attenuated attention range. An illustration of the $g_m$ function is depicted in Figure 3.

[Figure 3: Illustration of the $g_m$ function, showing the full attention range, the attenuated attention range, and the non-attention range.]

The dashed lines represent alternative choices of $g_a$ functions, such as

$$g_{a}^{\prime}(z)=\begin{cases}1,&z\leq l\\0,&z>l\end{cases},\qquad g_{a}^{\prime\prime}(z)=\begin{cases}1,&z\leq l\\\frac{1}{\sqrt{2\pi}\cdot\frac{d}{3}}\exp\left(-\frac{(z-l)^{2}}{2(\frac{d}{3})^{2}}\right),&z>l\end{cases}.\tag{8}$$

Through experimentation, we found that the linear attenuated function performs best (refer to the comparison in Appendix A). Iterative optimization of δ allows the model to learn the optimal attention span length for a specific task. It is important to note that different heads learn the attention span length independently and thus obtain different optimal fuzzy spans. In our implementation, instead of stacking multiple fuzzy span attention layers, we construct the span-aware representation with a single fuzzy span attention layer on top of the Transformer encoder, and it does not participate in the encoding process.
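The mask $g_m$ and the learnable span parameter δ can be illustrated with the short sketch below. It is a single-head simplification under our own assumptions: relative position embeddings, per-head δ values, batching, and the exclusion of the current token from the normalization in Eqs. (4)-(5) are omitted for brevity.

```python
import torch
import torch.nn as nn

class FuzzySpanMask(nn.Module):
    """Linear-ramp attention mask g_m over relative distance z = t - r:
    full attention for z <= l, linear decay over the next d positions, zero beyond,
    with l = delta * L_span and delta a learnable scalar (one per head in FSUIE)."""

    def __init__(self, l_span=30, d=32, init_delta=0.5):
        super().__init__()
        self.l_span, self.d = l_span, d
        self.delta = nn.Parameter(torch.tensor(init_delta))

    def forward(self, scores):
        # scores: (seq_len, seq_len) raw attention scores s_{tr}
        seq_len = scores.size(-1)
        t = torch.arange(seq_len).unsqueeze(1)   # query positions
        r = torch.arange(seq_len).unsqueeze(0)   # key positions
        z = (t - r).float()                      # relative distance
        l = self.delta.clamp(0.0, 1.0) * self.l_span
        g = ((-z + l + self.d) / self.d).clamp(0.0, 1.0)          # g_m = clip(g_a, 0, 1)
        g = torch.where((z >= 0) & (z <= self.l_span), g, torch.zeros_like(g))
        weights = g * scores.exp()
        return weights / weights.sum(dim=-1, keepdim=True).clamp_min(1e-12)
```

Because δ is a parameter, gradient descent adjusts the full-attention length l = δL_span jointly with the rest of the model, which is how each head can end up with its own fuzzy span.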
Therefore, although the maximum range of fuzzy span attention is limited by L*span*, it only affects span decisions and does not have any impact on the representation of tokens in the sequence. ## 3 Experiments 3.1 Setup Tasks We conducted experiments on 4 datasets for 3 common information extraction tasks: NER, RE, and ASTE. The datasets we used include ACE2004, ACE2005, ADE (Gurulingappa et al., 2012) for NER and RE tasks, and ASTE-DataV2 (Xu et al., 2020) for ASTE task. We evaluate our model using different metrics for the three IE tasks. For NER, we use the Entity F1 score, in | Models | PLM | ACE04 | ACE05 | ADE | | | | | | | |---------------------------------------------|----------------|---------|---------|-------|-------|-------|-------|-------|-------|-------| | P | R | F1 | P | R | F1 | P | R | F1 | | | | BENSC (Tan et al., 2020) | BERT-base | 85.80 | 84.80 | 85.30 | 83.80 | 83.90 | 83.90 | - | - | - | | (Ma et al., 2020) | BERT-large | - | - | - | - | - | 88.60 | - | - | - | | SpERT (Eberts and Ulges, 2019) | BERT-base | - | - | - | - | - | - | 88.69 | 89.20 | 88.95 | | SpERT.PL (Santosh et al., 2021) | Bio-BERT | - | - | - | - | - | - | 90.05 | 91.69 | 90.86 | | BoningKnife (Jiang et al., 2021) | BERT-base | 85.98 | 86.86 | 86.41 | 84.77 | 86.16 | 85.46 | - | - | - | | JCBIE (He et al., 2022) | Bio-BERT | - | - | - | - | - | - | - | - | 87.80 | | GLOBAL POINTER (Su et al., 2022) | BERT-base | - | - | - | - | - | - | - | - | 90.10 | | BS (Zhu and Li, 2022) | RoBERTa-base | 88.43 | 87.53 | 87.98 | 86.25 | 88.07 | 87.15 | - | - | - | | Triaffine (Yuan et al., 2022) | BERT-large | 87.13 | 87.68 | 87.40 | 86.70 | 86.94 | 86.82 | - | - | - | | Triaffine (Yuan et al., 2022) | ALBERT-xxlarge | 88.88 | 88.24 | 88.56 | 87.39 | 90.31 | 88.83 | - | - | - | | (Yan et al., 2021) | BERT-large | 87.27 | 86.41 | 86.84 | 83.16 | 86.38 | 84.74 | - | - | - | | Generative UIE (SEL only) (Lu et al., 2022) | T5-v1.1-large | - | - | 86.52 | - | - | 85.52 | - | - | - | | Generative UIE (Lu et al., 2022) | T5-v1.1-large | - | - | 86.89 | - | - | 85.78 | - | - | - | | UIE-base | BERT-base | 88.25 | 80.31 | 84.09 | 86.19 | 83.12 | 84.63 | 87.85 | 91.56 | 89.67 | | FSUIE-base | BERT-base | 85.67 | 84.82 | 85.24 | 87.05 | 85.40 | 86.22 | 91.17 | 92.17 | 92.49 | | FSUIE-large | BERT-large | 86.15 | 86.17 | 86.16 | 88.06 | 85.79 | 86.91 | 93.82 | 92.21 | 93.08 | which an entity prediction is correct if its span and type match a reference entity. For RE, we use the Relation Strict F1 score, where a relation is considered correct only if its relation type and related entity spans are all correct. For ASTE, we use the Sentiment Triplet F1 score, where a triplet is considered correct if the aspect, opinion, and sentiment polarity are all correctly identified. Training Details We trained two variations of FSUIE, FSUIE-base and FSUIE-large, which are based on the BERT-base and BERT-large model architecture and pre-training parameters respectively. In addition, we also trained a UIE-base based on BERT-base as a baseline without using FSL and FSA layers. In FSUIE, we added the FSA layer and the span boundary prediction layer to both models. Specifically, FSUIE-base has 12 layers of 12-head Transformer layers, with a hidden size of 768, while FSUIE-large has 24 layers of 16-head Transformer layers, with a hidden size of 1024. 
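For orientation, the sketch below shows one plausible way the pieces described above could fit together: a pre-trained encoder, a single FSA layer on top that only produces the span-aware representation, and linear start/end boundary scorers. This is only our reading of the description, not the actual FSUIE code; in particular, the UIE-style conditioning on the target schema (e.g., a prompt specifying the entity or relation type), relative position embeddings, multiple heads, and batching are omitted, and `FuzzySpanMask` refers to the earlier sketch.

```python
import torch.nn as nn
from transformers import AutoModel  # assumed dependency, not specified by the paper

class FSUIESketch(nn.Module):
    """Rough reading of the described stack: BERT encoder, one fuzzy-span attention
    layer used only for span decisions, and start/end boundary scorers."""

    def __init__(self, plm="bert-base-uncased", l_span=30, d=32):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm)
        hidden = self.encoder.config.hidden_size
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        self.fsa = FuzzySpanMask(l_span, d)
        self.start_head = nn.Linear(hidden, 1)
        self.end_head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Token representations from the encoder are left untouched by FSA.
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state[0]
        scores = self.query(h) @ self.key(h).T / h.size(-1) ** 0.5   # raw s_{tr}, no RPE
        span_aware = self.fsa(scores) @ self.value(h)                # span-aware representation
        return self.start_head(span_aware).squeeze(-1), self.end_head(span_aware).squeeze(-1)
```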
During training, we set the parameters of the Gaussian distribution in FSL as σ = 0.5, the distribution value truncation threshold θ to 0.3, sampling step s to 0.3, and the loss coefficient λ to 0.01. And the parameter µ is set to the coordinate of annotation boundary. The hyper-parameters L*span* and d involved are determined based on the statistics of the target length on the UIE training data. During training, we set L*span* to 30 and d to 32, and experimentation results have shown that the model's performance is not significantly sensitive to the choice of these hyper-parameters (refer to comparison in Appendix C). We trained both models for 50 epochs with a learning rate of 1e-5 on the datasets of each task, and selected the final model based on the performance on the development set. The code is available at https://github.com/pengts/FSUIE. ## 3.2 Results On Ner Tasks We report the results of NER task in Table 1. By comparing the results of our baseline UIE-base with other methods, it can be seen that UIE-base has achieved comparable results compared to other methods that use the same BERT-base architecture. It serves as a strong baseline to visually demonstrate enhancements made by FSL and FSA. By introducing FSL and FSA, our FSUIE-base achieves significant performance improvements over the UIE-base that does not have fuzzy span mechanism (+1.15, +1.59, +1.99 F1 scores). Our proposed FSUIE model shows the most significant improvement on the ADE dataset. This is primarily due to the smaller scale of training datasets in the ADE dataset, which allows the model to easily learn generalized fuzzy span-aware representations. This demonstrates the superiority of the FSUIE model. FSL and FSA enable the model to reduce overdependence on label span boundaries and learn span-aware representations. When compared to existing NER models, FSUIE achieves new stateof-the-art performance on the ADE dataset even with the BERT-base backbone. FSUIE-large even achieve significant improvement (+1.42) on FSUIEbase. FSUIE-large also achieves comparable results on the ACE04 and ACE05 datasets, even when compared to models using stronger pre-trained language models such as ALBERT-xxlarge. 
Furthermore, our FSUIE demonstrates an advantage in | Models | PLM | ACE04 | ACE05 | ADE | | | | | | | |---------------------------------------------|----------------|---------|---------|-------|-------|-------|-------|-------|-------|-------| | P | R | F1 | P | R | F1 | P | R | F1 | | | | SpERT (Eberts and Ulges, 2019) | BERT-base | - | - | - | - | - | - | 77.77 | 79.96 | 78.84 | | SpERT.PL (Santosh et al., 2021) | Bio-BERT | - | - | - | - | - | - | 80.11 | 84.18 | 82.03 | | JCBIE (He et al., 2022) | Bio-BERT | - | - | - | - | - | - | - | - | 74.18 | | (Ma et al., 2020) | BERT-base | - | - | - | - | - | 66.10 | - | - | - | | (Ma et al., 2020) | BERT-large | - | - | - | - | - | 68.10 | - | - | - | | Tabel-Sequence Encoder (Wang and Lu, 2020) | ALBERT-xxlarge | - | - | 63.30 | - | - | 67.60 | - | - | 80.10 | | PL-Marker (Ye et al., 2022) | BERT-base | - | - | 66.70 | - | - | 69.00 | - | - | - | | PL-Marker (Ye et al., 2022) | ALBERT-xxlarge | - | - | 69.70 | - | - | 73.00 | - | - | - | | Generative UIE (SEL only) (Lu et al., 2022) | T5-v1.1-large | - | - | - | - | - | 64.68 | - | - | - | | Generative UIE (Lu et al., 2022) | T5-v1.1-large | - | - | - | - | - | 66.06 | - | - | - | | UIE-base | BERT-base | 88.73 | 53.46 | 66.72 | 95.38 | 52.04 | 67.34 | 67.61 | 56.31 | 61.45 | | FSUIE-base | BERT-base | 91.78 | 58.99 | 71.82 | 96.79 | 57.69 | 72.29 | 91.10 | 75.87 | 82.79 | | FSUIE-large | BERT-large | 89.01 | 61.13 | 72.48 | 98.84 | 59.34 | 74.16 | 92.02 | 78.23 | 84.57 | terms of its structure prediction compared to the generative UIE model. As it does not require the generation of complex IE linearized sequences, our FSUIE-base, which only uses BERT-base as its backbone, outperforms the generative UIE model that uses T5-v1.1-large on the ACE05 dataset. ## 3.3 Results On Re Tasks In Table 2, we present the results of the RE tasks. Compared to the baseline, UIE-base, which does not incorporate fuzzy span mechanism, our proposed FSUIE-base, which incorporates FSL and FSA, also achieves a significant improvement on the RE task using same backtone. Furthermore, when compared to the Table-Sequence Encoder approach (Wang and Lu, 2020), our method learns label span boundary distribution and span-aware representations, resulting in optimal or competitive results on the RE task even with FSUIE-base, despite using a simpler structure and smaller PLM backbones. Compared to span-based IE models, our method outperforms the traditional joint extraction model by performing a two stage span extraction and introducing the fuzzy span mechanism. Specifically, on the ADE dataset, our method performs better than joint extraction methods using Bio-BERT, a domain-specific pre-trained language model on biomedical corpus, even using BERT-base as the pre-trained language model. This demonstrates that the fuzzy span mechanism we introduced can extract general information from the data, giving the model stronger information extraction capabilities, rather than simply fitting the data. Compare to generative UIE models, our spanbased FSUIE reflects the reality of the structure of IE task and does not require additional sequence generation structures, achieving higher results with less parameters even with FSUIE-base. Compared to models that perform relation extraction using a pipeline approach, like PL-Marker, our FSUIE improves performance in both stages of the pipeline by introducing FSL and FSA. As a result, it results in an overall improvement in relation extraction. 
Additionally, our model achieves new state-of-theart results on ACE04 and ADE datasets,even using only BERT-base as the backbone, and on ACE05 dataset with FSUIE-large, compared to other models that use more complex structures. This demonstrates the model's ability to effectively extract information through our proposed method. ## 3.4 Results On Aste Tasks In Table 3, we present the results of our experiments on the ASTE task. Due to the small scale of the ASTE-Data-V2, FSUIE-large is not needed to achieve better results, and this section only uses FSUIE-base for comparison. It can be seen that by introducing the fuzzy span mechanism, our FSUIE model significantly improves ASTE performance compared to the baseline UIE-base. This also demonstrates the effectiveness and generalization ability of FSUIE in IE tasks. Additionally, our FSUIE-base model achieves state-of-the-art results on three datasets (14lap, 15res, 16res) and demonstrates competitive performance on the 14res dataset. This indicates that the fuzzy span mechanism is effective in improving the model's ability to exploit and extract information, as well as its performance on specific tasks without increasing model parameters. Furthermore, our FSUIE model has a relatively simple architecture, compared to other models, which shows that FSUIE is able to improve per- | Models | PLM | 14lap | 14res | 15res | 16res | | | | | | | | | |---------------------------------------------|---------------|---------|-------------------|---------|---------|-------|-------------------|-------------|-------|-------|-------|-------|-------| | P | R | F1 | P | R | F1 | P | R | F1 | P | R | F1 | | | | JET (Xu et al., 2020) | BERT-base | 55.39 | 47.33 | 51.04 | 70.56 | 55.94 | 62.40 | 64.45 | 51.96 | 57.53 | 70.42 | 58.37 | 63.83 | | Dual-MRC (Mao et al., 2021) | BERT-base | 57.39 | 53.88 | 55.58 | 71.55 | 69.14 | 70.32 | 63.78 | 51.87 | 57.21 | 68.60 | 66.24 | 67.40 | | ASTE-RL (Mao et al., 2021) | BERT-base | 64.80 | 54.99 | 59.50 | 70.60 | 68.65 | 69.61 | 65.45 | 60.29 | 62.72 | 67.21 | 69.69 | 68.41 | | GAS (Zhang et al., 2021) | BERT-base | - | - | 60.78 | - | - | 72.16 | - | - | 62.10 | - | - | 70.10 | | Span-ASTE (Xu et al., 2021) | BERT-base | 63.44 | 55.84 | 59.38 | 72.89 | 70.89 | 71.85 | 62.18 | 64.45 | 63.27 | 69.45 | 71.17 | 70.26 | | SBN (Chen et al., 2022) | BERT-base | 65.68 | 59.88 | 62.65 | 76.36 | 72.43 | 74.34 | 69.93 | 60.41 | 64.82 | 71.59 | 72.57 | 72.08 | | Generative UIE (SEL only) (Lu et al., 2022) | T5-v1.1-large | - | - | 63.15 | - | - | 73.78 | - | - | 66.10 | - | - | 73.87 | | Generative UIE (Lu et al., 2022) | T5-v1.1-large | - | - | 63.88 | - | - | 74.52 | - | - | 67.15 | - | - | 75.07 | | UIE-base | BERT-base | 65.21 | 57.64 | 61.19 | 75.32 | 71.53 | 73.38 | 71.11 | 65.98 | 68.45 | 74.84 | 70.04 | 72.36 | | FSUIE-base | BERT-base | 69.49 | 62.06 65.56 76.22 | 72.23 | 74.17 | 72.71 | 68.66 70.63 77.98 | 73.74 75.80 | | | | | | formance without the need for complex structures. The gap in performance between UIE models and other models can be attributed, in part, to the advantage of UIE pre-training, which is further enhanced by our proposed fuzzy span mechanism. Compared to models that decompose the ASTE task into two subtasks of opinion recognition and sentiment classification, and use separate models to handle each, our FSUIE model achieved better performance using a unified model architecture. 
For ASTE, span-based UIE models, as opposed to generative UIE models, can leverage the complete semantic information of the predicted aspect span to assist in extracting opinions and sentiments. The fuzzy span mechanism enhances the model's ability to exploit the semantic information within the fuzzy span, where possible opinions and sentiments reside, while ensuring span-aware representation learning, resulting in significant improvements.Furthermore, FSUIE is a reaction to the real structure of IEtask, avoiding the extra parameters that sequence generation structures bring, and therefore outperforms generative UIE models with fewer parameters. We notice that FSUIE improves relatively less on the RE task compared to the ASTE task. In the RE task, the model has to learn different entities, different types of relationships, and binary matching skills. In contrast, in ASTE tasks, the model only needs to learn different entities, two relationships that differ significantly in semantics (opinion and sentiment), and ternary pairing tips. From this perspective, RE tasks are more challenging than ASTE tasks. ## 3.5 Results On Low-Resource Settings To demonstrate the robustness of our proposed FSUIE method in low-resource scenarios, we conducted experiments using a reduced amount of Entity F1 1% 5% 25% 100% UIE-base 63.47 72.98 83.08 84.63 FSUIE-base 70.09 77.20 83.49 85.22 Relation Strict F1 1% 5% 25% 100% UIE-base 6.68 45.55 64.43 66.72 FSUIE-base 9.73 53.44 66.08 71.82 Sentiment Triplet F1 1% 5% 25% 100% UIE-base 45.66 63.12 73.73 73.38 FSUIE-base 46.87 63.79 74.27 74.17 training data on ACE04 for NER and RE tasks, and 14res for ASTE task. Specifically, we created three subsets of the original training data at 1%, 5%, and 25% of the original size. In each lowresource experiment, we trained the model for 200 epochs instead of 50 epochs. The results of these experiments were compared between FSUIE-base and UIE-base and are presented in Table 4. The results of the low-resource experiments further confirm the superior performance of FSUIE over UIE in handling low-resource scenarios. With only a small fraction of the original training data, FSUIE is still able to achieve competitive or even better performance than UIE. This demonstrates the robustness and generalization ability of FSUIE in dealing with limited data. Overall, the results of the low-resource experiments validate the ability of FSUIE to effectively handle low-resource scenarios and extract rich information through limited data. We also found that the model both performed better on NER and ASTE taks than on RE task under low-resource settings. This is because NER and ASTE tasks are simpler than RE, so less data can bring better learning performance. Additionally, we noticed a small performance decrease in the ASTE task for the 100% set compared to the ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) 25% set. This change may be due to the fact that the training data is unbalanced, and reducing the training size can alleviate this phenomenon. ## 3.6 Ablation Study Since FSUIE has been verified to make more effective use of the information in the training set, in order to verify this, we verify it from the perspective of the model training process. Specifically, we recorded the effects of baseline UIE-base, UIEbase+FSL, UIE-base+FSA and full model FSUIEbase on different training steps on the NER ACE04 test set, and the results are shown in Figure 4. 
We noticed that the models with FSA have a significantly faster convergence speed, indicating that by learning span-aware representations, which are closer to the span prediction goal, the span learning process becomes more easy and efficient. With FSA, the model can focus its attention on the necessary positions and capture the possible span within a given sequence. While for FSL, it have a similar convergence trend with the baseline, thus may not improve the convergence speed. | Models | P | R | F1 | |----------------|-------|-------|-------| | UIE-base | 87.85 | 91.56 | 89.67 | | UIE-base + FSL | 89.61 | 90.58 | 90.09 | | UIE-base + FSA | 89.21 | 90.09 | 89.65 | | FSUIE-base | 91.17 | 92.17 | 92.49 | To further investigate the contribution of FSL and FSA to the improvement of model performance, we conduct ablation experiments on the NER task using the ADE dataset. The specific experimental results are shown in Table 5. It can be seen that the introduction of FSL alone can improve model performance individually. When using FSA alone, the performance of the model drops slightly. However, when both FSL and FSA are used together, the model is significantly enhanced. From our perspectives, the separate introduction of FSA makes the model focus on specific parts of the sequence rather than global representation, resulting in a loss of information from text outside the span. This may explain the slight drop in performance when using UIE+FSA. However, this also demonstrates that in the IE task, sequence information outside a specific span has a very limited impact on the results. The introduction of FSL alleviates the model's over-dependence on label span boundaries, allowing the model to extract more information, resulting in an improvement in both settings. When FSA and FSL operate simultaneously, the model extracts more information from the text and FSA guides the model to filter the more critical information from the richer information, resulting in the most substantial improvement. ## 3.7 Visualization Of Fsa To further examine the effectiveness of the fuzzy span mechanism, we visualized the attention distribution of the FSA layer in FSUIE-large as shown in Figure 5. It should be noted that FSA is only placed at the top layer for constructing span-aware representation and does not participate in the encoding process, thus only affects span decisions rather than the representation of tokens in the sequence. The attention distribution indicates that, for a given input text, each token in the final encoding sequence tends to focus on semantic information within a limited range of preceding tokens rather than on the global representation of the input text. This aligns with our expectation for the design of the fuzzy span mechanism and confirms that fuzzy span mechanism does indeed guide the model for appropriate attention distribution for IE tasks. ## 4 Related Work Universal Model Building universal model structures for a wide range of NLP tasks has been a hot research area in recent years. The focus is on building model structures that can be adapted to different sources of data, different types of labels, different languages, and different tasks. 
Several universal models have been proposed, such as models learning deep contextualized word representations (Peters et al., 2018; Devlin et al., 2019), event extraction models that can predict different labels universally (Lu et al., 2021), models that can handle multiple languages (Arivazhagan et al., 2019; Aharoni et al., 2019; Conneau et al., 2020), a universal fine-tuned approach to transfer learning (Howard and Ruder, 2018), models that learning syntactic dependency structure over many typologically different languages (Li et al., 2018; Sun et al., 2020) and models that can universally model various IE tasks in a unified text-to-structure framework (Lu et al., 2022). This paper builds upon the UIE by incorporating the fuzzy span mechanism to improve IE performance. Information Extraction IE is the task of extracting structured information from unstructured text data. This includes NER, RE, ASTE, Event Extraction (EE), Aspect-Based Sentiment Analysis (ABSA), etc. Research has proposed numerous approaches for IE, such as rule-based (Appelt and Onyshkevych, 1998), machine learning (TéllezValero et al., 2005; Kolya et al., 2010), deep learning (Qin et al., 2018), active learning (Radmard et al., 2021), and logic fusion (Wang and Pan, 2020). There are still many task-specific models being proposed, based on previous approaches and structures, e.g., NER (Li et al., 2022; Zhu and Li, 2022; Yang et al., 2022; Zhang et al., 2019); RE (Nan et al., 2020; Lai et al., 2021), ABSA (Jing et al., 2021); and ASTE (Xu et al., 2021, 2020). More related work about sparse attention please refer to Appendix B. ## 5 Conclusion In this paper, we proposed the Fuzzy Span Universal Information Extraction (FSUIE) framework, an improvement for Universal Information Extraction. To make use of boundary information in the training data and learn a decision-closer span-aware representation, we proposed a fuzzy span loss and fuzzy span attention. Extensive experiments on several main IE tasks show that our FSUIE has a significant improvement compared to the UIE baseline, and achieves state-of-the-art results on ADE NER datasets, ACE04 RE, ACE05 RE and ADE RE datasets and four ASTE datasets. The experiments also reveal FSUIE's fast convergence and good generality in low-resource settings. All the results demonstrate the effectiveness and generalizability of our FSUIE in information extraction. ## 6 Limitations This paper are based on the assumption that Universal Information Extraction (UIE) models have limitations, particularly with regards to over-reliance on label span boundaries and inflexible attention span length. Therefore, the proposed framework may be computationally and spatially expensive as it requires a more complex attention mechanism and additional computing power for training. Nevertheless, this limitation of the span-based UIE model can be overlooked in comparison to that of the generative UIE model, which uses a stronger language model. Additionally, the probability density functions explored in FSL are limited; thus, further research is needed to develop a more targeted strategy for adjusting the correct information distribution. ## References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. 
Association for Computational Linguistics. Douglas E. Appelt and Boyan Onyshkevych. 1998. The common pattern specification language. In TIPSTER ## Text Program Phase Iii: Proceedings Of A Workshop Held At Baltimore, Maryland, October 13-15, 1998, Pages 23–30, Baltimore, Maryland, Usa. Association For Computational Linguistics. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George F. Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. CoRR, abs/1907.05019. Yuqi Chen, Keming Chen, Xian Sun, and Zequn Zhang. 2022. A span-level bidirectional network for aspect sentiment triplet extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP). Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. CoRR, abs/1904.10509. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Markus Eberts and Adrian Ulges. 2019. Span-based joint entity and relation extraction with transformer pre-training. CoRR, abs/1909.07755. Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drugrelated adverse effects from medical case reports. J. Biomed. Informatics, 45(5):885–892. Kai He, Rui Mao, Tieliang Gong, Erik Cambria, and Chen Li. 2022. Jcbie: a joint continual learning neural network for biomedical information extraction. BMC bioinformatics, 23(1):1–20. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Huiqiang Jiang, Guoxin Wang, Weile Chen, Chengxi Zhang, and Börje F. Karlsson. 2021. Boningknife: Joint entity mention detection and typing for nested NER via prior boundary knowledge. CoRR, abs/2107.09429. Hongjiang Jing, Zuchao Li, Hai Zhao, and Shu Jiang. 2021. Seeking common but distinguishing difference, a joint aspect-based sentiment analysis model. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3910–3922, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Anup Kumar Kolya, Asif Ekbal, and Sivaji Bandyopadhyay. 2010. A supervised machine learning approach for event-event relation identification. 
In Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation, pages 447– 454, Tohoku University, Sendai, Japan. Institute of Digital Enhancement of Cognitive Processing, Waseda University. Tuan Lai, Heng Ji, ChengXiang Zhai, and Quan Hung Tran. 2021. Joint biomedical entity and relation extraction with knowledge-enhanced collective inference. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6248–6260, Online. Association for Computational Linguistics. Zhuoran Li, Chunming Hu, Xiaohui Guo, Junfan Chen, Wenyi Qin, and Richong Zhang. 2022. An unsupervised multiple-task and multiple-teacher model for cross-lingual named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 170–179, Dublin, Ireland. Association for Computational Linguistics. Zuchao Li, Shexia He, Zhuosheng Zhang, and Hai Zhao. 2018. Joint learning of POS and dependencies for multilingual Universal Dependency parsing. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 65–73, Brussels, Belgium. Association for Computational Linguistics. Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, and Shaoyi Chen. 2021. Text2Event: Controllable sequenceto-structure generation for end-to-end event extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2795–2806, Online. Association for Computational Linguistics. Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5755–5772, Dublin, Ireland. Association for Computational Linguistics. Youmi Ma, Tatsuya Hiraoka, and Naoaki Okazaki. 2020. Named entity recognition and relation extraction using enhanced table filling by contextualized representations. CoRR, abs/2010.07522. Yue Mao, Yi Shen, Chao Yu, and Longjun Cai. 2021. A joint training dual-mrc framework for aspect based sentiment analysis. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13543–13551. AAAI Press. Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1546–1557, Online. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237, New Orleans, Louisiana. Association for Computational Linguistics. Pengda Qin, Weiran Xu, and William Yang Wang. 2018. 
Robust distant supervision relation extraction via deep reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2137–2147, Melbourne, Australia. Association for Computational Linguistics. Puria Radmard, Yassir Fathullah, and Aldo Lipani. 2021. Subsequence based deep active learning for named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4310–4321, Online. Association for Computational Linguistics. TYSS Santosh, Prantika Chakraborty, Sudakshina Dutta, Debarshi Kumar Sanyal, and Partha Pratim Das. 2021. Joint entity and relation extraction from scientific documents: Role of linguistic information and entity types. Jianlin Su, Ahmed Murtadha, Shengfeng Pan, Jing Hou, Jun Sun, Wanwei Huang, Bo Wen, and Yunfeng Liu. 2022. Global pointer: Novel efficient spanbased approach for named entity recognition. CoRR, abs/2208.03054. Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 331–335, Florence, Italy. Association for Computational Linguistics. Kailai Sun, Zuchao Li, and Hai Zhao. 2020. Cross-lingual universal dependency parsing only from one monolingual treebank. arXiv preprint arXiv:2012.13163. Chuanqi Tan, Wei Qiu, Mosha Chen, Rui Wang, and Fei Huang. 2020. Boundary enhanced neural span classification for nested named entity recognition. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9016–9023. AAAI Press. Alberto Téllez-Valero, Manuel Montes-y-Gómez, and Luis Villaseñor Pineda. 2005. A machine learning approach to information extraction. In Computational Linguistics and Intelligent Text Processing, 6th International Conference, CICLing 2005, Mexico City, Mexico, February 13-19, 2005, Proceedings, volume 3406 of Lecture Notes in Computer Science, pages 539–547. Springer. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with table-sequence encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1706–1721, Online. Association for Computational Linguistics. Wenya Wang and Sinno Jialin Pan. 2020. Integrating deep learning with logic fusion for information extraction. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9225–9232. AAAI Press. Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. 
Learning span-level interactions for aspect sentiment triplet extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4755–4766, Online. Association for Computational Linguistics. Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020. Position-aware tagging for aspect sentiment triplet extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2339–2349, Online. Association for Computational Linguistics. Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808–5822, Online. Association for Computational Linguistics. Yifei Yang, Zuchao Li, and Hai Zhao. 2022. Nested named entity recognition as corpus aware holistic structure parsing. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 2472–2482. International Committee on Computational Linguistics. Deming Ye, Yankai Lin, Peng Li, and Maosong Sun. 2022. Packed levitated marker for entity and relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4904– 4917, Dublin, Ireland. Association for Computational Linguistics. Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3174– 3186, Dublin, Ireland. Association for Computational Linguistics. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In Advancesin Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021. Towards generative aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 504–510, Online. Association for Computational Linguistics. Zhuosheng Zhang, Bingjie Tang, Zuchao Li, and Hai Zhao. 2019. Modeling named entity embedding distribution into hypersphere. CoRR, abs/1909.01065. Enwei Zhu and Jinpeng Li. 2022. Boundary smoothing for named entity recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7096–7108, Dublin, Ireland. Association for Computational Linguistics. Yimeng Zhuang, Jing Zhang, and Mei Tu. 2022. Longrange sequence modeling with predictable sparse attention. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 234–243, Dublin, Ireland. Association for Computational Linguistics. 
## 7 Appendix

## 7.1 $g_a$ in FSA

| Models | P | R | F1 |
|--------------------|-------|-------|-------|
| UIE-base | 87.85 | 91.56 | 89.67 |
| FSUIE-base ($g_a'$) | 89.84 | 90.69 | 90.26 |
| FSUIE-base ($g_a''$) | 90.93 | 90.41 | 90.67 |
| FSUIE-base ($g_a^l$) | 91.17 | 92.17 | 92.49 |

In Table 6, we present the performance of models using various $g_a$ functions in the FSUIE technique on the ADE NER test set, where $g_a^l$ denotes the linear attenuated function employed in FSUIE. Compared to UIE-base, which does not integrate the fuzzy span mechanism, all FSUIE-based models employing different $g_a$ functions obtain better results, illustrating the superiority of FSUIE. Regarding the different $g_a$ strategies, FSUIE-base ($g_a'$) shows minimal enhancement. This is likely because the attenuated fuzzy attention span better reflects the real reading context and enables the model to take advantage of more abundant information within the boundary of the attention span. The best performance is achieved by FSUIE-base ($g_a^l$), which indicates that the attention should not decay too quickly at the boundary of the attention span, as evidenced by the results of $g_a''$.

## 7.2 Related Work on Sparse Attention

The high time and space complexity of the Transformer ($O(n^2)$) is due to the fact that it needs to calculate the attention information between each step and all previous contexts. This makes it difficult for the Transformer to scale in terms of sequence length. To address this issue, sparse attention was proposed (Child et al., 2019). This refers to attention mechanisms that focus on a small subset of the input elements, rather than processing the entire input sequence. This method allows attention to be more focused on the most contributing value factors, thus reducing memory and computing capacity requirements. Based on the idea of sparse attention, various approaches have been proposed, such as an adaptive width-based attention learning mechanism and a dynamic attention mechanism that allows different heads to learn only the region of attention (Sukhbaatar et al., 2019). Zaheer et al. (2020) proposed an $O(N)$-complexity model with three different sparse attentions. Zhuang et al. (2022) sought to make the sparse attention matrix predictable. This paper, however, builds on adaptive span attention (Sukhbaatar et al., 2019) to establish fuzzy span attention, which aims at learning a span-aware representation suited to the actual needs of information extraction tasks. Our approach differs from previous work in that we aim to obtain a fuzzy span of attention in the process of locating the target, rather than to reduce computational and memory overhead.

## 7.3 $L_{span}$ and d in FSA

[Table 7: Performance of FSUIE-base models with different d.]

| $L_{span}$ | P | R | F1 |
|---------|-------|-------|-------|
| 16 | 91.65 | 93.83 | 92.73 |
| 30 | 92.58 | 92.40 | 92.49 |
| 48 | 92.25 | 92.69 | 92.47 |

Table 8: Performance of FSUIE-base models with different $L_{span}$.

In Table 7, we present the performance of FSUIE-base models using various values of the hyper-parameter d on the ADE NER test set. In Table 8, we present the performance of FSUIE-base models using various values of the hyper-parameter $L_{span}$ on the ADE NER test set. The results demonstrate that the model's performance is not significantly affected by the choice of these hyper-parameters.
## 7.4 Single-Side And Both-Side Ambiguity In Fsl FSL strategies P R F1 ![12_image_2.png](12_image_2.png) both-side 92.58 92.40 92.49 single-side 91.99 92.69 92.34 Actually, there may be cases of single-side ambiguity in the labeling of entity boundaries in the text. Therefore, we demonstrate FSUIE-base models' performance with different FSL strategies in Table 9, where "single-side" means applying FSL only on start boundary and "both-side" means applying FSL on both start and end boundary. The results suggests that the influence of single-sided and both-sided fuzziness on the model's performance is limited, because not all head words are at the end or start, and FSL only performs limited left/right extrapolation on precise boundaries, without affecting the important information provided by the original boundary. For generalization purposes, we utilized both-sides fuzzy span in FSUIE. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section \#Limitation A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Section \#1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. ✓ B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? No response. C3. 
Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? No response. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? No response. D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
michael-etal-2023-nlp
What Do {NLP} Researchers Believe? Results of the {NLP} Community Metasurvey
https://aclanthology.org/2023.acl-long.903
We present the results of the NLP Community Metasurvey. Run from May to June 2022, it elicited opinions on controversial issues, including industry influence in the field, concerns about AGI, and ethics. Our results put concrete numbers to several controversies: For example, respondents are split in half on the importance of artificial general intelligence, whether language models understand language, and the necessity of linguistic structure and inductive bias for solving NLP problems. In addition, the survey posed meta-questions, asking respondents to predict the distribution of survey responses. This allows us to uncover false sociological beliefs where the community{'}s predictions don{'}t match reality. Among other results, we find that the community greatly overestimates its own belief in the usefulness of benchmarks and the potential for scaling to solve real-world problems, while underestimating its belief in the importance of linguistic structure, inductive bias, and interdisciplinary science.
# What Do Nlp Researchers Believe? Results Of The Nlp Community Metasurvey Julian Michael1 , Ari Holtzman2 , Alicia Parrish3∗ , Aaron Mueller4 , Alex Wang5∗ , Angelica Chen1 , Divyam Madaan1 , Nikita Nangia1 , Richard Yuanzhe Pang1 , Jason Phang1 , and Samuel R. Bowman1,6∗ 1New York University, 2University of Washington, 3Google, 4Johns Hopkins University, 5Cohere, 6Anthropic nlp-metasurvey-admin@nyu.edu ## Abstract We present the results of the NLP Community Metasurvey. Run from May to June 2022, it elicited opinions on controversial issues, including industry influence in the field, concerns about AGI, and ethics. Our results put concrete numbers to several controversies: For example, respondents are split in half on the importance of artificial general intelligence, whether language models understand language, and the necessity of linguistic structure and inductive bias for solving NLP problems. In addition, the survey posed *meta*-questions, asking respondents to predict the distribution of survey responses. This allows us to uncover *false sociological beliefs* where the community's predictions don't match reality. Among other results, we find that the community greatly overestimates its own belief in the usefulness of benchmarks and the potential for scaling to solve real-world problems, while underestimating its belief in the importance of linguistic structure, inductive bias, and interdisciplinary science. ## 1 Introduction What do NLP researchers think about NLP? Are we devoting too many resources to scaling up? Do language models understand language? Is the traditional paradigm of model benchmarking still tenable? What models are ethical for researchers to build and release? These questions and many more are actively debated in the research community, and views on them are a major factor in deciding what work gets done. Understanding the prevalence of different views on these issues is valuable for understanding the trajectory of NLP research and the structure of the field. In addition, communication among researchers often rests on *sociological beliefs* about these questions: what people think people think. Getting these sociological beliefs wrong can slow ∗Work done while at NYU. down communication and lead to wasted effort, missed opportunities, and needless fights. For example, researchers might spend time and effort defending a widely-believed position which they mistakenly think is controversial, or might fail to argue effectively if they appeal to premises which they believe are well-established but are actually contentious or unpopular. Many of the ways that the NLP research community gets to know itself—invited talks, panel discussions, social media, etc.—are biased, for example through self-selection towards similar people and amplification of already-prominent and controversial voices. This makes it difficult to get a sense of the community's beliefs as a whole. For these reasons, we believe it is worth trying to objectively assess NLP researchers' views on controversial issues. So from May to June 2022, we conducted the **NLP Community Metasurvey**. We present stances, such as *Currently, the field focuses too* much on scaling up machine learning models (Q51), and ask respondents whether they agree or disagree. Then we ask them to predict what percent of respondents who will agree. This gives us insight into the community's object-level beliefs as well as its sociological beliefs, and allows us to identify where the two may be misaligned. 
This work is directly inspired by the PhilPapers Surveys (Bourget and Chalmers, 2014, 2020), an initiative by philosophers to assess the philosophy community's beliefs about current topics in their field and their sociological beliefs about their professional community. The rest of this document reports the methodology and results of the NLP Community Metasurvey. Our results are *descriptive*, not prescriptive, as these issues cannot be resolved by majority vote. By necessity, we are covering a subjectively chosen set of questions and reducing many complex issues into simplified scales, but we hope that these results can create common ground for fruitful discussion among NLP researchers.

[Figure 1: Respondent gender and location statistics.]

## 2 Methodology

**Survey Construction** The survey questions are shown in Figures 3 and 4. We present them in thematic sections (e.g., *State of the Field*), phrased as statements expressing a certain opinion (e.g., Q1-1. Private firms have too much influence in guiding the trajectory of the field). Respondents answer on a 4-point scale of AGREE, WEAKLY AGREE, WEAKLY DISAGREE, and DISAGREE. We don't include a neutral middle option because our intent is to push respondents to consider which side they stand on; we instruct them to choose WEAKLY AGREE or WEAKLY DISAGREE if they have even slight preferences for one side or the other (e.g., "depends, leaning negative"). For cases where they truly cannot make a judgment, we include three OTHER answers: QUESTION IS ILL-POSED, INSUFFICIENTLY INFORMED ON THE ISSUE, and PREFER NOT TO SAY. At the end of each section, we ask respondents to predict the percentage of our target population who will either AGREE or WEAKLY AGREE with each statement. Respondents choose one of five buckets: 0–20%, 20–40%, 40–60%, 60–80%, or 80–100%. These questions can be skipped, but we encourage best guesses even if unsure. Each section has a free-response box for feedback. Survey instructions are reproduced in full in Appendix E.

**Target Demographic: Active Authors in ACL** We define the target population as all co-authors on at least two *CL papers published in the last three years. This allows us to estimate response bias by comparing to ACL members (see §3) and provides an objectively defined group for respondents to make predictions about in meta-questions.

**Platform and Distribution** We used NYU Qualtrics to host the survey. Following guidelines set out by the NYU IRB (FY2022-6461), all respondents gave informed consent before beginning the survey and could skip or refuse to answer each question. We set up a homepage for the survey at https://nlpsurvey.net and advertised with Twitter posts, an email via the ACL Member Portal, flyers and stickers at the ACL 2022 conference, personal emails, and other methods (details in Appendix A.3). As an incentive, we committed to donating $10 for each respondent to one of several non-profits, chosen by the respondent at the end of the survey (see Appendix A.4 for donated amounts).

## 3 Demographics

480 people completed the survey, of which 327 (68%) are in our target demographic, reporting that they co-authored at least 2 ACL publications between 2019 and 2022. Based on the ACL Anthology, 6323 people met this requirement during the survey period, so we have responses from about 5% of the total. For the rest of this paper, we restrict all reported results to this subset. Demographic information is shown in Figures 1 and 2.
**Response Bias** Figure 1 shows location and gender statistics. Survey respondents are mostly men (67%) and mostly from the United States (58%). Comparing to recent official statistics from the ACL suggests that the US is overrepresented (58% > 35%), while Asia/Pacific is underrepresented (8% < 26%). We suspect this is largely due to biases in our survey distribution methods (particularly Twitter and our personal networks). Our gender distribution is roughly comparable to available ACL statistics. Information on the subfields of respondents' work is in Appendix C, Figure 11.

**Twitter Use** Respondents most commonly heard about the survey through Twitter (37%), which is used by a large majority (18% routinely post, 58% follow but don't often post). While we don't know what proportion of our target population uses Twitter, it seems likely that this is a large source of response bias. At the same time, the purpose of this survey is partly to study NLP researchers' perceptions of the NLP community for the purpose of improving the public and scientific discourse. To the extent that these perceptions are formed on Twitter, and the public discourse is carried out on Twitter, our results may be useful even if biased towards Twitter users.

## 4 Results

All agree/disagree questions and their responses are shown in Figures 3 and 4. An extended version of this discussion addressing every question is in Appendix D. In the rest of this section, we will discuss some highlights.4

**Belief in scaling maximalism is rare and greatly overestimated, but concerns about AGI are not uncommon.** Only a small minority (17%) of respondents agree with the hypothesis that scaling up current systems and methods could solve "practically any important problem" in NLP (Figure 3b, Q2-1), but this view is perceived as much more popular (at 47% predicted agreement). Similarly, a large majority believes that NLP research is focusing too much on scaling up (71% > 58% predicted, Figure 4a, Q5-1). This suggests that the popular discourse around recent developments in scaling up (Chowdhery et al., 2022) may not be reflective of the views of the NLP research community as a whole, which may not have bought into the Bitter Lesson (Sutton, 2019)5 as much as it thinks it has. Yet, narrow majorities of respondents regard recent progress in large-scale modeling as progress towards artificial general intelligence (AGI; 57%, Figure 3c, Q3-2), and think AGI should be a concern for NLP researchers (58%, Q3-1). So while the most extreme form of scaling maximalism may be unpopular, many researchers seem to think it is worth carefully considering the role of scale in long-term progress.

4 A web interface for exploring the results, including the relationship between questions and demographic variables, is available at https://nlpsurvey.net/results.

5 "General methods that leverage computation are ultimately the most effective, and by a large margin" (Sutton, 2019).

Figure 3: Percentage of respondents who agree or weakly agree with each statement. (a) State of the Field: 1-3. NLP winter is coming (10 years), 30%; 1-4. NLP winter is coming (30 years), 62%; 1-5. Most of NLP is dubious science, 67%; 1-6. Author anonymity is worth restricting preprints, 63%. (b) Scale, Inductive Bias, and Adjacent Fields: 2-1. Scaling solves practically any important problem, 17%; 2-2. Linguistic structure is necessary, 50%; 2-3. Expert inductive biases are necessary, 51%; 2-4. Ling/CogSci will contribute to the most-cited models, 61%. (c) AGI and Major Risks: 3-1. AGI is an important concern, 58%; 3-2. Recent progress is moving us towards AGI, 57%; 3-4. AI decisions could cause nuclear-level catastrophe, 36%. (d) Language Understanding: 4-1. LMs understand language, 51%; 4-2. Multimodal models understand language, 67%; 4-3. Text-only evaluation can measure language understanding, 36%.
Figure 4: Percentage of respondents who agree or weakly agree with each statement (continued). (a) Promising Research Programs: 5-1. There's too much focus on scale, 72%; 5-3. On the wrong track: model architectures, 37%; 5-4. On the wrong track: language generation, 41%; 5-5. On the wrong track: explainable models, 50%; 5-6. On the wrong track: black-box interpretability, 42%. (b) 6-3. It is unethical to build easily-misusable systems, 59%; 6-4. Ethical and scientific considerations can conflict, 74%; 6-5. Ethical concerns mostly reduce to data quality and model accuracy, 25%; 6-6. It is unethical to predict psychological characteristics, 48%; 6-7. Carbon footprint is a major concern, 60%; 6-8. NLP should be regulated, 41%.

It is worth considering how views on these issues may have changed since the survey was run in mid-2022: Bubeck et al. (2023) argue that GPT-4 (OpenAI, 2023) is a step towards artificial general intelligence, and scaling training data has brought more performance gains (Anil et al., 2023), but state-of-the-art models also leverage new techniques such as Constitutional AI (Bai et al., 2022).

**Belief in the value of interdisciplinary insights is greatly underestimated, with predicted trend reversals in favor of theoretically-informed approaches to NLP.** A large majority of respondents believe that NLP researchers should place higher priority on incorporating insights from relevant domain sciences such as sociolinguistics, cognitive science, and human–computer interaction (82%, Figure 4a, Q5-7), while greatly underestimating the number of other NLP researchers that share this belief (53% predicted). Relatedly, more respondents than expected believe that theoretically-informed approaches to NLP are necessary for solving real-world NLP problems, either through discrete representations of linguistic structure (50% > 38% predicted, Figure 3b, Q2-2) or expert-designed inductive biases (51% > 31% predicted, Q2-3). A majority also believe that it's likely for one of the five most-cited systems in 2030 to take direct inspiration from results from the last 50 years of linguistics or cognitive science research (61% > 42% predicted, Q2-4). From these results, it seems that many believe there will be a reversal of the current trend of end-to-end modeling with low-bias neural network architectures.

**AGI and LMs understanding language are known controversies.** We find that the community is nearly evenly split on two controversial issues: whether we should be concerned with AGI (58%, Figure 3c, Q3-1), and whether language models understand language (51%, Figure 3d, Q4-1; Bender and Koller, 2020; Michael, 2020; Potts, 2020; Merrill et al., 2021; Bommasani et al., 2021). For both questions, the average meta-response almost perfectly matches the actual percent of people who agree (within 2%), indicating that the community has a good sense that these are controversial issues.
It is worth acknowledging what this means: these are controversial issues, the community knows that they are controversial, and now (courtesy of this survey) we can know that we know that they are controversial. Some may believe that AGI is obviously coming soon, and some may believe that it's obviously ill-defined; some may believe that language models obviously understand language, and some may believe it's obviously impossible in principle. But for both issues, taking either position for granted in the public discourse or scholarly literature may not be an effective way to communicate to a broad NLP audience. Rather, careful and considered discussion of the issue will be more productive for building common ground.

**The net impact of NLP is believed to be good, but with high stakes.** Large majorities of respondents believe NLP research has had a positive impact on the world (89%, Figure 4b, Q6-1), will have a positive impact in the future (87%, Q6-2), and could plausibly transform society (73%, Figure 3c, Q3-3). Despite this optimism, a substantial minority also foresee plausible risks of a major global catastrophe caused by ML systems (36%, Q3-4). Perhaps surprisingly, 23% of respondents both think the future impact of NLP will be good (Figure 4b, Q6-2) and that AI could plausibly cause a major global catastrophe (Q3-4). Assuming that respondents are consistent about their views, we may understand this to mean that *plausible* catastrophic risk did not necessarily mean *likely* for many respondents, or that the potential upside (e.g., of transforming society) is large enough to outweigh the risk.

## 4.1 Problem Formulation and Task Design: An Important Frontier?

In addition to the agree/disagree questions constituting most of the survey, we ask respondents where they think the most influential advances of the next 10 years will come from, providing four choices: *Hardware and data scaling*, *Algorithmic improvements in ML*, *Data procurement and labeling practices*, and *Problem formulation and task design*. We also provide an "other" option for people to specify their own answer. As the meta-question, we ask respondents to rank the answers from most popular (1) to least popular (5).6

6 20% of respondents rank "Other" on top (rank 1) for this question, even though very few actually provided it as their answer. These respondents may have ranked the answers backwards by mistake. To correct for this, we reverse the rankings provided by everyone who ranked "Other" first. While not a surefire fix, it doesn't seem to change any major trends in the results (Appendix A.5, Figure 6).
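The correction described in this footnote amounts to flipping a five-item ranking. A minimal sketch (not the authors' code), assuming each respondent's prediction is stored as a mapping from answer option to rank:

```python
# Answer options for the question on likely sources of future advances (see above).
OPTIONS = ["Hardware and data scaling", "Algorithmic improvements in ML",
           "Data procurement and labeling practices", "Problem formulation and task design", "Other"]

def correct_ranking(ranking: dict) -> dict:
    """If a respondent ranked "Other" as most popular (rank 1), assume the ranking
    was entered backwards and reverse it (rank r becomes 6 - r for 5 options)."""
    if ranking["Other"] == 1:
        return {option: 6 - rank for option, rank in ranking.items()}
    return ranking

example = {"Hardware and data scaling": 5, "Algorithmic improvements in ML": 4,
           "Data procurement and labeling practices": 3,
           "Problem formulation and task design": 2, "Other": 1}
print(correct_ranking(example))  # "Other" moves to rank 5; "Hardware and data scaling" to rank 1
```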
The results (Figure 5) reveal one surprise: the popularity of the top answer, *Problem formulation and task design*, was greatly underestimated. A plurality of 33% of respondents gave this answer, but it was only ranked first by 12% of people and it ranked third on average in respondents' predictions. Besides this, predictions roughly tracked reality. This suggests that there is a fairly common but underappreciated belief in the NLP community that researchers should be working on new ways of formulating the problems we're trying to solve, and that such work could have high impact.

## 5 Correlation Analysis

This survey provides us the opportunity to examine the relationships between opinions: Which beliefs tend to come together, and which don't? To get a sense of this, we compute pairwise Spearman (rank-order) correlations between pairs of questions (§5.1), and demographic characteristics (§5.2), and perform a clustering analysis on our results using PCA (§5.3).

## 5.1 Question Correlations

We compute Spearman (rank-order) correlations between all of the agree/disagree questions (the full matrix is in Appendix F, Figure 22), ordering the answers [DISAGREE, WEAKLY DISAGREE, OTHER, WEAKLY AGREE, AGREE], where OTHER includes INSUFFICIENTLY INFORMED ON THE ISSUE, QUESTION IS ILL-POSED, and PREFER NOT TO SAY. Unsurprisingly, correlations tend to be stronger between questions in the same section: for example, perspectives on linguistic structure and inductive bias (Figure 3b), AGI (Figure 3c), language understanding (Figure 3d), and NLP's net impact on society (Figure 4b, Q6-1, Q6-2). Beyond these, Table 1 shows the highest-magnitude correlations between questions in different sections.

| Q1 | Q2 | ρs |
|----|----|----|
| Q1-1. Private firms have too much influence. | Q5-1. There's too much focus on scale. | +0.43 |
| Q3-2. Recent progress is moving us towards AGI. | Q4-1. LMs understand language. | +0.42 |
| Q1-5. Most of NLP is dubious science. | Q5-3. On the wrong track: model architectures. | +0.40 |
| Q5-1. There's too much focus on scale. | Q6-7. Carbon footprint is a major concern. | +0.38 |
| Q1-5. Most of NLP is dubious science. | Q4-2. Multimodal models understand language. | +0.38 |
| Q2-2. Linguistic structure is necessary. | Q4-1. LMs understand language. | -0.38 |
| Q4-1. LMs understand language. | Q5-1. There's too much focus on scale. | -0.35 |
| Q1-1. Private firms have too much influence. | Q6-7. Carbon footprint is a major concern. | +0.35 |
| Q2-2. Linguistic structure is necessary. | Q5-7. We should incorporate more interdisciplinary insights. | +0.35 |
| Q2-2. Linguistic structure is necessary. | Q5-1. There's too much focus on scale. | +0.34 |

Table 1: Top 10 Spearman correlations (ρs) between pairs of questions from distinct categories (i.e., with distinct first numbers). Correlations |ρs| > 0.11 are significant with p < 0.05.

**Concerns about private influence track concerns about scale.** The strongest cross-section correlation is between Q5-1 (there's too much focus on scale) and Q1-1 (private firms have too much influence, ρs = 0.43). Regarding scale, Q5-1 was also moderately correlated with Q6-7 (carbon footprint is a major concern, ρs = 0.38). These correlations suggest that NLP researchers who see the influence of industry as problematic may hold this view in part because of concerns with the large-scale, compute-intensive research paradigm that is spearheaded largely by private firms.

**Believing that LMs understand language is predictive of belief in AGI and the promise of scale.** Agreeing that text-only models can meaningfully "understand" language (Q4-1) is predictive of several other views. Those who agree with Q4-1 are more likely to believe that scaling has moved us towards AGI (Q3-2, ρs = 0.42) and can solve practically any NLP problem with existing techniques (Q2-1, ρs = 0.30, not shown), and less likely to believe that there is too much focus on scale (Q5-1, ρs = −0.35), that linguistic structure is necessary to solve important NLP problems (Q2-2, ρs = −0.38), or that we should do more to incorporate insights from domain sciences (Q5-7, ρs = −0.30, not shown).
This suggests the existence of distinct "LM optimist" and "LM pessimist" positions, where people either believe that scaling up could solve most NLP problems and potentially lead to AGI, or they think scaling is overprioritized, AGI is less likely, and we should be focusing more on seeking insights and methods from linguistics, cognitive science, or other domain sciences. It is worth emphasizing, though, that all of our measured correlations are weak to moderate; people hold diverse sets of opinions and individuals cannot be cleanly split into these camps.

| Demographic | Question | ρs |
|----|----|----|
| Sector: Industry (for-profit) | Q1-1. Private firms have too much influence. | -0.25 |
| Gender: Woman | Q5-7. We should incorporate more interdisciplinary insights. | +0.24 |
| Location: United States | Q4-2. Multimodal models understand language. | +0.22 |
| Location: United States | Q4-1. LMs understand language. | +0.22 |
| Sector: Academia (including students) | Q1-1. Private firms have too much influence. | +0.21 |
| Gender: Man | Q6-2. NLP's future net impact will be good. | +0.21 |
| Sector: Industry (for-profit) | Q5-7. We should incorporate more interdisciplinary insights. | -0.21 |
| Gender: Woman | Q6-7. Carbon footprint is a major concern. | +0.20 |
| Gender: Woman | Q2-2. Linguistic structure is necessary. | +0.19 |
| Under-represented Minority: Yes | Q3-4. AI decisions could cause nuclear-level catastrophe. | +0.19 |

Table 2: Top 10 Spearman correlations (ρs) between membership in a demographic group and answers to questions. Correlations |ρs| > 0.11 are statistically significant with p < 0.05.

| Scaling Maximalism (11.7%) | Concern about Fast Progress (5.5%) |
|----|----|
| Q4-1. LMs do understand language +0.36 | Q3-1. AGI is important +0.48 |
| Q4-2. Multimodal models do understand +0.29 | Q3-2. We are stepping to AGI +0.38 |
| Q6-7. Carbon **isn't** a major concern -0.29 | Q6-3. Easy misuse is unethical +0.28 |
| Q5-1. There **isn't** too much focus on scale -0.28 | Q6-7. Carbon is a major concern +0.26 |
| Q2-2. Linguistic structure **isn't** necessary -0.26 | Q3-3. Revolutionary change is plausible +0.24 |

| Deep Learning Pessimism (5.2%) | Jaded Empiricism (4.0%) |
|----|----|
| Q5-5. **Wrong** track (explainability) +0.33 | Q6-6. Not unethical to predict psych. -0.37 |
| Q5-6. **Wrong** track (interpretability) +0.32 | Q1-5. Most NLP is dubious science +0.30 |
| Q1-5. Most NLP is dubious science +0.28 | Q5-5. **Wrong** track (explainability) +0.27 |
| Q3-4. Catastrophic risk is plausible +0.25 | Q5-6. **Wrong** track (interpretability) +0.24 |
| Q6-2. NLP **isn't** good (future) -0.23 | Has 21+ Pubs. +0.24 |

Table 3: The top four principal components of the survey responses and their highest-magnitude feature loadings.

## 5.2 Demographic Correlations

Spearman correlations between demographic variables and agree/disagree questions are shown in Appendix F, Figure 23, with the top correlations by magnitude in Table 2. We rank agree/disagree answers as in §5.1, and for demographics, we treat each answer choice as a binary variable (1 if chosen by a respondent, 0 otherwise). We exclude demographic values for which we have fewer than 5 responses.

**Membership in demographic groups does not strongly correlate with answers to any questions in the survey.** The strongest correlation is ρs = −0.25, much smaller than the top 10 correlation coefficients between pairs of questions (Table 1). This suggests that there is a diversity of viewpoints in each demographic category, with more variation within demographics than between demographics.
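As a minimal illustration of the correlation computations in §5.1 and §5.2 (this is not the authors' analysis code, and the toy data and column names are purely illustrative):

```python
import pandas as pd
from scipy.stats import spearmanr

# Answers ordered as in Section 5.1, with the OTHER answers placed in the middle.
ANSWER_ORDER = {"DISAGREE": 0, "WEAKLY DISAGREE": 1, "OTHER": 2, "WEAKLY AGREE": 3, "AGREE": 4}

# Toy responses for two questions plus one demographic variable.
toy = pd.DataFrame({
    "Q1-1": ["AGREE", "WEAKLY AGREE", "DISAGREE", "OTHER"],
    "Q5-1": ["AGREE", "WEAKLY AGREE", "WEAKLY DISAGREE", "DISAGREE"],
    "sector": ["Academia", "Industry (for-profit)", "Academia", "Academia"],
})

# Question-question correlation (Section 5.1): rank both answer columns and correlate.
q1_ranks = toy["Q1-1"].map(ANSWER_ORDER)
q2_ranks = toy["Q5-1"].map(ANSWER_ORDER)
rho_questions, _ = spearmanr(q1_ranks, q2_ranks)

# Demographic-question correlation (Section 5.2): each demographic answer choice
# becomes a binary indicator (1 if chosen by the respondent, 0 otherwise).
is_industry = (toy["sector"] == "Industry (for-profit)").astype(int)
rho_demographic, _ = spearmanr(is_industry, q1_ranks)

print(f"Q1-1 vs. Q5-1: {rho_questions:+.2f}, Industry vs. Q1-1: {rho_demographic:+.2f}")
```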
Nonetheless, there were a few demographics that were particularly predictive of responses. For example, men and women answered many questions differently. Among our respondents, women are more likely to believe in the importance of linguistic theory and inductive bias, believe that language models don't understand language, and believe that the carbon footprint of training large models should be a concern for NLP researchers. Whereas, men are more likely to believe that the future impact of NLP research will be good, and under-represented minorities are more likely to agree that AI could have catastrophic consequences. ## 5.3 Clustering To analyze the results beyond pairwise correlations, we identify clusters of opinions with principal component analysis (PCA). To do this, we linearize the agree/disagree questions along [−1, 1], with a 0.5 difference between each of DISAGREE, WEAKLY DISAGREE, OTHER, WEAKLY AGREE, and AGREE. For demographics, we treat every answer choice as a binary variable. This process gives us a total of 101 features for each respondent. We run PCA using scikit-learn with 34 components, enough to explain 80% of the variance in the data. The variance in the data is fairly long-tailed, with the first 8 components covering 41.1% of the total variance and the remaining 26 explaining 38.9% (with less than 3% of variance explained by each component in the tail). This indicates that perspectives among NLP researchers may be difficult to reduce to a small number of opposing camps (e.g., pro-scale and anti-scale) without missing a great deal of internal disagreement within those groups. The top four principal components are shown in Table 3. The most prominent cluster of views in the data corresponds to the belief that we should (or shouldn't) be prioritizing large-scale modeling ("Scaling Maximalism"), aligning with the "LM optimist" perspective proposed in §5.1. Another prominent theme is concern with the pace of progress, characterized by a belief that we are making steps towards AGI and that it is an important concern for NLP researchers. While other themes seem to appear in the components after that, no individual cluster of beliefs explains a large amount of the variance in the data, so these clusters are probably not very useful for directly reasoning about the beliefs of individuals, who are combinations of all principal components. ## 6 Discussion With the NLP Community Metasurvey, we have put concrete numbers to many contentious issues in NLP: the necessity of expert-designed inductive bias, the importance of AGI, whether language models understand language, and more. Perhaps more interestingly, we have also made concrete numbers of the community's *impressions* of these controversies, in some cases confirming what we already believe and in others producing surprises. For example, the idea that mere scale will solve most of NLP is much less controversial (and much less believed) than it is thought to be, and NLP researchers unexpectedly agree that we should do more to incorporate insights and methods from domain sciences, and that we should prioritize problem formulation and task design. Interestingly, very few of the issues we ask about (only the necessity of linguistic structure and expert-designed inductive bias, Q2-2 and Q2-3) are noticeably more controversial than respondents expected them to be. 
This could be due to biases from the amplification of controversy (e.g., in social media), or it could just reflect mundane biases in respondent predictions, e.g., regression towards the middle of the range (∼50%) under uncertainty. There are other biases to keep in mind when interpreting our results: the United States is overrepresented, and senior researchers and academics probably are as well (§3, Appendix C), not to mention unmeasured population biases based in the personal networks of the authors. Many of our questions have multiple possible interpretations, many rely on vague terms like "plausible" or "major concern," and some rely on comparisons to reference points such as the Industrial Revolution (Q3-3) or "specific, non-trivial results from... linguistics or cognitive science" (Q2-4) which may carry different implications for different readers. Given these issues, it is probably best to view the answers to the questions on this survey as reflecting something between objective beliefs and signaling behavior. Agreement or disagreement with a particular statement may indicate where a respondent believes they stand relative to "received wisdom," which would determine what statements would be worth asserting in the context of the status quo; or, a response could be driven by identification with (or rejection of) an already-known ideological camp that the statement is taken to refer to. While these issues affect the way we should interpret the absolute numbers in our results, they should apply equally to the meta-questions, so we believe it is meaningful to compare the survey's actual and predicted results as a way of discovering false sociological beliefs. We hope the results of the NLP Community Metasurvey can help us update our sociological beliefs to closer match reality, creating common ground for fruitful and productive discourse among NLP researchers as we confront these issues in the course of our work. ## Acknowledgments We thank the 480 respondents who completed the survey, as well as (especially) our 26 pilot testers. We also thank Markus Anderljung, Noemi Dreksler, Amandalynne Paullada, and Lucy Lu Wang for helping ideate and providing early feedback on the survey, John Thickstun for helpful discussions about the survey's results, and the anonymous reviewers for useful feedback about its limitations. This project has benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program) and Apple. This material is based upon work supported by the National Science Foundation under Grant Nos. 1746891, 1922658 and 2046556. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. ## Limitations While we discuss limitations to our survey design and results in §3 and §6, we expand on the issues here. ## Survey Sample And Response Bias Sample size and confidence intervals Our 327 respondents constitute about 5.17% of the total population of our target demographic group. This sample size gives us enough statistical power to meaningfully analyze correlations between questions and demographics (§5), but it leaves a lot of room for potentially strong biases in the results, as we only covered a small portion of the survey population and we did not have a uniform sample. 
Furthermore, the confidence intervals on the mean meta-question responses (depicted in black in the visualizations in Appendix D) should be taken with a grain of salt, as they are computed on the basis of the midpoints of the buckets chosen by each respondent (i.e., an answer of 0–20% is treated as 10%), potentially underestimating the variance of respondents' actual underlying views. Twitter As described in §3, we advertised the survey on Twitter, and >75% of respondents use Twitter to engage with NLP content. This suggests a strong demographic bias towards Twitter users, which may not be representative of the ACL community as a whole. However, even if we are measuring something closer to the public discourse on Twitter, to the extent that comprises a large portion of the public discourse on research, our results may still be useful. We did advertise in other ways (described in Appendix A), including an email to the ACL Member Portal and sending individualized emails to highlypublishing authors, and a majority of respondents (63%) heard of the survey through something other than Twitter (Figure 2). However, even through these platforms it's likely that responses were biased towards the authors' personal networks, or towards people with personalities or attitudes which which would lead them to be more likely to take such a survey. Geographic biases One of the most concerning sources of response bias is geographic. In comparison to official ACL membership statistics, residents of Asian countries are deeply underrepresented, especially China (3% < 9%), India (1% < 5%), and Japan (0.8% < 5%), compared to the United States, which is overrepresented (58% > 35%). We noticed this disparity, particularly the lack of responses from China, during the survey period. We tried to make extra overtures—as mentioned in Appendix A.3, one of the authors shared the survey on WeChat (and it was reposted by a few other China-based ML researchers), and we individually emailed many of the most highly-published authors in our list (which included many authors based in Asia) as a way of making sure to cover this part of the community. Unfortunately, this yielded few responses from Asia. It's worth noting that Zhao et al. (2022) describe a significant disconnect between the US and Chinese ML communities (at least in terms of their citation networks). The ACL community may have a similar pattern worth learning more about. Industry and career As noted in §3, our responses are overwhelmingly from academics and skew senior. While we don't have statistics on the true proportion of academics or junior versus senior authors in our target demographic, it seems likely that this is a source of response bias. This bias is especially relevant to consider for the questions and meta-questions about academia versus industry (Figure 3a, Appendix D.1). In particular, it seems plausible that some of the disparity between answers and predictions, e.g., in Q1-1 (on whether private firms have too much influence) may have arisen from people assuming that many of the other respondents would be from industry. Similar concerns may apply to, e.g., Q2-1 on scaling maximalism and other questions about scaling, which is strongly associated with industry. Unfortunately, we don't know what the respondents thought about the survey demographics, as we did not include meta-questions in the demographics section. 
Unobservable confounds and statistical correction The biases above are sources of concern when trying to draw conclusions about the ACL community as a whole. It stands to reason that we may wish to statistically correct for these biases, but we choose not to, for the following reasons: - We don't have ground-truth data on any of the above demographic characteristics of our target population (actively publishing ACL authors) and can only assess under- or overrepresentation using proxies such as ACL membership. Using these proxies to do statistical correction would introduce another potential source of bias. - We don't have enough data to provide meaningful signal in some of the underrepresented groups (e.g., residents of Asia), so correcting for these biases would mostly be amplifying noise. - Adjusting for observed confounds could potentially amplify unobserved confounds—i.e., the small number of Asian residents who did respond are likely not representative of their geographic group at large. This is true to varying degrees for the other demographic variables as well, and would make the results much harder to interpret. So instead of doing corrections, we prefer to report the results as simply as possible while being up front about potential biases so the reader can draw their own conclusions. ## Questions And Answers Strongly worded questions Many of our questions asked for respondents' views on stronglyworded, opinionated statements. This could have biased the survey responses, for example due to framing effects or false presuppositions included in the statement. While we think this is a serious and worthwhile concern, there are several reasons we chose this format anyway: - The issue is, to some extent, unavoidable, as the point of our survey is to address controversial issues, which by their nature will hinge on strong opinions and contestable presuppositions. - This approach allows us to format our survey mostly as agree/disagree questions, which allows for a straightforward meta-question format and uniform computational analysis. - One method of handling this issue in surveys is to have multiple variants of each question (e.g., both negated and non-negated versions), and show each respondent a random one. However, this would introduce more variables with respect to the wording of the questions, complicating our analysis and creating more difficulties in interpreting the results. - The valence of opinions (e.g., expressing the view that scaling up *will* versus *won't* solve practical problems) actually does matter for our purposes: as discussed in §6, one way of interpreting what we are doing with these questions is pointing to salient existing ideological camps. The sets of people who would attack or defend an opinion may not be the exact complements of the sets who would, respectively, attack or defend its negation. Given that we chose this format, we took several measures to try to minimize the influence of potentially leading questions: - When piloting the study, we asked pilot participants for their views on whether the questions were leading, whether they incorporated unnecessary assumptions, or whether they felt underspecified or tricky to answer, and changed the wording accordingly. We also piloted with groups of varying views (e.g., some groups worked more on AI ethics, others more on machine learning). 
- We included several ways for respondents to opt out of questions which they rejected, allowing them to answer INSUFFICIENTLY IN-FORMED ON THE ISSUE, QUESTION IS ILL-POSED, or PREFER NOT TO SAY. - For issues where we expected the assumptions to be contentious, particularly relating to artificial general intelligence (Figure 3c, Appendix D.3) and language understanding (Figure 3d, Appendix D.4), we explicitly reminded the users to answer according to *their* preferred definitions. - We included a free-response feedback box at the end of each section where respondents could clarify their views or express dissatisfaction with the survey. We carefully looked through this feedback when interpreting the results, and we attempt to summarize respondents' concerns in the detailed results in Appendix D. Unclear questions In writing the questions, we attempt to distill complex and controversial issues into single sentences, and we elicit responses on simplified, 4-point scales. Inevitably, respondents' interpretations of some questions are hard to know for sure, and they varied between respondents, making it more difficult to interpret the results. We can get some insight into this using respondents' free-form feedback. Some examples include the following: - Q6-6 (on predicting psychological characteristics, discussed in Appendix D.6) seems to lump together too many disparate issues for us to be able to draw clear conclusions. - Feedback on Q3-4 (on catastrophic risks of AI) showed that respondents may have answered the question on the basis of various different kinds of scenarios—some said AI causing nuclear war was a ridiculous proposition because it would never be handed the "big red button", while others felt that a mistake by an AI acting even as a minor indicator could plausibly have catastrophic consequences in a tense enough geopolitical landscape (see Appendix D.3). More details are provided for each respective question in Appendix D. Some other potential issues include the following: - Some of our wording was vague: e.g., the words "plausibly" and "plausible" in Q3-3 and Q3-4 may be read weakly to mean *possible*, or more strongly to mean *likely*. The difference between these two interpretations is great, especially when it comes to high-stakes issues like revolutionary societal change or global catastrophic risk. Likewise, extreme reference points like the Industrial Revolution (Q3-3) or all-out nuclear war (Q3-4) may be too far outside of respondents' typical reference frames for them to provide a well-calibrated answer. - Our questions point at active controversies and complex issues, and respondents likely answered survey questions fairly quickly. In this context, respondents may have used attribute substitution heuristics (Kahneman and Frederick, 2005), wherein they substitute a hard question (e.g., *will AI change society more* than the Industrial Revolution?) for an easier one (e.g., will AI make things very weird and different pretty soon?) before answering. Interpreting WEAK **answers** Our main aim with the [DISAGREE, WEAKLY DISAGREE, WEAKLY AGREE, AGREE] Likert scale is to get respondents to indicate which side of each issue they stand on. To do this, we encourage respondents to choose one of the WEAKLY answers even if they are basically on the fence. For example, "depends, leaning negative" would be an answer of WEAKLY DISAGREE (see Appendix E). Our approach seems to work for our purpose, as the proportion of OTHER answers is low (<20%) for all questions. 
However, this means our analysis likely includes a lot of people who do not strongly hold the opinion indicated by their answer. This may be easy to miss in our analyses and meta-survey comparison, which generally group the DISAGREE and WEAKLY DISAGREE answers (and respectively for AGREE) together to binarize the results. So we encourage readers to bear in mind the distinction of WEAK responses when reading the results for a holistic impression of the NLP community. Responses are not necessarily indicative of the truth As stated in §1, the results of our survey are descriptive of the NLP community, and the issues we ask about (many of them normative questions) cannot be resolved by majority vote. However, some questions do ask respondents to make (somewhat) falsifiable predictions, e.g., about the state of jobs and publication in NLP (Appendix D.1; Q12–Q1-4), the promise of scale, linguistic structure, and inductive bias (Appendix D.2), and the societal effects of future AI systems (Appendix D.3; Q3-3, Q3-4). But it is still unclear whether aggregating the opinions of NLP researchers on such issues constitutes a reliable forecast of future events. Uniquely, our methodology does allow us to measure which answers are more popular than expected, corresponding to the *surprisingly popular* answer selection rule for crowd wisdom proposed by Prelec et al. (2017), which better identifies correct answers than majority vote in the case of factual questions where the answer is only known by a well-informed minority. The results of applying this method to predictions of the future are positive, though more mixed (Lee et al., 2018; Rutchick et al., 2020). While we cannot take the "surprisingly popular" answers as indicative of truth in our case, it is worth considering carefully why some responses were more or less popular than expected. ## References Blaise Agüera y Arcas, Margaret Mitchell, and Alexander Todorov. 2017. Physiognomy's new clothes. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. Concrete problems in ai safety. Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. 
Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional ai: Harmlessness from ai feedback. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, page 610–623, New York, NY, USA. Association for Computing Machinery. Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 5185–5198, Online. Association for Computational Linguistics. Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, S. Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen A. Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, Mark S. 
Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir P. Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Benjamin Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, J. F. Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher R'e, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishna Parasuram Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei A. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021. On the opportunities and risks of foundation models. ArXiv. Nick Bostrom. 2014. *Superintelligence: Paths, Dangers, Strategies*, 1st edition. Oxford University Press, Inc., USA. David Bourget and David J. Chalmers. 2014. What do philosophers believe? *Philosophical Studies*, 170(3):465–500. David Bourget and David J. Chalmers. 2020. Philosophers on philosophy: The PhilPapers 2020 survey. Samuel R. Bowman and George Dahl. 2021. What will it take to fix benchmarking in natural language understanding? In *Proceedings of the 2021 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4843–4855, Online. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways. *CoRR*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3719–3728, Brussels, Belgium. Association for Computational Linguistics. Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 2019. Risks from learned optimization in advanced machine learning systems. John P. A. Ioannidis. 2005. Why most published research findings are false. *PLoS Medicine*, 2. Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 3543–3556, Minneapolis, Minnesota. Association for Computational Linguistics. Daniel Kahneman and Shane Frederick. 2005. A model of heuristic judgment. In K. Holyoak and B. Morrison, editors, *The Cambridge Handbook of Thinking* and Reasoning, pages 267–293. Cambridge University Press. Andrey Kurenkov. 2020. Lessons from the PULSE model and discussion. *The Gradient*. Michael D Lee, Irina Danileiko, and Julie Vi. 2018. Testing the ability of the surprisingly popular method to predict nfl games. *Judgment and Decision Making*, 13(4):322–333. William Merrill, Yoav Goldberg, Roy Schwartz, and Noah A. Smith. 2021. Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand? *Transactions* of the Association for Computational Linguistics, 9:1047–1060. Julian Michael. 2020. To dissect an octopus: Making sense of the form/meaning debate. *Blog post*. Sharan Narang, Hyung Won Chung, Yi Tay, Liam Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, and Colin Raffel. 2021. Do transformer modifications transfer across implementations and applications? In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5758–5773, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. OpenAI. 2023. Gpt-4 technical report. David Patterson, Joseph Gonzalez, Urs Hölzle, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. 2022. The carbon footprint of machine learning training will plateau, then shrink. *Computer*, 55(7):18– 28. Christopher Potts. 2020. Is it possible for language models to achieve language understanding. *Medium*. Dražen Prelec, H. Sebastian Seung, and John McCoy. 2017. A solution to the single-question crowd wisdom problem. *Nature*, 541(7638):532–535. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9. Inioluwa Deborah Raji, Emily Denton, Emily M. Bender, Alex Hanna, and Amandalynne Paullada. 2021. AI and the everything in the whole wide world benchmark. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). 
Abraham M Rutchick, Bryan J Ross, Dustin P Calvillo, and Catherine C Mesick. 2020. Does the "surprisingly popular" method yield accurate crowdsourced predictions? Cognitive research: principles and implications, 5:1–10. Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. Communications of the ACM, 63(12):54–63. Rich Sutton. 2019. The bitter lesson. Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, William Fedus, Jinfeng Rao, Sharan Narang, Vinh Q Tran, Dani Yogatama, and Donald Metzler. 2022. Scaling laws vs model architectures: How does inductive bias influence scaling? arXiv preprint arXiv:2207.10551. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference* on Natural Language Processing (EMNLP-IJCNLP), pages 11–20, Hong Kong, China. Association for Computational Linguistics. Bingchen Zhao, Yuling Gu, Jessica Zosa Forde, and Naomi Saphra. 2022. One Venue, Two Conferences: The Separation of Chinese and American Citation Networks. In *NeurIPS Workshop on Cultures in AI*. ## A Methodological Details A.1 Choosing Questions We aimed to ask about issues: - which are frequently discussed in the community, - which are the subject of public disagreement, - about which the NLP community often reflects back on itself, especially where people seem to perceive themselves as in the minority (hot takes) or majority (taking something for granted), and - for which, if we understand the community's opinions and meta-opinions better, it may aid our ability to communicate and help people understand how to most effectively communicate about their research. With these criteria in mind, we (the authors) brainstormed a large initial list of potential questions. After discussing, we voted on which ones to include in the survey, chose roughly the top 30 questions, finalized the agree/disagree question format, and began pilot testing (described in Appendix A.2), which we used to refine the set of questions, their phrasing, and their presentation format in the survey. The questions used in the survey are shown in = Figures 3 and 4. ## A.2 Pilot Testing The first author conducted 6 pilot studies with about 26 different participants from Computer Science and Linguistics departments, mostly based in the United States, during the months of February and March of 2022. After pilot participants took the survey, they were asked for feedback in a group Zoom call. Participants were asked about any questions they perceived as leading, reasons they might have refused to answer questions, reasons they might have wanted to stop taking the survey in the middle, and whether the purpose of the survey was clear, etc. The survey instructions and question wording were updated in accordance with their feedback. ## A.3 Distribution And Advertisement To reach a broad audience of NLP researchers, we set up a homepage for the survey and advertised in the following ways: (a) ACL Member Portal: we sent a call for participation to the ACL membership mailing list. The email included the details of the survey, its purpose, and the charitable donation incentive. (b) ACL 2022 in Dublin: Four of our team members advertised the survey to conference attendees in-person. They distributed flyers/posters of our survey and free stickers that said "NLP survey" or "I took the NLP survey." 
(c) Twitter: We released multiple tweets as advertisement, with the original being retweeted 100+ times. (d) Slack channels: We posted about the survey in the Slack channels of a few labs, as well as an NLP Slack channel with more than 470 members that was set up during ACL 2020. (e) Emails: We attempted to encourage more participants from senior authors by sending personal invitations to 568 authors that have published at least eight qualifying papers since 2019 (we did not exhaustively email all of them, as it required manually sourcing email addresses based on names in the ACL Anthology). (f) Other social media (including WeChat, to encourage participation from researchers in China) and personal interactions with NLP researchers.

## A.4 Donations

Based on respondent preferences, we donated $950 to the WHO COVID-19 Solidarity Response Fund, $1,650 to GiveWell's Maximum Impact Fund, $830 to GiveDirectly, and $1,140 to the Distributed AI Research Institute. 23 respondents (5%) did not provide an answer to this question, so we did not make donations on their behalf.

## A.5 Data Postprocessing

On the demographic questions for which we received "other" answers (career stage, gender, and how respondents heard about the survey), we manually assign each such response to the closest available answer choice for the purposes of reporting in §3 and Appendix C. As mentioned in §4.1, 20% of respondents who answered the meta-question on likely sources of future advances ranked "Other" as the most common answer to the question. We assume these were mistakes, since it is unlikely that people would think a plurality of respondents would reject all of the (fairly broad) provided options. We take this, then, to mean the rankings provided by these respondents were probably reversed, with 5 being the most common and 1 the least common. So we reverse the rankings provided by these respondents for the purposes of analysis. Besides changing the "Other" statistics, this does not seem to have a noticeable effect on overall trends (Figure 6).

## B Overview of Responses

See Figure 7 for an overview of the responses to agree/disagree questions including OTHER answers. OTHER answers were fairly uncommon (never above 20% for any question), and the most frequent one was INSUFFICIENTLY INFORMED ON THE ISSUE, which many respondents gave for the questions about an NLP winter (Q1-3, Q1-4) and the "wrong track" questions (Q5-{3–6}).

## C Detailed Demographics

Figures 9–12 show full results for demographics questions. Results are restricted to the target demographic of those with at least 2 *CL publications in the last three years. Numeric labels for percentages below 5% are omitted for space.

## D Detailed Results

In this section we discuss the results of each section of the survey in detail. For each question (for example, in Figure 13), we display the proportion of AGREE, WEAKLY AGREE, WEAKLY DISAGREE, and DISAGREE answers in a band along the bottom of the visualization. These percentages exclude those who gave one of the OTHER answers (INSUFFICIENTLY INFORMED ON THE ISSUE, QUESTION IS ILL-POSED, or PREFER NOT TO SAY), which were relatively rare (<20% of responses for all questions, <10% for 75% of questions; see Appendix B, Figure 7).
The vertical green line shows the total percentage who AGREE or WEAKLY AGREE with the statement, which was what we asked respondents to predict in the meta-questions. The grey bars show the distribution of meta-question answers, each bin aligned with its corresponding range of percentages (0%–20%, 20%–40%, etc.). The green and black dots and bars show the mean and 95% bootstrap confidence intervals of the true and predicted percentage of people who AGREE or WEAKLY AGREE (treating each meta-question answer bucket as its midpoint). Unless otherwise stated, all percentages and percentage differences mentioned in this section will be in absolute terms and exclude the 'other' answers, and when we refer to respondents "agreeing" with a statement (without special typesetting) we include both AGREE and WEAKLY AGREE answers (and respectively with DISAGREE and WEAKLY DISAGREE). In this discussion, we will sometimes break down results by demographic group (e.g., comparing the agreement rates of men and women); unless otherwise stated, all such comparisons correspond to statistically significant differences between the groups (p < 0.05) by a bootstrap test. More information, visualizations, and confidence intervals for such comparisons can be found online at https://nlpsurvey.net/results/.

## D.1 State of the Field (Figure 13)

The first set of questions asks for opinions about the health of the NLP community.

**Industry is seen as having undue influence (Q1-1, Q1-2).** Private firms are overwhelmingly seen as likely to produce the most-cited research of the next 10 years (Q1-2, 82%), but they are also seen as having too much influence (Q1-1, 74%). This suggests, as some respondents pointed out in the survey feedback, that many believe that number of citations is not a good proxy for value or importance. It also suggests a belief that industry's continued dominance will have a negative effect on the field, perhaps through their singular control of foundational systems such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022), or from the energy that widely-cited work in pretraining (Devlin et al., 2019; Radford et al., 2019) draws away from other research agendas. Respondents under-predict the popularity of the majority view by more than 15% on both of these questions, suggesting they might believe alternative agendas are already under-prioritized, such as directions focusing on incorporating interdisciplinary insights as opposed to raw scaling, or problem formulation and task design (other under-predicted views, as we will see in Appendix D.2, Appendix D.5, and §4.1). The under-prediction of agreement on Q1-1 and Q1-2 may also be an artifact of our sample population, which is overwhelmingly academic. Opinions are very different between job sectors, where 82% in academia agree that private firms have too much influence (Q1-1) compared to only 58% of respondents in industry.

**NLP winter is expected by a majority in the long term (Q1-3, Q1-4).** We ask respondents whether they expect there to be an "NLP winter," where funding and job opportunities fall by at least 50% from their peak, in the near future. A substantial minority of 30% expect this to happen within the next 10 years (Q1-3), with only 7% AGREEing. For the next 30 years (Q1-4), confidence is much greater, with 62% expecting an NLP winter.
## D.1 State Of The Field (Figure 13)

The first set of questions asks for opinions about the health of the NLP community.

**Industry is seen as having undue influence (Q1-1, Q1-2).** Private firms are overwhelmingly seen as likely to produce the most-cited research of the next 10 years (Q1-2, 82%), but they are also seen as having too much influence (Q1-1, 74%). This suggests, as some respondents pointed out in the survey feedback, that many believe that number of citations is not a good proxy for value or importance. It also suggests a belief that industry's continued dominance will have a negative effect on the field, perhaps through their singular control of foundational systems such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022), or from the energy that widely-cited work in pretraining (Devlin et al., 2019; Radford et al., 2019) draws away from other research agendas. Respondents under-predict the popularity of the majority view by more than 15% on both of these questions, suggesting they might believe alternative agendas are already under-prioritized, such as directions focusing on incorporating interdisciplinary insights as opposed to raw scaling, or problem formulation and task design (other under-predicted views, as we will see in Appendix D.2, Appendix D.5, and §4.1). The under-prediction of agreement on Q1-1 and Q1-2 may also be an artifact of our sample population, which is overwhelmingly academic. Opinions are very different between job sectors, where 82% in academia agree that private firms have too much influence (Q1-1) compared to only 58% of respondents in industry.

**NLP winter is expected by a majority in the long term (Q1-3, Q1-4).** We ask respondents whether they expect there to be an "NLP winter," where funding and job opportunities fall by at least 50% from their peak, in the near future. A substantial minority of 30% expect this to happen within the next 10 years (Q1-3), with only 7% AGREEing. For the next 30 years (Q1-4), confidence is much greater, with 62% expecting an NLP winter. Even a minority predicting such a major shift in the field reflects an overall belief that NLP research will undergo substantial changes in the near future (at least, in who is funding it and how much). Further interpretation of these results is difficult: For example, respondents may believe an NLP winter will arrive because the pace of innovation will stall (perhaps the reason they think industry research is overemphasized), because the ability to advance the state of the art will be monopolized by a small number of well-resourced industry labs (as they expect industry to continue producing widely-cited research), or because the distinction between NLP and other AI disciplines will disappear (as suggested by some respondents).

![21_image_0.png](21_image_0.png)

![21_image_1.png](21_image_1.png)

Figure 13 (survey statements, panel titles): 1-3 NLP winter is coming (10 years); 1-4 NLP winter is coming (30 years); 1-5 Most of NLP is dubious science; 1-6 Author anonymity is worth it.

**NLP is mostly seen as scientifically dubious (Q1-5).** A majority agrees that most NLP work is of "dubious scientific value" (67%). Respondents expressed uncertainty over what should count as "dubious," as well as concerns about who determines the value of research. On one hand, such research could refer to work which is fundamentally unsound with ill-posed questions and meaningless results, which would be a powerful indictment of NLP research. On the other hand, it could simply mean that many reported findings are of little importance or are not robust, which would arguably not make NLP unique among sciences (Ioannidis, 2005). Either way, this result suggests that many NLP researchers think it is worth reflecting deeply on the value of our work. As respondents see the community being less critical than it actually is (by 19% absolute), it might be that those who are critical of the scientific standards of the field are not as likely to voice their views in public, or that vocal critics who exist are seen as less representative of the population than they actually are.

**Anonymity is still controversial (Q1-6).** *CL conferences have much stricter anonymity policies than many other conferences NLP researchers submit to (e.g., NeurIPS, ICLR, and ICML). Responses suggest the community is in favor of these policies on balance (63% agree anonymity is important enough to warrant restrictions on disseminating preprints), though they are perceived as contentious: respondents guessed that around 51% of the target population would be in favor of such restrictive policies. Since the *CL anonymity policies have been subject to intense debate on platforms such as Twitter, this suggests that those critical of the policies may have been disproportionately represented in the minds of NLP researchers. This question was also split by gender, with 77% of women agreeing but only 58% of men—possibly due to concerns or experience with discrimination on the basis of author identity.
## D.2 Scale, Inductive Bias, And Adjacent Fields (Figure 14)

Questions and meta-questions about the long-term potential of scale, inductive bias, and linguistic structure reveal some of the most striking mismatches between respondent attitudes and beliefs about those attitudes. Broadly speaking, the pro-scale and anti-structure views were much less popular than respondents thought they would be. A common refrain in the era of ever-larger models is the *Bitter Lesson* (Sutton, 2019): "General methods that leverage computation are ultimately the most effective, and by a large margin." Under this perspective, one may expect linguistic structure or expert-designed inductive biases to be superseded by learning mechanisms operating on fewer, more general principles (given enough training data and model capacity).

![22_image_0.png](22_image_0.png)

Figure 14 (survey statements, panel titles): 2-1 Scaling solves practically any important problem; 2-2 Linguistic structure is necessary; 2-3 Expert inductive biases are necessary; 2-4 Ling/CogSci will contribute to the most-cited models.

While the success of deep learning and large language models may be taken as supporting evidence for the Bitter Lesson, we find that the community has bought into the Lesson far less than it thinks it has.

**Support for scaling maximalism is greatly overestimated (Q2-1).** We ask respondents for their views on a strong version of the Bitter Lesson: whether scaling up compute and data resources with established existing techniques can practically solve any important problem in NLP (Q2-1). This is seen as controversial, with respondents predicting a roughly even split of 47% agreement (though variance among predictions was high). However, only a small minority (17%) actually agree with the position, forming the largest discrepancy between predicted and actual opinions in the entire survey. This suggests that the popular discourse around recent developments in scaling up (Chowdhery et al., 2022) may not be reflective of the views of the NLP research community as a whole.

**Trend reversals are predicted for linguistic theory and inductive bias (Q2-2, Q2-3, Q2-4).** The rest of the views articulated in this section were seen as less popular than Q2-1, but in reality they were much more popular (albeit still controversial). On what it will take to practically solve any important problem in NLP, 50% agree that explicit linguistic structure will be necessary (Q2-2), and 51% say the same for expert-designed inductive biases (Q2-3).
In addition, 61% of respondents say it's likely that one of the five most-cited systems in 2030 will take inspiration from clear, non-trivial results from the last 50 years of linguistics or cognitive science research (Q2-4). All of these views are under-predicted by 12–19%, though the predictions more closely match the responses given by men, as responses to these questions were split by gender. Most notably, women are significantly more likely to agree with Q2-2 that linguistic structure is necessary (65%) compared to men (42%). Women also agreed with Q2-3 (62%) and Q2-4 (69%) more than men (49% and 57%, respectively), but these differences were not statistically significant. Like many respondents, we find these results surprising. It seems that many believe there will be a reversal of the current trend of end-to-end modeling with low-bias neural network architectures. The results for Q2-4 are particularly surprising to us, as even *today*'s most cited systems seem not to satisfy this requirement, building on little more from cognitive science than a rough construal of neurons, *attention*, and *tokens*, which date back much further than 50 years.

## D.3 AGI And Major Risks (Figure 15)

The versatility and impressive language output of large pretrained models such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022) have prompted renewed discussions about artificial general intelligence (AGI), including predictions of when it might arrive, whether we are actually advancing toward it, and what its consequences would be. In this section, we ask about AGI and some of the largest possible impacts of AI technology.

![23_image_1.png](23_image_1.png)

![23_image_0.png](23_image_0.png)

One concern is that respondents' answers may depend on their definition of "artificial general intelligence," and whether they think it is well-defined at all. Our approach to this problem is to deliberately not provide a definition (which some respondents would surely find objectionable, no matter which definition we choose). Instead, we instruct respondents to answer according to their preferred definition, i.e., *how they think the community should use the term*, as we view this as an important issue to assess when figuring out how to talk about the issue as a community.

**AGI is a known controversy (Q3-1, Q3-2).** On the questions explicitly about AGI, respondents were mostly split, with 58% agreeing that AGI should be an important concern for NLP researchers (Q3-1) and 57% agreeing that recent research has advanced us toward AGI in a significant way (Q3-2). The two views are highly correlated, with 74% of those who think AGI is important also agreeing with Q3-2 that we're progressing towards it, while only 37% of people who don't think AGI is important think we're making that kind of progress. The meta-responses split similarly to the object-level responses, indicating that the community has a good sense that this is a controversial issue. It is worth acknowledging what this means: AGI is a controversial issue, the community in aggregate knows that it's a controversial issue, and now (courtesy of this survey) we can know that we know that it's controversial.
While some may believe that AGI is obviously coming soon, and some may believe that it's obviously ill-defined, taking either position for granted in the public discourse or scholarly literature may not be an effective way to communicate to a broad NLP audience; rather, careful and considered discussion of the issue will be more productive for building common ground. Revolutionary and catastrophic outcomes are a concern (Q3-3, Q**3-4).** 73% of respondents agree that labor automation from AI could plausibly lead to revolutionary societal change in this century, on at least the scale of the Industrial Revolution (Q3-3). This points to a common reason why those who agree with Q3-1 might think AGI is an important concern, especially if we are meaningfully progressing towards it (Q3-2), as it could be fundamentally transformative for society; indeed, all views expressed in this section are positively correlated (see Appendix F). But a significant fraction of respondents (23%) agree with the prospect of revolutionary change (Q3-3) while disagreeing with AGI's importance, suggesting that discussions about long-term or large-scale impacts of NLP research may not need to be tied up in the AGI debate. About a third (36%) of respondents agree that it is plausible for AI to cause a major global catastrophe in this century, at least as bad as all-out nuclear war (Q3-4). While this is a much smaller proportion than those expecting revolutionary societal change (Q3-3), the stakes are extremely high and a substantial minority expressing concern about such outcomes indicates that a deep discussion of such risks may be warranted in the NLP community. While we do not ask how specifically respondents think this could happen, potential reasons for such concerns are discussed by Bostrom (2014); Amodei et al. (2016) and Hubinger et al. (2019). Certain demographics, particularly women (46%) and underrepresented minority groups (53%), were more likely to agree with Q3-4, reflecting pessimism about our ability to manage dangerous future technology perhaps based in the existing track record of disproportionate harm to these groups. Q3-4 received a lot of critical feedback. Some respondents object to "all-out nuclear war" as far too strong, saying they would agree with less extreme phrasings of the question. This suggests that our result of 36% is an underestimate of respondents who are seriously concerned about negative impacts of AI systems. Some respondents comment that AI/ML systems should not be discussed as if they have agency to make decisions, as all AI "decisions" can be traced back to human decisions regarding training data, architecture, how and on what phenomena models are evaluated (or not), and deployment decisions, among other factors. ## D.4 Language Understanding (Figure 16) The question of whether language models understand language has been the subject of some debate in the community (Bender and Koller, 2020; Merrill et al., 2021; Bommasani et al., 2021, §2.6). In this section, we ask some questions relevant to the issue, but one of the challenges is that their answers are highly dependent on how one defines the word "understand." For this reason, as with Appendix D.3, we deliberately choose not to provide a definition, as doing so would risk begging the question or forcing a definition that some would certainly find objectionable. 
Instead, we instruct respondents to answer according to their preferred definitions, i.e., how they think the community should use the word "understand," as we view this as an important element of the discussion. Many respondents commented that this choice made it harder to respond to the questions in this section, and said they would have preferred a set definition, but only 3–5% responded to any of these questions with QUESTION IS ILL-POSED.

**LMs understanding language is a known controversy (Q4-1, Q4-2).** The question of whether language models can understand language (Q4-1) was split right down the middle, with 51% agreeing. This controversy is reflected in people's predictions as well, which average to 49% agreement. Many more (67%) agree once the model has access to multimodal data (images, etc.). As with the importance of AGI (Appendix D.3), whether language models understand language is known to be controversial, and the results of this survey can make it known that it is known. So whatever one's views are on the issue, it will likely be less useful to take those views for granted as a premise when communicating to a broad NLP audience in the public discourse or scholarly literature. Again, careful and considered discussion of the issue will likely be more productive for building common ground.

**Understanding may be learnable, but not measurable, using text (Q4-3).** On the question of whether text-only evaluations can measure language understanding (Q4-3), the distribution of predictions was similar to that for language understanding by LMs (Q4-1), averaging 47% predicted agreement. However, unlike Q4-1, only 36% actually agreed with the statement, suggesting that many view it as a separate issue, and some may believe there are things which are learnable from text alone, but cannot be measured using text alone. Responses to questions in this section vary considerably with respondents' gender and location. On LMs understanding language (Q4-1), men are more likely to agree (58%) than women (37%), and people in the US are more likely to agree (61%) than those in Europe (31%). There is also a significant gender difference on Q4-3 regarding text-only evaluation of language understanding, where 43% of men agree as opposed to 21% of women.

![25_image_0.png](25_image_0.png)

![25_image_1.png](25_image_1.png)

Figure 16 (survey statement 4-3, "Text-only evaluation can measure language understanding"): We can, in principle, evaluate the degree to which a model understands natural language by tracking its performance on text-only classification or language generation benchmarks.

## D.5 Promising Research Programs (Figure 17)

In this section, we ask respondents about the kind of research they think the community should be doing, and which research directions they believe are not heading in the right direction. We choose research agendas to ask about based on criticisms, debates, or findings in the literature and public sphere, for example regarding current practice in benchmarking (Bowman and Dahl, 2021; Raji et al., 2021), the relative value of advances in model architectures (Narang et al., 2021; Tay et al., 2022), the use of language models for generation tasks (Bender et al., 2021), and explainability and interpretability of black-box models (Feng et al., 2018; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019).

**Scaling and benchmarking are seen as overprioritized (Q5-1, Q5-2).** Over 72% of respondents believe that the field focuses too much on scale (Q5-1), a view that was underestimated at 58%. This reflects the same pattern as Q2-1, where
the prevalence of pro-scale views is overestimated. An even stronger majority of 88% believe there is too much focus on optimizing performance on benchmarks (Q5-2), a view that is highly correlated with Q5-1 (see §5) and is similarly under-predicted at 65%.

**On the wrong track? Opinions vary (Q5-{3–6}).** We ask whether four specific research directions are "on the wrong track": model architectures (Q5-3), open-ended generation tasks (Q5-4), explainable models (Q5-5), and black-box interpretability (Q5-6). Respondents are divided on these questions, with agreement rates between 37% and 50%, reflecting that these are controversial issues. In most cases, respondents' predictions also reflect this divide, with a possible exception in explainability (Q5-5), where 50% true agreement is under-predicted at 36%, reflecting that more community members are critical of research in explainable modeling than expected. While we deliberately used the vague phrase "on the wrong track" to get a sense of people's general attitudes, some respondents took issue with the framing of these questions; for example, one asks if it means asking the wrong question or *finding* the wrong solutions. As such, respondents' precise interpretations of these questions may vary.

**Interdisciplinary insights are valued more than we think (Q5-7).** The largest disparity between predicted and actual results in this section is on Q5-7, stating that NLP researchers should do more to incorporate insights from relevant domain sciences. While respondents' predictions about the community's opinions split this issue down the middle (53%), in reality 82% agree with the view (an outcome only expected by 11% of respondents). This raises a question: If so many people agree that we should place greater priority on interdisciplinary work (Q5-7), why isn't more such work already happening? One possible explanation is that the responses to Q5-7 are a form of wishful thinking: Few believe that scale will be sufficient to solve our problems (Q2-1, Q5-1), and many think benchmarks are overemphasized (Q5-2) and insights from sciences like linguistics and cognitive science will be necessary for long-term progress (Q2-2, Q2-3). However, perhaps few know how to actually get results or useful insights from an interdisciplinary approach, leading this kind of work to be underrepresented in the literature and public discourse despite high demand for it. This suggests that the real issue may not be that NLP researchers assume interdisciplinary work has nothing to offer so much as that we lack the knowledge and tools to make such work effective. One caveat with this result is that responses vary significantly by job sector; 85% of those in academia agree with Q5-7 compared to 68% of those in industry, and our survey is mostly academics. Despite this difference, even the industry-only agreement rate is underpredicted, so survey response bias likely does not fully explain the mismatch.

![26_image_0.png](26_image_0.png)

![26_image_1.png](26_image_1.png)

Figure 17 (survey statements, panel titles): 5-3 On the wrong track: model architectures; 5-4 On the wrong track: language generation; 5-5 On the wrong track: explainable models; 5-6 On the wrong track: black-box interpretability; 5-7 We should do more to incorporate interdisciplinary insights.

## D.6 Ethics (Figure 18)

**NLP is seen as good, and maybe extremely good (Q6-1, Q6-2).** Respondents overwhelmingly regard NLP as having a positive overall impact on the world, both up to the present (89%, Q6-1) and into the future (87%, Q6-2).
This strong endorsement of NLP's future impact stands in contrast with substantial worries about catastrophic outcomes (36%, Q3-4). While the views are anticorrelated,11 a substantial minority of 23% of respondents agreed with both Q6-2 and Q3-4, suggesting that they may believe NLP's potential for positive impact is so great that it even outweighs plausible threats to civilization. Whatever this means, it seems clear that many researchers think the stakes of NLP research may be high in the near future. Interestingly, agreement with both Q6-1 and Q6-2 is underpredicted by more than 15%, suggesting that pessimistic voices may be overrepresented in the public discourse.

**Responsibility for misuse: Researchers are somewhat split (Q6-3).** In Q6-3, we ask respondents if they think "it is unethical to build and publicly release a system which can easily be used in harmful ways." This is admittedly vague, and its answer depends on many factors (e.g., how "easily" the system can be used, how it is released, etc.). Our intent with the question is to get a sense of the degree to which respondents feel that researchers bear ethical responsibility for downstream misuse of the systems that they produce, and assess whether the community's views of itself are accurate in these terms. Responses are somewhat split, with a majority of 59% agreeing, and respondent predictions were reasonably accurate, averaging at 57% predicted agreement. But responses varied by gender, with 74% of women agreeing versus 53% of men. It is worth comparing Q6-3 to Article 1.1 of the ACM Code of Ethics,12 which is adopted by the ACL13 and states (among other things): "Computing professionals should consider whether the results of their efforts will... be used in socially responsible ways."

![27_image_0.png](27_image_0.png)

![27_image_1.png](27_image_1.png)

Figure 18 (survey statements, panel titles): 6-3 It is unethical to build easily-misusable systems; 6-4 Ethical and scientific considerations can conflict; 6-5 Ethical concerns mostly reduce to data quality and model accuracy; 6-6 It is unethical to predict psychological characteristics; 6-7 Carbon footprint is a major concern; 6-8 NLP should be regulated.

**Belief in ethical/scientific conflict is underestimated (Q6-4).** When asked if ethical considerations can sometimes be at odds with scientific progress, 74% of respondents agreed, considerably more than the average predicted agreement rate of 59%.
There are a couple of potential interpretations of disagreement with Q6-4. On one hand, respondents may believe any ethical problems that come up during the course of NLP research can be solved easily or are trumped by the benefits of scientific progress. On the other hand, they might believe that scientific 'progress' which is ethically regressive should not count as 'progress' or is inevitably pseudoscientific. Several views in line with the latter (and none with the former) were expressed in the survey feedback, suggesting that it is likely the dominant interpretation among those who disagree with Q6-4. As disagreement was significantly overpredicted by survey respondents, this view may be overrepresented in the public sphere relative to the proportion of NLP researchers who hold it.

**Reduction of ethics to data/accuracy is overestimated (Q6-5).** In light of public debates about the sources and nature of the harms caused by machine learning systems (Kurenkov, 2020), we ask whether the main ethical challenges posed by current ML systems can be reduced to issues with data quality and model accuracy (Q6-5). It is estimated to be a common view, averaging 47% predicted agreement, but is actually fairly uncommon, with only 25% of respondents agreeing.

**Predicting psychological characteristics is controversial, with caveats (Q6-6).** In light of discussions about surveillance and digital physiognomy (Agüera y Arcas et al., 2017), we ask whether it is inherently unethical to develop ML systems for predicting internal psychological characteristics like emotions, gender identity, and sexual orientation. Responses were split, with 48% agreeing. This question received a lot of critical feedback, and it is unclear how much of the split is due to differences in opinion versus interpretations of the question. Some respondents object to grouping transient states (e.g., emotion) with persistent traits (gender identity, sexual orientation), or say their answer depends on whether the trait is legally protected. Some say it depends on the inputs available to the model, and others say that it may not be *inherently* unethical but is ethically permissible in only a tiny set of carefully considered use cases. Which of these elements of context respondents assumed may have played a major role in determining their answers, and future surveys on these issues might benefit from splitting Q6-6 into several questions.

**Carbon footprint is a concern for many (Q6-7).** A majority of 60% agree with the statement that the carbon footprint of training large models should be a major concern for NLP researchers (Q6-7). This concern is based in part on trends in computation for machine learning at large scale, as Schwartz et al. (2020) note a 300,000x increase in computation over 6 years leading up to 2019.
Following this, Patterson et al. (2022) argue that advances in model efficiency and energy management can soon lead to a plateau in energy use from training machine learning models. Both argue that accountability and reporting of energy use is important for keeping the future carbon footprint of training ML models under control. The responses to Q6-7 indicate that a majority of the community would likely appreciate explicit reporting of energy use in NLP publications as well as work that increases the compute efficiency of model training. Responses to this question varied greatly by gender, with 78% of women agreeing as opposed to 51% of men.

**NLP researchers are skeptical of regulation (Q6-8).** Finally, we ask if the development and deployment of NLP systems should be regulated by governments (Q6-8). 41% of respondents agree, and while predictions are accurate on average, a large contingent (31%) of respondents predicted a very low agreement rate of 0–20%. We intend Q6-8 as a weak statement, i.e., that there should be any regulations around the development and deployment of NLP systems. However, respondents ask for more nuance, remarking that the answer depends on development versus deployment, details about use cases, and whether we only mean NLP-specific regulations or also include more general regulations on things like energy use or data privacy. As respondents may have come to this question with different assumptions or interpretations around such issues, it is hard to read into the specific implications of this result, except that respondents express a gen-

## E Survey Instructions

The text of our consent form is reproduced in Figure 19 and the survey instructions are reproduced in Figure 20, and examples of how it looks in the browser are given in Figure 21.

## F Correlation Matrices

Spearman correlations between questions are shown in Figure 22, and correlations between questions and demographic variables are shown in Figure 23. See §5 for a discussion of the correlation analysis.

## Consent Form For IRB-FY2022-6461

You have been invited to take part in a research study to learn more about the beliefs that natural language processing (NLP) researchers hold about the NLP research field, as well as their corresponding meta-beliefs about what other NLP researchers think. This study will be conducted by Prof. Samuel R. Bowman, CIMS - Center for Data Science, Courant Institute of Mathematical Sciences, New York University. If you agree to be in this study, you will be asked to do the following: - Complete a questionnaire reporting your beliefs on a variety of potentially controversial issues debated in the NLP community. - Report some of your own demographic characteristics and general information about your publication and research experience. Participation in this study will take approximately 20 minutes. There are no known risks associated with your participation in this research beyond those of everyday life. Although you will receive no direct benefits, the investigators will donate $10 to a non-profit organization or fund of your choice (four options are listed at the end of the survey) on your behalf, with the total capped at $10,000 distributed in proportion to the preferences of the first 1,000 people who complete the survey. Confidentiality of your research records will be strictly maintained by keeping the full data, with opt-in de-anonymization (email addresses only), in a private directory in the cloud accessible only to the investigators.
Email addresses will be stripped from this data before sharing with a small group of researchers to analyze, and we will only publicly share statistical aggregates to minimize the risk of deanonymization after the fact. Information not containing identifiers may be used in future research, shared with other researchers, or placed in a data repository without your additional consent. Participation in this study is voluntary. You may refuse to participate or withdraw at any time without penalty. You have the right to skip or not answer any questions by selecting a prefer not to say option. If there is anything about the study or your participation that is unclear or that you do not understand, or if you have questions or wish to report a research-related problem, you may contact Samuel R. Bowman at bowman@nyu.edu, 60 5th Ave. New York, NY, 10011. For questions about your rights as a research participant, you may contact the University Committee on Activities Involving Human Subjects (UCAIHS), New York University, 665 Broadway, Suite 804, New York, New York, 10012, at ask.humansubjects@nyu.edu or (212) 998-4808. Please reference the study \# (IRB-FY2022-6461) when contacting the IRB (UCAIHS). You may save a copy of this consent document to keep. Figure 19: Full text of the survey consent form. ## The Nlp Community Metasurvey This should take ∼20 minutes to complete. **For the first 1,000 respondents, we will donate $10 on your behalf to one of** several non-profits that you choose at the end of the survey. This is a survey of opinions on issues being publicly discussed in NLP. We (researchers at UW and NYU) invite anyone doing NLP research to take it, though our primary target demographic is **people who have authored or co-authored at least 2** publications in core ACL venues in the past 3 years. Please share this survey widely - we hope to cover as much of the target demographic as possible. For each statement, mark whether you agree or **disagree**. Then, you will report what percentage of community members you think agree with the statement. This will give a sense of whether our community's impression of itself aligns with its members' actual beliefs, and help us improve this alignment, communicate better, and motivate our work more effectively. More details about our motivation can be found at nlpsurvey.net. This was inspired by the PhilPapers surveys (philpapers.org/surveys). How to Answer You will be shown a statement and asked where you stand on an **agree/disagree spectrum**: - Agree - Weakly agree - Weakly disagree - Disagree Choose the answer that best reflects your views. In our analysis, we will interpret "weakly agree" and "weakly disagree" to include marginal views just barely agreeing or disagreeing (e.g., "depends, leaning positive/negative"). However, in case you cannot place yourself on either side of an issue, there will also be three non-answer options: - **Insufficiently informed on the issue**: You don't understand the statement or its subject matter well enough to form an opinion. - **Question is ill-posed**: You reject the distinction between agreeing and disagreeing, or don't think the statement admits any coherent interpretation. - **Prefer not to say**: You don't feel comfortable providing any of the other answer choices. If you pick "question is ill-posed" or "prefer not to say," we would appreciate (optional) feedback at the end of the respective section explaining your reasons so we can better interpret the results. 
Meta-Questions At the end of each section, you'll be asked to predict what proportion of people on the agree/disagree spectrum will answer **either "agree" or "weakly agree"** to each question. For the purpose of these questions, please predict relative to the target demographic: people with at least 2 publications in core ACL venues in the last 3 years. For our purposes, core venues are ACL, EMNLP, NAACL, EACL, AACL, TACL, and CL (including Findings). By our count from the ACL Anthology, this includes approximately 5,650 people. Even if you don't have a strong sense of the community's stance, give your best guess (unless you really feel like you have no priors, in which case you can skip these questions). At the end of the survey, you will rate your overall confidence in the meta-survey questions so we can account for it in our analysis. Privacy Your responses are anonymous and individual responses will not be released publicly. You will have the option to de-anonymize yourself at the end; this will help us audit the results and follow up with you after the survey, but will not be released publicly or shared with anyone without your permission. In accordance with the General Data Protection Regulation (GDPR), all survey respondents have rights over their personally-identifiable information. This form (nlpsurvey.net/gdpr.pdf) outlines those rights and how to exercise them. A list of the people who will have access to the non-anonymized data is available at nlpsurvey.net/about. You can reach us with questions and concerns at nlp-metasurvey-admin@nyu.edu. Figure 20: Full text of the survey instructions. ![31_image_0.png](31_image_0.png) ![32_image_0.png](32_image_0.png) ![32_image_1.png](32_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Limitations section, as well as Secs. 3 and 6 (briefly in context). ✗ A2. Did you discuss any potential risks of your work? The work was a survey of the NLP community and posed no risks to participants beyond those of everyday life. Our study was approved by our institutional IRB with exempt status. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1. ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✗ **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Not applicable. Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank. ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Informed consent was obtained for personally-identifiable information. The consent form and details on protection of that information are given in Figure 19 (Appendix E). ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 
Section 3 and Appendix C. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Section 3 and Appendix C. ## C ✗ **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Not applicable. Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Not applicable. Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank. D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Sections 2 (methodology) and 3 (demographics). ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix E, Figures 19–21. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 2 and Appendices A.3 and A.4. ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Section 2 and Appendix E, Figure 19. ✓ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Section 2, 'Platform and Distribution'; Appendix E, Figure 19. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Section 3 and Appendix C.
yang-etal-2023-prototype
Prototype-Guided Pseudo Labeling for Semi-Supervised Text Classification
https://aclanthology.org/2023.acl-long.904
Semi-supervised text classification (SSTC) aims at text classification with few labeled data and massive unlabeled data. Recent works achieve this task by pseudo-labeling methods, with the belief that the unlabeled and labeled data have identical data distribution, and assign the unlabeled data with pseudo-labels as additional supervision. However, existing pseudo-labeling methods usually suffer from ambiguous categorical boundary issues when training the pseudo-labeling phase, and simply select pseudo-labels without considering the unbalanced categorical distribution of the unlabeled data, making it difficult to generate reliable pseudo-labels for each category. We propose a novel semi-supervised framework, namely ProtoS2, with prototypical cluster separation (PCS) and prototypical-center data selection (CDS) technology to address the issue. Particularly, PCS exploits categorical prototypes to assimilate instance representations within the same category, thus emphasizing low-density separation for the pseudo-labeled data to alleviate ambiguous boundaries. Besides, CDS selects central pseudo-labeled data considering the categorical distribution, avoiding the model from biasing on dominant categories. Empirical studies and extensive analysis with four benchmarks demonstrate the effectiveness of the proposed model.
# Prototype-Guided Pseudo Labeling For Semi-Supervised Text Classification Weiyi Yang1**, Richong Zhang**1,2∗ , Junfan Chen1, Lihong Wang3**, Jaein Kim**1 1SKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China 2Zhongguancun Laboratory, Beijing, China 3CNCERT ## Abstract The semi-supervised text classification (SSTC) task aims at training text classification models with a few labeled data and massive unlabeled data. Recent works achieve this task by pseudo-labeling methods that assign pseudolabels to unlabeled data as additional supervision. However, these models may suffer from incorrect pseudo-labels caused by underfitting of decision boundaries and generating biased pseudo-labels on imbalanced data. We propose a prototype-guided semi-supervised model to address the above problems, which integrates a prototype-anchored contrasting strategy and a prototype-guided pseudo-labeling strategy. Particularly, the prototype-anchored constrasting constructs prototypes to cluster text representations with the same class, forcing them to be high-density distributed, thus alleviating the underfitting of decision boundaries. And the prototype-guided pseudo-labeling selects reliable pseudo-labeled data around prototypes based on data distribution, thus alleviating the bias from imbalanced data. Empirical results on 4 commonly-used datasets demonstrate that our model is effective and outperforms state-ofthe-art methods. ## 1 Introduction Traditional text classification has achieved profound success by leveraging numerous labeled data. However, acquiring large-scale labeled data is expensive in many real-world scenarios, making training effective classifiers using such approaches difficult. Therefore, semi-supervised text classification (SSTC), aiming at text classification requiring a few labeled data and massive unlabeled data, has become a new appealing technique. As a promising semi-supervised paradigm, the pseudo-labeling method (Mukherjee and Awadallah, 2020; Xie et al., 2020; che; Lee et al., 2021; Li ![0_image_0.png](0_image_0.png) et al., 2022), which assigns pseudo-labels to unlabeled data instances as additional supervision, has drawn attention. They follow the well-known lowdensity separation assumption (Chapelle and Zien, 2005) to generate pseudo-labels for the unlabeled data scattered in the high-density feature space, helping to identify the boundaries between classes. Trained with the mixture of labeled and pseudolabeled data, the model perceives more abundant supervision signals and reduces empirical errors. Existing models attempt to assign soft pseudolabels for unlabeled texts (Xie et al., 2020) or keep pseudo-labels with lower uncertainty (Mukherjee and Awadallah, 2020) to alleviate the influence brought by the incorrect pseudo-labeled data. In practice, the *underfitting of decision boundaries* is usually detrimental to the model learning (shown in Figure 1(a)). More careful considerations are expected in properly utilizing these pseudo-labeled data near decision boundaries. In addition, the pseudo-labeling process may meet with *bias from imbalanced data* and will be biased toward dominant class data (shown in Figure 1(b)). This problem may further lead the dominant classes to dominate the follow-up training and assign more incorrect labels to the unlabeled data. 
∗Corresponding author: zhangrc@act.buaa.edu.cn

Believing that addressing both the underfitting of decision boundaries and the bias from the imbalanced data is crucial, we propose a prototype-guided pseudo labeling (PGPL) model to tackle the two problems simultaneously. PGPL improves the SSTC performance with a Prototype-Anchored Contrasting (PAC) module and a Prototype-Guided Pseudo Labeling (PGP) module. Specifically, in the PAC module, we leverage prototypical learning (Bien and Tibshirani, 2011) to construct prototypes with representations of labeled texts and classify unseen texts according to their distance from the prototype of each class. In this way, we force the text representations to locate in high-density areas, pushing the unreliable pseudo-labeled data away from the decision boundaries, as demonstrated in Figure 1(a). Furthermore, to alleviate the bias from the imbalanced data, our PGP module chooses reliable pseudo-labeled data by reducing the probability of assigning labels of the dominant class to unlabeled data, and thus prevents the model from biasing on dominant classes, as demonstrated in Figure 1(b). Finally, we conduct experiments on four commonly-used datasets, and the results demonstrate that our proposed PGPL model outperforms the state of the art in various semi-supervised settings. The experimental analysis also suggests that our PGPL model successfully separates the different classes and assigns reliable pseudo-labels to the unlabeled instances. These results confirm the effectiveness of PGPL.

To summarize, our main contributions are four-fold:

- We propose a prototype-guided pseudo-labeling model for the semi-supervised text classification task, which simultaneously addresses the underfitting of decision boundaries and the bias from imbalanced data.
- We present a prototype-anchored contrasting module to generate high-density distributed representations, which reduces the unreliable pseudo-labeled data chosen for training the semi-supervised models.
- We leverage a prototype-guided labeling module to choose data around prototypes, which alleviates the bias from imbalanced data.
- Empirical studies indicate that our proposed model outperforms previous state-of-the-art methods on four commonly-used benchmarks, and comprehensive analysis confirms the effectiveness of PGPL.

## 2 Related Works

## 2.1 Semi-Supervised Text Classification

Existing methods for the SSTC task can be divided into three groups: generative methods, graph-based methods, and pseudo-labeling methods. Generative methods (Croce et al., 2019, 2020; Gururangan et al., 2019) attempt to learn a generative model that establishes a mapping function based on the distribution of the labeled/unlabeled data to the classes. However, they usually require the labeled and unlabeled data to strictly obey the same data distribution, which can hardly be satisfied in practical scenarios. Graph-based methods (Linmei et al., 2019; Li et al., 2021; Cui et al., 2022) attempt to construct a graph structure underlying the data, where each node represents a labeled/unlabeled instance and each edge represents a pairwise relation under a certain similarity measurement. However, they construct such graphs upon all the data, inevitably requiring huge computational resources, which can be impractical on large-scale datasets.
Pseudo-Labeling methods learn the model on the mixture of labeled data and unlabeled data (annotated with pseudo-labels), thus accessing additional beneficial supervision. There exist two major practices: One line of the methods (Wang et al., 2021; Mukherjee and Awadallah, 2020; Lee et al., 2021; Tsai et al., 2022) is based on the self-training strategy, which first trains a base model on labeled data, and then retrains the model on the unlabeled data annotated with pseudo-labels of high confidence. Another line of the methods (Najafi et al., 2019; Xie et al., 2020; Sohn et al., 2020) incorporates the idea of consistency learning, which adopts the unlabeled data to enhance model robustness with data perturbation, such as adversarial attack (Najafi et al., 2019) and data augmentation (Xie et al., 2020; Sohn et al., 2020; Chen et al., 2020). Although these methods provide potentiality and flexibility in practice, they still suffer from the underfitting near decision boundaries problem, and easily encounter learning bias from imbalanced data. In this paper, we develop the previous pseudo-labeling studies with prototype learning, which enhances low-density separation by exploiting prototypes to cluster text representations, and alleviates imbalanced class bias with prototype-guided pseudo labeling.

![2_image_0.png](2_image_0.png)

## 2.2 Prototypical Learning

Prototypical learning (Bien and Tibshirani, 2011) aims to learn representative prototypes to summarize the instances in the same class, and has been widely used for unsupervised (Li et al., 2020; Pan et al., 2019) and semi-supervised learning (Snell et al., 2017; Arik and Pfister, 2020; Gu, 2020; Ren et al., 2018). In unsupervised scenarios, the most representative method is PCL (Li et al., 2020), which introduces a prototype as the center of mass of a cluster formed by similar samples and assigns each sample to multiple prototypes of different granularity. The goal of training is to bring each image embedding closer to its associated prototype. In semi-supervised scenarios, the prototypes can first be estimated with the labeled instances. Then, the pseudo-labels or classes of unlabeled data are obtained by calculating distances to the prototypes. Among these works, Gu (2020) attempts prototype-based pseudo-labeling for image classification but neglects the underfitting near the decision boundary and bias from the imbalanced data. We utilize prototypes to conduct low-density separation and data selection to alleviate both issues.

## 3 Problem Formulation

In semi-supervised text classification, we are given a small set of labeled text data and a large set of unlabeled text data. Formally, let $\mathcal{C}$ be the set of all labels of interest. $D_l = \{(x_1^l, y_1^l), \cdots, (x_m^l, y_m^l)\}$ is the set of pre-annotated $m$ labeled text instances, where each $x_i^l$ and $y_i^l \in \mathcal{C}$ represent a text instance and its corresponding label. $D_u = \{x_1^u, \cdots, x_n^u\}$ is the set of $n$ unlabeled text data, where $x_i^u \in D_u$ denotes an unlabeled text instance. Semi-supervised text classification hopes to leverage the limited annotated data and expand its knowledge to the unlabeled data for pseudo-labeling and include them as additional training data.

## 4 Methodology

We propose a semi-supervised text classification model PGPL to tackle problems of underfitting near the decision boundary and bias from the imbalanced data.
In this section, we first illustrate the overall model structure and introduce the fundamental SSTC structure with supervised and unsupervised losses, then describe the prototype-anchored contrasting module and finally explain the prototype-guided pseudo labeling module.

## 4.1 Model Structure

The model structure of PGPL is illustrated in Figure 2. The labeled and unlabeled text instances are first input to the Encoder. Then, the labeled text embeddings are used to generate pseudo labels in the Prototype Extraction module and to encourage each instance to be closer to its categorical prototype and away from the other categorical prototypes via prototype-anchored contrasting. Finally, we perform prototype-guided pseudo labeling on the unlabeled data. The pseudo-labeled data is used to augment the training set and retrain the model.

## 4.2 Fundamental SSTC Framework

## 4.2.1 Backbone Text Classifier

To utilize the power of pre-trained language models in SSTC, we follow the previous practice (Chen et al., 2020) that uses BERT (Devlin et al., 2019) as the text encoder. Specifically, let $x$ represent either a labeled instance $x^l \in D_l$ or an unlabeled instance $x^u \in D_u$. The function $f(x;\theta)$ denotes the BERT encoding process with a mean pooling operation over the token representations. Afterward, a two-layer MLP with $\tanh(\cdot)$ activation is adopted to derive the relevance score of the input text $x$ corresponding to each class:

$$g(x,\phi_{c},\theta)=\mathrm{MLP}(f(x;\theta);\phi_{c}),\tag{1}$$

where $\phi_c$ is the MLP parameter corresponding to class $c \in \mathcal{C}$. Note that this backbone text classifier can be trained by two objectives: 1) a supervised loss function $\mathcal{L}_s$ (in Eq. (3)) over the labeled instances; 2) an unsupervised loss function $\mathcal{L}_u$ (in Eq. (5)) over the unlabeled instances.

## 4.2.2 Annealing Supervised Loss

Since the number of labeled instances is usually limited and costly in SSTC, we adopt the annealing supervised loss to fully exploit the labeled data. There is an existing work that introduces the training signal annealing (TSA) strategy (Xie et al., 2020), which releases labeled instances gradually during the training phase instead of observing them at once, to prevent the model from early over-fitting on the labeled data. Intuitively, in the initial training phase, we expect the model to learn about the data without overconfidence, so that it does not overvalue the incidental features of the data. Therefore, we employ a threshold $\eta_t$, and at training iteration $t$, we select the instances whose highest class-confidence score is lower than the threshold $\eta_t$ for loss calculation. The threshold $\eta_t$ is derived as:

$$\eta_{t}=\frac{t}{T}(1-\tau)+\tau,\tag{2}$$

where $T$ is the total number of iterations of the training process, and $\tau$ is the initial threshold, set to a scalar larger than $1/|\mathcal{C}|$ for all the classes. This strategy encourages the model to observe and learn the data with relatively lower confidence, and gradually adapt to perceive all training instances. Based on the threshold, we define the supervised loss at iteration $t$ with cross-entropy:

$$\mathcal{L}_{s}=-\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}(p_{y_{i}^{l}}<\eta_{t})\log\frac{\exp(g(x_{i}^{l},\phi_{y_{i}^{l}},\theta))}{\sum\limits_{c\in\mathcal{C}}\exp(g(x_{i}^{l},\phi_{c},\theta))},\tag{3}$$

where $p_{y_{i}^{l}}$ is the predicted probability of $y_{i}^{l}$ corresponding to instance $x_{i}^{l}$, and $\mathbb{I}(\cdot)$ is a binary indicator which equals 1 if $p_{y_{i}^{l}}<\eta_{t}$ and 0 otherwise.
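As a rough illustration of Eqs. (2)–(3), a PyTorch-style sketch of the annealed, thresholded cross-entropy might look as follows (this is our own illustrative code under assumed tensor shapes, not the authors' released implementation):

```python
# Illustrative sketch of the annealed supervised loss (Eqs. 2-3); not the
# authors' code. `logits` has shape (batch, |C|) from g(x, phi, theta),
# `labels` has shape (batch,) with gold class indices.
import torch
import torch.nn.functional as F

def annealed_supervised_loss(logits, labels, step, total_steps, num_classes):
    tau = 1.0 / num_classes                                   # assumed initial threshold
    eta_t = (step / total_steps) * (1.0 - tau) + tau          # Eq. (2)
    probs = F.softmax(logits, dim=-1)
    p_gold = probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # p_{y_i^l}
    mask = (p_gold < eta_t).float()                           # indicator in Eq. (3)
    ce = F.cross_entropy(logits, labels, reduction="none")
    return (mask * ce).mean()                                 # 1/m normalization as in Eq. (3)
```

In effect, the mask simply drops labeled examples the classifier is already confident about at step $t$, and the threshold relaxes toward 1 as training proceeds.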
## 4.2.3 Selective Unsupervised Loss

To leverage the unlabeled text data, the selective unsupervised loss module assigns pseudo-labels for each unlabeled text instance $x_i^u \in D_u$ to augment the data. Following recent works (Xie et al., 2020; Mukherjee and Awadallah, 2020; Sohn et al., 2020) in semi-supervised learning, the original unlabeled data is augmented with the pseudo-labeled data and used to optimize the text classifier. Specifically, we incorporate the augmented data $\tilde{x}_i^u$ of $x_i^u$ with its pseudo label $\hat{y}_i^u$ into the new training dataset to train the classifier with cross-entropy loss. Formally, for an unlabeled instance $x_i^u$, its pseudo label is determined by:

$$\hat{y}_{i}^{u}=\arg\operatorname*{max}_{c\in\mathcal{C}}\exp(g(x_{i}^{u},\phi_{c},\theta)),\tag{4}$$

Among the unlabeled instances with pseudo-labels, we further select high-quality pseudo-labeled instances that are used in the next training phase by the following formula:

$$\mathcal{L}_{u}=-\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(x_{i}^{u})\log\frac{\exp(g(\tilde{x}_{i}^{u},\phi_{\hat{y}_{i}^{u}},\theta))}{\sum_{c\in\mathcal{C}}\exp(g(\tilde{x}_{i}^{u},\phi_{c},\theta))},\tag{5}$$

where the binary indicator $\mathbb{I}(\cdot)\in\{0,1\}$ is:

$$\mathbb{I}(x_{i}^{u})=\mathbb{I}(\hat{y}_{i}^{u}=c)\wedge\mathbb{I}(\mathrm{d}(f(x_{i}^{u}),z_{c})\leq d_{c}),\tag{6}$$

where $\wedge$ represents the logic operation AND. $\mathrm{d}(\cdot,\cdot)$ denotes the distance function that measures the distance between the text representation $f(x_i^u)$ and the prototype $z_c$ of class $c$. $d_c$ serves as the threshold to determine the value in the binary indicator. The selected instances will be used in the next training phase as additional training data.

| Dataset | Classification Type | Class | Train | Unlabeled | Dev | Test |
|---------------|------------------------|---------|---------|-------------|-------|--------|
| AG News | News Topic | 4 | 200 | 5000 | 2000 | 1900 |
| DBpedia | Wikipedia Topic | 14 | 200 | 5000 | 2000 | 5000 |
| Yahoo! Answer | QA Topic | 10 | 200 | 5000 | 2000 | 6000 |
| IMDB | Movie Review Sentiment | 2 | 200 | 5000 | 2000 | 12500 |

## 4.3 Prototype-Anchored Contrasting Module

To alleviate the underfitting near decision boundaries problem, inspired by the low-density separation hypothesis, the prototype-anchored cluster contrasting module (PAC) forces each text representation to be closer to its categorical prototype and far away from unrelated prototypes. As such, the number of text instances lying near the decision boundaries can be reduced.

## 4.3.1 Categorical Prototypes Extraction

The prototype of a class is defined by the mean vector of text embeddings within the class. Concretely, we first obtain the text embedding $f(x_i^l)$ for each $x_i^l \in D_l$, and generate prototypes according to their classes. For each $c \in \mathcal{C}$, the prototype $z_c$ is computed by:

$$z_{c}=\frac{1}{n_{c}}\sum_{y_{i}^{l}=c}f(x_{i}^{l}),\tag{7}$$

$$n_{c}=\sum_{y_{i}^{l}\in D_{l}}\mathbb{I}(y_{i}^{l}=c),\tag{8}$$

where $n_c$ is the instance number of class $c$.
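As a small illustration of Eqs. (7)–(8), the class prototypes are just per-class means of the labeled embeddings; a PyTorch-style sketch (our own code with assumed shapes, not the authors' implementation) is:

```python
# Illustrative sketch of categorical prototype extraction (Eqs. 7-8); not the
# authors' code. `embeddings` is (m, hidden) from f(x^l); `labels` is (m,)
# with class indices in [0, num_classes).
import torch
import torch.nn.functional as F

def compute_prototypes(embeddings, labels, num_classes):
    one_hot = F.one_hot(labels, num_classes).float()             # (m, |C|)
    counts = one_hot.sum(dim=0).clamp(min=1.0)                   # n_c as in Eq. (8)
    prototypes = one_hot.t() @ embeddings / counts.unsqueeze(1)  # z_c as in Eq. (7)
    return prototypes                                            # (|C|, hidden)
```

The resulting (|C|, hidden) matrix can then be compared against unlabeled embeddings by Euclidean distance, as used in the selection rule of Eq. (6) and in the contrasting loss below.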
More specifically, a confidence score for each instance is calculated and is used as a weight value for determining how much it should be migrated. In other words, as the instances obtained low confidence are more likely to be assigned with incorrect pseudo-labels, how far the model allows the instances to migrate is proportional to confidence score. Thus, the loss function of this module can be formulated as: $$\mathcal{L}_{p}=-\frac{1}{m}\sum_{i}^{m}\sum_{c}\mathbb{I}(y_{i}^{l}=c)\log\frac{\exp\left(-\mathrm{d}(f(\tilde{x_{i}^{l}}),z_{c})\right)}{\sum_{k\in\mathcal{C}}\exp(-\mathrm{d}(f(\tilde{x_{i}^{l}}),z_{k}))}$$ $$-\frac{\lambda}{n}\sum_{j}^{n}\sum_{c}p_{\hat{y}_{j}^{u}}\mathbb{I}(\hat{y}_{j}^{u}=c)\log\frac{\exp\left(-\mathrm{d}(f(\tilde{x}_{j}^{u}),z_{c})\right)}{\sum_{k\in\mathcal{C}}\exp(-\mathrm{d}(f(\tilde{x}_{j}^{u}),z_{k}))},\tag{9}$$ where m and n are the instance number of labeled and unlabeled data, respectively. The function d(·, ·) is a distance metric function and λ is a weight factor. We use Euclidean distance for the distance metric between each instance and the prototype embedding, and use softmax normalization for the instances in the representation space. Here we also consider the pseudo-label confidence pyˆ u i (see Eq. (4)) to balance the training. ## 4.4 Prototype-Guided Pseudo Labeling Module As discussed in the introduction, the unlabeled instances are easily misguided by the dominant classes, inevitably leading to assigning incorrect pseudo-labels. To alleviate the bias from imbalanced data, we propose a prototype-guided pseudo labeling module (PGP). The idea is to select different k-nearest pseudo-labeled instances corresponding to the center (i.e., the prototype) for each class according to a short training history. To balance the training process, we expect not to select (or select fewer) pseudo-labeled instances for a class that has been over-trained in previous iterations. Thus, we record a statistic value µ c t during training, which returns the number of selected pseudo-labeled texts for class c at the t th iteration. For simplicity, we use µ c<t to denote the summation of µ c t′ (with t′ < t) from a training history of previous iterations. Then, at each iteration t, we can find the least trained class and compute its total number of selected pseudo-labeled texts during the training history by: $$\begin{array}{r l}{{\mu_{t}^{c}=\sum_{x_{j}^{u}\in B_{u}}\mathbb{I}({\hat{y}}_{j}=c),}}\\ {{\gamma_{t}=\arg\operatorname*{min}_{c\in\mathcal{C}}\mu_{<t}^{c}.}}\end{array}$$ <t. (11) We hope this least trained class utilizes more pseudo-labeled instances in the next training iteration than other classes to keep balanced training. To this end, we determine the number of nearest text instances kc for class c ∈ C according to: $$k_{c}=\begin{cases}\mu_{t}^{c}&\text{if}\mu_{<t}^{c}-\gamma_{t}=0\\ \mu_{t}^{c}-(\mu_{<t}^{c}-\gamma_{t})&\text{if}0\leq\mu_{<t}^{c}-\gamma_{t}<\mu_{t}^{c}\\ 0&\text{if}\mu_{<t}^{c}-\gamma_{t}\geq\mu_{t}^{c}\end{cases},\tag{12}$$ $$(13)^{\frac{1}{2}}$$ Here we encourage to select more pseudo-labeled instances for the less-trained classes, and fewer pseudo-labeled instances for the over-trained classes, thus alleviating the imbalanced training problem. Based on kc, we select the top-kc pseudolabeled instances near the categorical prototype of class c by Euclidean distance d(·, ·) as follows: $$d_{c}=\mathrm{TopK}(\mathrm{d}(f(x_{j}^{\tilde{u}}),z_{c}),k_{c}),$$ ), zc), kc), (13) where dc is the distance threshold used in Eq. 
(6) to select pseudo-labeled texts. By selecting knearest pseudo-labeled text instances around the corresponding categorical prototypes, the model obtains more reliable training data for SSTC. The overall objective L is combined with three losses, including the cross-entropy losses on the labeled and unlabeled data, and the prototypeanchored contrastive loss: $${\mathcal{L}}={\mathcal{L}}_{s}+\beta_{1}{\mathcal{L}}_{u}+\beta_{2}{\mathcal{L}}_{p},$$ where β1 and β2 are harmonic factors to balance the weight of the different losses and β1 is usually fixed to 1. ## 5 Experimental Setup 5.1 Dataset And Pre-Processing We conduct intensive experiments on four widely used text classification benchmark datasets: IMDB (Maas et al., 2011), AG News (Zhang et al., 2015), Yahoo! Answers (Chang et al., 2008), and DBpedia (Lehmann et al., 2015). We follow the original setting of test set splitting and randomly sample from the training set to form the labeled (10) $\binom{11}{2}$ (11) ... and unlabeled sets used for the training. The experimental results are averaged after five runs. The statistics and splitting information of the dataset are shown in Table 1. ## 5.2 Baselines To verify the effectiveness of our method, we compare our method to the several baselines: BERT (Devlin et al., 2019) adopts a pre-trained model for text classification, trained with only labeled data. Delta-training (Jo and Cinarel, 2019) distinguishes the instances with incorrect prediction and correct prediction by two separate classifiers. VAMPIRE (Gururangan et al., 2019) pretrains a variational autoencoder, and uses its internal states as features in a downstream classifier. UDA (Xie et al., 2020) makes soft predictions with original data to train the augmented data. MixText (Chen et al., 2020) makes predictions by mixing the original data with the augmented data. UST (Mukherjee and Awadallah, 2020) makes predictions by selecting pseudo labels with the same prediction multiple times. SALNet (Lee et al., 2021) makes predictions with the help of categorical dictionary. FLiText (Liu et al., 2021) adopts A lighter framework with convolution networks, predicted pseudolabels through knowledge distillation. CEST (Tsai et al., 2022) proposes a certaintydriven sample selection method and a contrastenhanced similarity graph in self-training. SAT (Chen et al., 2022) trains a meta-learner to predict the labels with weakly-augmented data. For fairness, we implement BERT, UDA, MixText with the code provided by Chen et al. (2020) in our experiments. For other baselines, we use the reported results provided from the original research paper to avoid re-implementation bias. $$(14)^{\frac{1}{2}}$$ ## 5.3 Implementation We used PyTorch1for implementation. For all compared models, the maximum sentence length is set to 256. For sentences that exceed the limit, we keep the first 256 tags. All hyper-paramters are selected by grid search on the validation set. We choose 1e-5 as the learning rate for the BERT encoder and 1e-3 for the MLP. For the unlabeled data, we choose German as the intermediate language $${}^{1}\mathrm{{https://pytorch.org/}}$$ | Model | AG News | IMDB | Yahoo! 
Answer | DBpedia | | | | | | | | | |---------|-----------|--------|-----------------|-----------|------|------|------|------|------|------|------|------| | 10 | 30 | 200 | 10 | 30 | 200 | 10 | 30 | 200 | 10 | 30 | 200 | | | Bert | 81.0 | 84.3 | 87.2 | 70.6 | 73.3 | 86.1 | 60.1 | 64.1 | 69.3 | 96.6 | 98.2 | 98.6 | | UDA | 86.4 | 86.4 | 88.3 | 86.4 | 86.4 | 88.7 | 64.3 | 68.3 | 70.2 | 97.8 | 98.3 | 98.8 | | Mixtext | 87.3 | 87.4 | 88.2 | 74.2 | 85.3 | 89.1 | 67.7 | 68.5 | 70.6 | 98.5 | 98.8 | 98.9 | | PGPL | 87.8 | 88.5 | 89.2 | 88.9 | 90.2 | 90.3 | 67.4 | 69.1 | 70.7 | 98.7 | 99.0 | 99.0 | for back translation augmentation using FairSeq22, with a random sampling temperature of 0.9. We use AdamW (Loshchilov and Hutter, 2018) to optimize model parameters. The train epoch number is set to 20. In the training process, we choose 16 unlabeled samples for each batch. And for the labeled data, the number of labeled data is the number of classes of each dataset in each batch. The factor λ is set to 0.2 in IMDB and AG News and 0.5 in Yahoo!Answer and DBpedia. The history t iteration is set to 5. The harmonic factor β1 and β2 are set to 1 and 0.5, respectively. ## 5.4 Overall Results We first compare our model with the BERT, UDA and Mixtext. We run these models with the same setting of our model. The training set consists of 5000 unlabeled data and respectively 10, 30, and 200 labeled data for each class. Table 2 shows the results on 5000 unlabeled data and different amount of labeled data. The results show that our method is superior to all baselines. Especially when there are only 10 labeled instances for each class, our model outperforms by 6.8% for AG News, 18.3% for IMDB, and 7.3% for Yahoo Answers, respectively. On DBpedia, due to the performance of the initially trained classifier being already outstanding (98% as shown in the table), the performance improvement is limited. Compared to the state-of-the-art methods, our method outperforms under the settings of different numbers of labeled data in AG News, IMDB, and DBpedia. On Yahoo! Answer, it also performs better than the state-of-the-art methods except under 10 labeled data. These results verify the effectiveness of our PGPL model for the SSTC task. As some models use a different number of labeled and unlabeled texts during training, we con2https://github.com/pytorch/fairseq | Datasets | Model | K | Acc. | |----------------|---------|------|--------| | PGPL | 10 | 87.8 | | | SAT | 20 | 86.4 | | | CEST | 30 | 87.1 | | | UST | 30 | 87.7 | | | VAMPIRE | 200 | 83.9 | | | AG News | PGPL | 30 | 90.2 | | SAT | 10 | 69.0 | | | CEST | 30 | 90.2 | | | SALNet | 21 | 75.7 | | | VAMPIRE | 200 | 82.2 | | | Delta-training | 212 | 75.0 | | | IMDB | PGPL | 10 | 67.4 | | SAT | 20 | 61.5 | | | SALNet | 34 | 53.7 | | | VAMPIRE | 200 | 59.9 | | | FLiText | 500 | 65.1 | | | Yahoo! Answer | PGPL | 10 | 98.7 | | SALNet | 20 | 98.2 | | | UST | 30 | 98.6 | | | CEST | 30 | 98.6 | | | DBpedia | | | | duct experiments using their original settings and compare them as in Table 3. The compared models includes UST (Mukherjee and Awadallah, 2020), VAMPIRE (Gururangan et al., 2019), CEST (Tsai et al., 2022), SALNet (Lee et al., 2021), FLiText (Liu et al., 2021), SAT (Chen et al., 2022) and Delta-training (Jo and Cinarel, 2019). For consistency, we take the results from the original papers and round some of the results to one decimal place. 
The results show that our model outperforms other SOTA models under the same experimental setting and keeps its superiority when applied to different experimental settings.

## 5.5 Impact Of Labeled And Unlabeled Data Distribution

In order to evaluate our results more accurately, we performed four experiments with different constraints for further comparison: (a) keep the original proportions, with both balanced labeled data and balanced unlabeled data; (b) keep the original proportion of balanced unlabeled data, but perform a random imbalancing operation on the labeled data; (c) keep the original proportion of balanced labeled data, but perform a random imbalancing operation on the unlabeled data; (d) perform a random imbalancing operation on both the labeled and unlabeled data. *Balanced* here means that the number of instances in each class is the same. In the imbalanced setting, for convenience, half of the classes have twice as many instances as the other half. The experimental results are averaged over three runs on the validation set.

| Data setting (labeled/unlabeled) | Model | Result |
|----------------------------------|---------|--------|
| balanced / balanced | UDA | 64.3 |
| balanced / balanced | Mixtext | 68.1 |
| balanced / balanced | PGPL | 68.3 |
| imbalanced / balanced | UDA | 64.8 |
| imbalanced / balanced | Mixtext | 67.7 |
| imbalanced / balanced | PGPL | 68.8 |
| balanced / imbalanced | UDA | 57.9 |
| balanced / imbalanced | Mixtext | 67.6 |
| balanced / imbalanced | PGPL | 67.8 |
| imbalanced / imbalanced | UDA | 64.1 |
| imbalanced / imbalanced | Mixtext | 65.5 |
| imbalanced / imbalanced | PGPL | 67.3 |

Table 4: Experimental results on different data balance settings, grouped by labeled/unlabeled data.

From Table 4, we find that UDA shows the worst result under condition (c) and Mixtext under condition (d), the settings with inconsistent distribution proportions, even though the two baselines have similar results in the balanced condition. Our model shows consistently good results in all four settings. In particular, when only the unlabeled data is imbalanced, the result improves from 57.9% (UDA) to 67.8%. In summary, under the four different data distribution settings, our model is superior to the other baselines, indicating that it is more capable of handling complex data distribution scenarios.

## 5.6 Qualitative Evaluation

To investigate whether the PAC module helps to separate the clusters of different classes, we compare the visualized embeddings of the full PGPL model and PGPL without the PAC module in Figure 3.

![7_image_0.png](7_image_0.png)

The visualized results show that when PGPL is implemented without the PAC module (e.g., Figure 3 (a)), the boundaries between clusters are unclear. Concretely, the clusters of different classes are separated to some extent, but not as distinctly as those of the full PGPL with PAC. This suggests that the PAC module successfully separates the clusters of different classes and reduces the chance of data instances lying near the class boundaries.

## 5.7 Class Significance Evaluation

Table 5 shows that PGPL with PGP applied improves the performance on all individual classes, especially the World and Sports classes.

## 5.8 Ablation Study

To investigate the impact of each component of the model, we evaluate variants of PGPL. Table 6 shows the results of PGPL with each component removed in turn, averaged over three runs on the validation set with 10 labeled and 5,000 unlabeled instances per class. Both PAC and PGP individually improve the final model performance, and the combination of PAC and PGP achieves the best performance.
TSA module is also advantageous for improving the performance, while applying to DBpedia is an exception because its initial classifier is already outstanding as explained previously. This confirms the effectiveness of our proposed approach. | Class | Model | Accuracy | |----------------|---------|------------| | World | PGPL | 91.2 | | -w/o PGP | 88.6 | | | Sports | PGPL | 84.0 | | -w/o PGP | 81.1 | | | Business | PGPL | 96.1 | | -w/o PGP | 96.0 | | | Sci/Tech | PGPL | 81.9 | | -w/o PGP | 80.3 | | | All categories | PGPL | 88.3 | | -w/o PGP | 86.5 | | | Data | AG News | IMDB | |---------|--------------|---------| | PGPL | 88.3 | 89.7 | | w/o PGP | 86.5 | 88.2 | | w/o PAC | 86.2 | 88.9 | | w/o TSA | 87.2 | 87.9 | | Data | Yahoo!Answer | DBpedia | | PGPL | 68.3 | 98.4 | | w/o PGP | 65.7 | 98.2 | | w/o PAC | 67.4 | 98.2 | | w/o TSA | 68.0 | 98.6 | ## 5.9 Stability Evalution To investigate the efficiency of different models, we illustrate their training progresses on AG News and Yahoo! Answer datasets. From the results on the AG News dataset shown in Figure 4 (a), we observe that the model w/o PGP performs very poorly at the early stage of training but converges at about the 40th step. That is, the model w/o PAC shows a good start but continues fluctuating, and it cannot reach the convergence even after 100 training steps. The performance at early stage of PGPL is good and the training progress is also stable. The training process on Yahoo! Answer dataset shown in Figure 4 (b) shows the similar tendency with the results in Figure 4 (a), except that the model w/o PGP and the full PGPL model show similar training stability and convergence speed. These results suggest that the Prototypical Cluster Separation module is an indispensable component in keeping the training stable and guaranteeing the convergence speed of the PGPL model. They also indicate that including both PAC and PGP modules not only improves the test accuracy of the model but also improves its training stability and training speed. Lastly, we conclude that including PAC in the early training stage provides more reliable pseudo-labeled texts and helps the latter training stage be more stable. ![8_image_0.png](8_image_0.png) ## 6 Conclusion In this paper, we propose a semi-supervised model PGPL for SSTC task that applies PAC and PGP strategies. Specifically, PAC constructs prototypes to cluster text representations belongs to the same class by forcing them to be high-density distributed, thus alleviating the underfitting near decision boundary problem. PGP selects reliable pseudo-labeled data nearby prototypes to address the training bias from the imbalanced data. Experimental results demonstrate PGPL is effective and outperforms previous state-of-the-art methods. ## Limitations Although PGPL is proven to be effective according to our intensive experiments, the current design of the PGP module may not be optimal and could be improved in the future. Specifically, on the one hand, to choose the proper number of nearest text instances kc for each class c, we need to obtain the statistic values µ y t , µ y <t and computed values µ c t , µ c<t, γt for each dataset, which makes it somewhat not easy to scale to many different datasets. On the other hand, we heuristically filter pseudo-labeled texts by the top-k selection, which may also be improved with more systematical approach. 
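Since the limitations above center on the bookkeeping needed to choose k_c, a minimal sketch of Eqs. (10)–(12) may make that bookkeeping concrete. It is an illustration only: `cum_counts` is assumed to store the per-class selection history µ_{<t}^c, and Eq. (11), although written as an argmin, is used as the count of the least-trained class, which is what the sketch computes. The top-k_c filtering of Eq. (13) then simply keeps, for each class, the k_c pseudo-labeled instances closest to its prototype.

```python
import torch

def balanced_kc(batch_pseudo: torch.Tensor, cum_counts: torch.Tensor, num_classes: int) -> torch.Tensor:
    # mu_t^c: pseudo-labeled texts of class c in the current batch (Eq. (10)).
    mu_t = torch.bincount(batch_pseudo, minlength=num_classes)
    # gamma_t: selection count of the least-trained class so far (reading of Eq. (11)).
    gamma_t = cum_counts.min()
    surplus = cum_counts - gamma_t            # how far ahead of the least-trained class
    # Piecewise rule of Eq. (12): over-trained classes get a reduced (possibly zero) budget.
    return torch.clamp(mu_t - surplus, min=0)
```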
## Acknowledgements This work is supported partly by the National Key R&D Program of China under Grant 2021ZD0110700, partly by the National Natural Science Foundation of China (No.62106059), partly by the Fundamental Research Funds for the Central Universities, and partly by the State Key Laboratory of Software Development Environment. ## References Sercan Ö Arik and Tomas Pfister. 2020. Protoattend: Attention-based prototypical learning. *The Journal* of Machine Learning Research, 21(1):8691–8725. Jacob Bien and Robert Tibshirani. 2011. Prototype selection for interpretable classification. *The Annals* of Applied Statistics, 5(4):2403–2424. Ming-Wei Chang, Lev-Arie Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: Dataless classification. In *Aaai*, volume 2, pages 830–835. Olivier Chapelle and Alexander Zien. 2005. Semisupervised classification by low density separation. In *International workshop on artificial intelligence* and statistics, pages 57–64. PMLR. Hui Chen, Wei Han, and Soujanya Poria. 2022. SAT: Improving semi-supervised text classification with simple instance-adaptive self-training. In *Findings* of the Association for Computational Linguistics: EMNLP 2022, pages 6141–6146, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. MixText: Linguistically-informed interpolation of hidden space for semi-supervised text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2147– 2157, Online. Association for Computational Linguistics. Danilo Croce, Giuseppe Castellucci, and Roberto Basili. 2019. Kernel-based generative adversarial networks for weakly supervised learning. In *International Conference of the Italian Association for Artificial Intelligence*, pages 336–347. Springer. Danilo Croce, Giuseppe Castellucci, and Roberto Basili. 2020. GAN-BERT: Generative adversarial learning for robust text classification with a bunch of labeled examples. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 2114–2119, Online. Association for Computational Linguistics. Hongyan Cui, Gangkun Wang, Yuanxin Li, and Roy E Welsch. 2022. Self-training method based on gcn for semi-supervised short text classification. *Information* Sciences, 611:18–29. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Xiaowei Gu. 2020. A self-training hierarchical prototype-based approach for semi-supervised classification. *Information Sciences*, 535:204–224. Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5880–5894, Florence, Italy. Association for Computational Linguistics. Hwiyeol Jo and Ceyda Cinarel. 2019. Delta-training: Simple semi-supervised text classification using pretrained word embeddings. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3458–3463, Hong Kong, China. Association for Computational Linguistics. Ju-Hyoung Lee, Sang-Ki Ko, and Yo-Sub Han. 2021. Salnet: Semi-supervised few-shot text classification with attention-based lexicon construction. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pages 13189–13197. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic web, 6(2):167–195. Chen Li, Xutan Peng, Hao Peng, Jianxin Li, and Lihong Wang. 2021. Textgtl: Graph-based transductive learning for semi-supervised text classification via structure-sensitive interpolation. In *IJCAI*, pages 2680–2686. Junnan Li, Pan Zhou, Caiming Xiong, and Steven CH Hoi. 2020. Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966. Shujie Li, Min Yang, Chengming Li, and Ruifeng Xu. 2022. Dual pseudo supervision for semi-supervised text classification with a reliable teacher. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information* Retrieval, pages 2513–2518. Hu Linmei, Tianchi Yang, Chuan Shi, Houye Ji, and Xiaoli Li. 2019. Heterogeneous graph attention networks for semi-supervised short text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4821– 4830, Hong Kong, China. Association for Computational Linguistics. Chen Liu, Zhang Mengchao, Fu Zhibing, Panpan Hou, and Yu Li. 2021. FLiText: A faster and lighter semisupervised text classification with convolution networks. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 2481–2491, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *International Conference on Learning Representations*. Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 142–150. Subhabrata Mukherjee and Ahmed Hassan Awadallah. 2020. Uncertainty-aware self-training for text classification with few labels. arXiv preprint arXiv:2006.15315. Amir Najafi, Shin-ichi Maeda, Masanori Koyama, and Takeru Miyato. 2019. Robustness to adversarial perturbations in learning from incomplete data. *Advances in Neural Information Processing Systems*, 32. Yingwei Pan, Ting Yao, Yehao Li, Yu Wang, ChongWah Ngo, and Tao Mei. 2019. Transferrable prototypical networks for unsupervised domain adaptation. CoRR, abs/1904.11227. Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. 2018. Metalearning for semi-supervised few-shot classification. CoRR, abs/1803.00676. Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. *Advances in neural information processing systems*, 30. 
Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. 2020. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596–608. Austin Cheng-Yun Tsai, Sheng-Ya Lin, and Li-Chen Fu. 2022. Contrast-enhanced semi-supervised text classification with few labels. Ximei Wang, Jinghan Gao, Mingsheng Long, and Jianmin Wang. 2021. Self-tuning for data-efficient deep learning. In International Conference on Machine Learning, pages 10738–10748. PMLR. Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. *Advances in Neural* Information Processing Systems, 33. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28:649–657. | Dataset | Supervised (all labels) | Supervised (10 labels) | Semi-supervised (10 labels) | | | | |--------------|---------------------------|--------------------------|-------------------------------|------------|---------------|------| | BERT | RoBERTa | BERT | RoBERTa | PGPL(BERT) | PGPL(RoBERTa) | | | AG News | 91.2 | 92.4 | 80.2 | 80.7 | 87.8 | 88.4 | | IMDB | 90.4 | 93.5 | 70.9 | 71.2 | 88.9 | 91.2 | | Yahoo!Answer | 73.7 | 74.2 | 60.1 | 61.0 | 67.4 | 67.8 | | DbPedia | 99.1 | 99.1 | 96.6 | 96.1 | 98.7 | 98.8 | | Average | 88.6 | 89.8 | 76.9 | 77.3 | 85.7 | 86.6 | Table 7: Comparison with state-of-the-arts on the AG News, DBpedia, Yahoo! Answer and IMDB test set under different partition protocols. The results are averaged after three runs. ## A Hyper-Parameter Settings For reproduction, we report our hyper-parameter settings in Table 8. The initial learning rate is tuned in [1e−6, 1e−5] for BERT parameters and [1e−4, 1e−3] for other parameters. The threshold β2 and threshold λ are tuned in [0.1, 1]. Note that the hyper-parameter settings are tuned on the validation data by grid search with 3 trials. | Hyper-parameter | PGPL | |-----------------------------------|---------------| | type embedding dimension d | 768 | | Bert attention dropout | 0.1 | | Bert hidden dropout | 0.1 | | MLP hidden dimension | 128 | | Sequence Length | 256 | | batch size on labeled data | class numbers | | batch size on unlabeled data | 16 | | training epoch | 20 | | initial learning rate of BERT | 1e −5 | | learning rate of MLP | 1e −3 | | threshold λ (AG News,IMDB) | 0.2 | | threshold λ(DBpedia,Yahoo!Answer) | 0.5 | | threshold β1 | 1 | | threshold β2 | 0.5 | | t memory set | 5 | Table 8: Hyper-parameter settings of PGPL. ## B Evaluation Results With Other Pre-Trained Models In order to evaluate our model on different pretraining language models, we conduct three different experiments on supervised and semi-supervised settings: (1) supervised learning with all labels; (2) supervised learning with 10 labels; (3) and seimisupervised with 10 labels. The experimental results are shown in Table 7. It is observed that all results with RoBERTa is better that those with BERT. In semi-supervised setting, PGPL is better than supervised setting with 10 available labels, and also not far behind the results of full supervised setting with all labels. In other words, PGPL is very efficient when exploiting few labeled data and achieves almost similar results to the supervised setting that fully exploit all labeled data. 
This indicates that PGPL is more practical in real-world scenarios where annotated data is limited. ## C Evaluation Results When Varying Number Of Unlabeled Data Accuracy(%) ![11_image_0.png](11_image_0.png) Figure 5: The accuracy of PGPL when varying the number of unlabeled data on AG News and Yahoo!Answer Datasets. We also conduct experiments to evaluate our model performances with fixed number of labeled data and different number of unlabeled data. Specifically, the former uses 10 labels and the latter uses between 2,000 and 10,000 labels on AG News and Yahoo! Answer. The results in Figure 5 show that with more unlabeled data, the accuracy increases significantly on both datasets, which further verify the effectiveness of utilizing unlabeled data. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: A1. Did you describe the limitations of your work? Left blank. A2. Did you discuss any potential risks of your work? Left blank. A3. Do the abstract and introduction summarize the paper's main claims? Left blank. A4. Have you used AI writing assistants when working on this paper? Left blank. ## B **Did You Use Or Create Scientific Artifacts?** Left blank. B1. Did you cite the creators of artifacts you used? Left blank. B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank. B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Left blank. B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Left blank. B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Left blank. B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank. ## C **Did You Run Computational Experiments?** Left blank. C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank. C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank. C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 
Left blank. D **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank. D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
maddela-etal-2023-lens
LENS: A Learnable Evaluation Metric for Text Simplification
https://aclanthology.org/2023.acl-long.905
Training learnable metrics using modern language models has recently emerged as a promising method for the automatic evaluation of machine translation. However, existing human evaluation datasets for text simplification have limited annotations that are based on unitary or outdated models, making them unsuitable for this approach. To address these issues, we introduce the SimpEval corpus that contains: SimpEval_past, comprising 12K human ratings on 2.4K simplifications of 24 past systems, and SimpEval_2022, a challenging simplification benchmark consisting of over 1K human ratings of 360 simplifications including GPT-3.5 generated text. Training on SimpEval, we present LENS, a Learnable Evaluation Metric for Text Simplification. Extensive empirical results show that LENS correlates much better with human judgment than existing metrics, paving the way for future progress in the evaluation of text simplification. We also introduce Rank & Rate, a human evaluation framework that rates simplifications from several models in a list-wise manner using an interactive interface, which ensures both consistency and accuracy in the evaluation process and is used to create the SimpEval datasets.
# Lens **: A Learnable Evaluation Metric For Text Simplification** Mounica Maddela∗ , Yao Dou∗ , David Heineman, Wei Xu School of Interactive Computing Georgia Institute of Technology {mmaddela3, douy, david.heineman}@gatech.edu; wei.xu@cc.gatech.edu http://lens-score.com/ ## Abstract Training learnable metrics using modern language models has recently emerged as a promising method for the automatic evaluation of machine translation. However, existing human evaluation datasets for text simplification have limited annotations that are based on unitary or outdated models, making them unsuitable for this approach. To address these issues, we introduce the SIMPEVAL corpus that contains: SIMPEVALPAST, comprising 12K human ratings on 2.4K simplifications of 24 past systems, and SIMPEVAL2022, a challenging simplification benchmark consisting of over 1K human ratings of 360 simplifications including GPT3.5 generated text. Training on SIMPEVAL, we present LENS, a Learnable Evaluation Metric for Text Simplification. Extensive empirical results show that LENS correlates much better with human judgment than existing metrics, paving the way for future progress in the evaluation of text simplification. We also introduce RANK & RATE, a human evaluation framework that rates simplifications from several models in a list-wise manner using an interactive interface, which ensures both consistency and accuracy in the evaluation process and is used to create the SIMPEVAL datasets. ## 1 Introduction Text simplification is a text-to-text generation task that aims to make a text easier to read while preserving its original meaning (Saggion, 2017). Automatic evaluation of text simplification is challenging because a sentence can be simplified in many ways, such as paraphrasing complex words, deleting insignificant information, and splitting long sentences into shorter ones. An ideal automatic metric should accommodate these diverse choices while capturing semantic similarity and fluency. However, existing metrics such as SARI (Xu et al., 2016) and BERTScore (Zhang et al., 2020) struggle ∗Equal contribution. New Benchmark!New Benchmark! Original Sentence from our SimpEvaL2022: ![0_image_0.png](0_image_0.png) The book is illustrated with more than 200 newly commissioned color ![0_image_1.png](0_image_1.png) Figure 1: Automatic metric and human evaluation scores on simplifications of a complex sentence from our SIMPEVAL2022. LENS achieves the best correlation with humans. SARI penalizes simplifications that deviate from the reference (first two examples), and BERTScore fails to penalize hallucinations with high lexical overlap (third example). to capture all the aspects and achieve a high correlation with human evaluation (Alva-Manchego et al., 2021) (see Figure 1). These metrics fail even more when evaluating high-quality systems that have close performance, calling for a more robust and accurate metric for text simplification. Prior work on machine translation (Rei et al., 2020; Sellam et al., 2020) has seen the success of using language models as an automatic metric by training them on human judgments, such as Direct Assessment (DA) that rates generations on a 0-100 scale. However, existing human evaluation datasets (Alva-Manchego et al., 2021; Sulem et al., 2018) for text simplification are not suitable for training metrics because they include a limited number of annotations or systems. 
Besides, these datasets do not include state-of-the-art generation models such 16383 as GPT-3.5 (Ouyang et al., 2022) that have been shown to generate human-like quality text (Dou et al., 2022; Goyal et al., 2022). In this work, we introduce LENS, a Learnable Evaluation Metric for Text Simplification. LENS is the first supervised metric for the task and uses an adaptive ranking loss to promote fair comparison of simplifications that have undergone different edits (i.e., splitting, paraphrasing, and deletion). To train LENS, we collect 12K human judgments on 2.4K simplifications by 24 simplification systems from the literature, which we name as the SIMPEVALPAST dataset. We also create SIMPE-VAL2022 to evaluate LENS and other metrics in a realistic and challenging setting of assessing simplifications by state-of-the-art language models on the more recent, complex, and longer sentences published on Wikipedia after Oct 22nd, 2022. SIM-PEVAL2022 contains over 1K human ratings on 360 simplifications generated by 4 SOTA models, including GPT-3.5, and 2 humans. Empirical experiments show that LENS achieves a higher correlation of 0.331 with human ratings on SIMPEVAL2022, which is more than twice as high as the correlation scores of 0.112 and 0.149 by BERTScore and SARI, respectively. We further demonstrate that incorporating LENS into decoding process, using minimum Bayes risk framework (Fernandes et al., 2022), can directly improve the automatic text simplification system's performance. We expect that our data collection method, including RANK & RATE, a list-wise human evaluation interface, can be easily adapted to other text generation tasks. ## 2 Background Issues of Existing Automatic Metrics. SARI (Xu et al., 2016) is the most commonly used metric for text simplification that computes F1/precision scores of the n-grams inserted, deleted, and kept when compared to human references. As SARI measures n-gram string overlap, it penalizes simplifications that are synonymous to the reference but uses different words (see first two examples in Figure 1). Alva-Manchego et al. (2021) showed that BERTScore (Zhang et al., 2020), which measures similarity based on BERT (Devlin et al., 2019) embeddings, is better at capturing semantic similarity between the simplification system's outputs and the references. However, it fails to penalize conservative systems that make trivial or no changes to the ![1_image_0.png](1_image_0.png) input as illustrated in Figure 2. This figure shows average metric scores by SARI and BERTScore for a conservative system that copies the input (*Copy*), an oracle system of human simplification (*Human*) 1, and three disfluent corrupted systems that drop 10% of the input words (*Drop*), scramble 5% of the input words (*Scramble*), or insert a period in the middle (*Split*). BERTScore ranks some conservative or corrupted systems above human simplifications. SARI penalizes the conservative system because it focuses on the edits performed by the system but ranks the disfluent systems above the conservative system. An ideal automatic metric for text simplification should capture both semantic similarity and the edits performed by the system. Lack of Human Evaluation Data. There is a lack of suitable human evaluation datasets for finetuning pre-trained language models for simplification evaluation. Alva-Manchego et al. (2021) released 0-100 continuous scale ratings on fluency, meaning preservation, and simplicity for only 600 simplifications sampled from six systems. Sulem et al. 
(2018) released 5-point Likert scale ratings on the same three dimensions for 1,960 simplifications generated by different configurations of six systems. Both datasets are relatively small and cover too few different system designs. For example, they do not include the newer state-of-the-art systems (Sheang and Saggion, 2021; Martin et al., 2022) that are based on T5 (Raffel et al., 2020) and other large pre-trained language models. ## 3 Automatic Evaluation Metric To tackle the challenge of limited human evaluation data, we curate SIMPEVAL, a corpus containing over 13K human judgements on 2.8K simplification from 26 systems. This facilitates the training and evaluation of LENS (A Learnable Evaluation Metric for Text Simplification), the first supervised automatic metric for text simplification evaluation. In this section, we first describe the creation of SIMPEVAL datasets in §3.1 and then LENS in §3.2. ## 3.1 Collecting Human Judgements We collect SIMPEVALPAST, containing 12K human ratings on 2.4K simplifications from 24 systems on sentences from TurkCorpus (Xu et al., 2016), to train LENS. For evaluating LENS and other simplification metrics, we create SIMPEVAL2022 that consists of 1,080 human ratings on 360 simplifications from both humans and SOTA models, including GPT-3.5. It features more complex sentences from Wikipedia written after Oct 22nd, 2022, very recent to the time we conduct the experiments, to reduce the risk of "data contamination" (i.e., appearing in the training data of LLMs) and serve as a more challenging test bed for large language models. Table 1 shows the summary of both datasets. ## A Diverse Set Of Simplification Systems. We consider the following systems (further details in Appendix C): (i) two GPT-3.52 outputs under zero-shot and 5-shot settings; (ii) eight fine-tuned Transformer-based systems of varied sizes and parameter settings (Sheang and Saggion, 2021; Raffel et al., 2020; Martin et al., 2020; Maddela et al., 2021); (iii) three supervised BiLSTM-based systems that use vanilla RNN, reinforcement learning (Zhang and Lapata, 2017), or explicit editing (Dong et al., 2019); (iv) one unsupervised and one semisupervised system utilizing auto-encoders (Surya et al., 2019); (v) two systems that apply statistical machine translation approaches to simplification (Wubben et al., 2012; Xu et al., 2016); (vi) a rule-based system (Kumar et al., 2020); (vii) three hybrid systems that combine linguistic rules with data-driven methods (Narayan and Gardent, 2014; Sulem, 2018; Maddela et al., 2021); (viii) two naive baselines that copy the input or scramble 5% of the input words; and (ix) six human-written simplifications, including two from ASSET (Alva-Manchego 2Specifically, we use the text-davinci-003 model which is the most recent variant with 175B parameters. We provide the prompts we used in Appendix D.2. | System | Arch. | Data | Human Avg. 
Raw Z-Score | | |--------------------------------------------------------------------------------------|--------------|-------------|--------------------------|--------| | SIMPEVALPAST (24 systems on 100 original sentences) Human-1 (2020) Human ASSET 86.69 | 0.783 | | | | | Human-2 (2020) | Human | ASSET | 86.12 | 0.711 | | MUSS (2022) | BART-large | WikiLarge | 84.48 | 0.653 | | + Mined Data | | | | | | ControlT5 (2021) | T5-base | WikiLarge | 84.70 | 0.650 | | T5-3B (2020) | T5 | WikiAuto | 82.79 | 0.492 | | T5-large | T5 | WikiAuto | 81.86 | 0.453 | | T5-base | T5 | WikiAuto | 82.15 | 0.443 | | Transformer (2017) | BERT-TF | WikiAuto | 79.42 | 0.366 | | Controllable (2021) | BERT-TF | WikiAuto | 79.44 | 0.323 | | Human-3 (2016) | Human | TurkCorpus | 79.54 | 0.281 | | Human-4 | Human | Simple Wiki | 78.36 | 0.249 | | DRESS (2017) | BiLSTM+RL | WikiLarge | 77.18 | 0.206 | | Copy | - | - | 76.81 | 0.103 | | ACCESS (2020) | Transformer | WikiLarge | 73.25 | 0.001 | | SBMT-SARI (2016) Statistic MT | PWKP | 73.66 | -0.014 | | | PBMT-R (2012) | Statistic MT | PWKP | 72.44 | -0.066 | | EditNTS (2019) | BiLSTM | WikiLarge | 70.15 | -0.162 | | BiLSTM | BiLSTM | WikiLarge | 68.35 | -0.245 | | SEMosses (2018) | LSTM | WikiLarge | 62.84 | -0.565 | | UNTS (2019) | RNN | Wiki dump | 62.66 | -0.596 | | UNMT (2018) | RNN | Wiki dump | 60.43 | -0.673 | | Rule-based (2020) | - | - | 60.15 | -0.687 | | Hybrid (2014) | Statistic MT | PWKP | 55.36 | -0.925 | | Scramble | - | - | 35.17 | -1.954 | | SIMPEVAL2022 (6 systems on 60 original sentences) Human-5 Human Ours 81.87 | 0.424 | | | | | Human-6 | Human | Ours | 81.57 | 0.395 | | GPT-3.5 (2022) | InstructGPT | 5-shot | 79.59 | 0.280 | | GPT-3.5 (2022) | InstructGPT | 0-shot | 75.27 | -0.025 | | MUSS (2022) | BART-large | See above | 70.74 | -0.333 | | T5-3B (2020) | T5 | WikiAuto | 64.98 | -0.700 | Table 1: SIMPEVALPAST and SIMPEVAL2022 datasets of human evaluation data that covers a wide range of simplification systems. MUSS is the best-performing model in SIMPEVALPAST with a small gap to humans but is much worse on the more challenging sentences in SIM-PEVAL2022 where GPT-3.5 performs better. BERT-TF: BERT-base initialized Transformer. Z-scores (Graham et al., 2013; Akhbardeh et al., 2021) are standardized based on each rater's mean and standard deviation. et al., 2020), one from TurkCorpus (Xu et al., 2016), one from Simple Wikipedia that were automatically aligned (Kauchak, 2013), and two newly written by our trained in-house annotators. Complex Sentences Selection. For SIMPE-VALPAST, we sample 100 complex sentences from the test set of the widely-used TurkCorpus (Xu et al., 2016) and ASSET (Alva-Manchego et al., 2020) evaluation benchmarks, which share the same set of complex sentences but have different human simplifications in terms of edit operations. ASSET contains 10 human references for each complex sentence that are used to train LENS. GPT is pre-trained on a vast amount of web text, and the sentences in TurkCorpus/ASSET are derived from the 12-year-old Parallel Wikipedia Simplification corpus (Zhu et al., 2010), which may have already been encountered by GPT-3.5. To address this "data contamination" issue and provide a more challenging benchmark for state-of-the-art models, we manually select 60 long, complex sentences covering recent world events such as the Twitter acquisition and World Cup for SIMPEVAL2022. 3 These sentences are from new revisions or articles added to Wikipedia between October 22nd and November 24th, 2022. 
They have an average length of 38.5 tokens, much longer than the 19.7-token average of TurkCorpus and ASSET. Data Annotation. We ask in-house annotators to rate each simplification on a single 0-100 overall quality scale using our RANK & RATE framework (more details in §6). We provided an extensive tutorial and two training sessions to the annotators, which involved rating system outputs for five input sentences (screenshots in Appendix F). We periodically inspected the annotations to prevent the deterioration of quality over time. All annotators are English speakers who are university students. Each simplification in SIMPEVALPAST receives 5 ratings, while each simplification in SIMPEVAL2022 is rated by 3 annotators. We follow WMT (Akhbardeh et al., 2021) to normalize the raw scores by the mean and standard deviation of each rater, also known as the z-score, which are later used to train LENS. The inter-annotator agreement for SIMPE-VALPAST is 0.70 using the interval Krippendorff's α on z-scores, and 0.32 for SIMPEVAL2022, partly because it contains GPT-3.5 outputs that are quite competitive with human. Both are considered fair to good agreement (Krippendorff, 2004). ## 3.2 A New Learnable Metric - Lens Given an input text c, the corresponding system output s, and a set of n references R = {r1, r2*, . . . , r*n}, LENS produces a real-valued score zmax = max1≤i≤n(zi) that maximizes over the quality scores zi of s in regards to each reference ri. Our model encodes all texts into vectors (c, s, ri) using Transformer-based encoders such as RoBERTa (Liu et al., 2019), then combines them into an intermediate representation H = -s; ri; s ⊙ c; s ⊙ ri; |s − c|; |s − ri| by concatenation ([; ]) and element-wise product (⊙), which is then fed to a feedforward network to predict zi. For training, besides considering all references equally (i.e., LENSall when k = n in Eq. (1)), we also adopt a reference-adaptive loss that selects a subset of references closer to s in terms of edit operations rather than the entire set R. It encourages the metric to consider that different simplifications (e.g., paraphrasing-focused, deletion-focused, with or without splitting) can be acceptable, as long as they are close to some (not necessarily all) of the human references. We compute this loss (L*adapt*) as: $$L_{a d a p t}=\frac{1}{k m}\sum_{j=1}^{m}\sum_{z_{l}\in Z^{\prime}}(h_{j}-z_{l})^{2}\qquad(1)$$ where h is human rating and m is the training data size. We compute the set of predicted scores Z = {z1, zi*, . . . , z*n} corresponding to references in R and then choose top k (k ≤ n) values from Z to form a subset Z′ ⊆ Z. Finally, we calculate the mean squared error (MSE) between the human rating h and each score in Z′. By selecting top k scores in Z, we focus the training of metric on references similar to s in terms of writing style or editing operations. This loss also aligns the training step with the inference step, where we select the best-predicted score zmax corresponding to R. Although multiple references are ideal, the proposed loss can also use a single reference, similar to the standard MSE loss. We train LENS on our SIM-PEVALPAST dataset (details in §3.1). We provide further implementation details in Appendix A. Similar to the COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020) metrics for MT evaluation, LENS is trained on human ratings after zscore normalization, which are real values predominantly lie between [-3, 3]. 
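As a minimal sketch (not the released implementation) of the architecture and training objective just described: `encode` below stands in for the RoBERTa sentence encoder producing a single vector per text, the two-layer scoring head is an assumption since the paper does not fix the feed-forward architecture, and k is set to 3 purely for illustration.

```python
import torch
import torch.nn as nn

class LENSSketch(nn.Module):
    # Illustrative sketch of the LENS scoring head and the reference-adaptive
    # loss of Eq. (1); the feature combination follows
    # H = [s; r; s*c; s*r; |s-c|; |s-r|].
    def __init__(self, encode, dim: int, k: int = 3):
        super().__init__()
        self.encode, self.k = encode, k
        self.head = nn.Sequential(nn.Linear(6 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def scores(self, complex_sent, simplification, references):
        c, s = self.encode(complex_sent), self.encode(simplification)
        z = []
        for ref in references:
            r = self.encode(ref)
            h = torch.cat([s, r, s * c, s * r, (s - c).abs(), (s - r).abs()], dim=-1)
            z.append(self.head(h).squeeze(-1))
        return torch.stack(z)                                    # one z_i per reference

    def adaptive_loss(self, complex_sent, simplification, references, human_rating):
        z = self.scores(complex_sent, simplification, references)
        top_k = torch.topk(z, k=min(self.k, z.numel())).values   # subset Z' of Eq. (1)
        return ((human_rating - top_k) ** 2).mean()

    def predict(self, complex_sent, simplification, references):
        return self.scores(complex_sent, simplification, references).max()  # z_max at inference
```

At inference, `predict` returns z_max over whatever references are available, which is why the metric can be used with a single reference or many.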
To make LENS more interpretable, we rescale the predicted scores to the range between [0, 100] as the percentage of area under a normal curve of mean 0 and standard deviation 1 corresponding to zi. We present the rescaled scores for experiments and analyses in this paper. Besides, we use RoBERTa-large as the underlying model of LENS throughout the paper, unless otherwise specified (§5). ## 4 Experiments We benchmark LENS metric against the existing automatic metrics (i) for evaluating text simplification system outputs and (ii) for training better | SIMPEVAL2022 | WIKI-DA | NEWSELA-LIKERT | | | | | | | | |----------------|-----------|------------------|---------|---------|------------|---------|---------|------------|--------| | τpara | τspl | τall | Fluency | Meaning | Simplicity | Fluency | Meaning | Simplicity | | | FKGL | -0.556 | -0.31 | -0.356 | 0.054 | 0.145 | 0.001 | 0.193 | 0.306 | -0.051 | | BLEU | 0.048 | -0.054 | -0.033 | 0.460 | 0.622 | 0.438 | 0.332 | 0.261 | 0.118 | | SARI | 0.206 | 0.140 | 0.149 | 0.335 | 0.534 | 0.366 | 0.234 | 0.124 | 0.094 | | BERTScore | 0.238 | 0.093 | 0.112 | 0.636 | 0.682 | 0.614 | 0.384 | 0.274 | 0.215 | | LENSall | 0.333 | 0.233 | 0.241 | 0.816 | 0.662 | 0.733 | 0.655 | 0.477 | 0.343 | | LENSk=1 | 0.460 | 0.295 | 0.307 | 0.796 | 0.647 | 0.721 | 0.633 | 0.444 | 0.328 | | LENSk=3 | 0.429 | 0.333 | 0.331 | 0.807 | 0.668 | 0.749 | 0.624 | 0.428 | 0.359 | ![4_image_0.png](4_image_0.png) automatic simplification models when used as alternative reward functions. ## 4.1 Correlation With Human Evaluation We demonstrate that LENS correlates better with human judgments than the existing metrics. Evaluation Datasets. We compare LENS to the existing metrics, namely SARI (Xu et al., 2016), BERTScore (Zhang et al., 2020),4 BLEU, and Flesch-Kincaid Grade Level readability (FKGL) (Kincaid, 1975) on three datasets: SIMPE-VAL2022 (§3.1), WIKI-DA released by AlvaManchego et al. (2021) with 0-100 continuous scale ratings on fluency, meaning preservation, and simplicity for 600 simplifications across six systems, and NEWSELA-LIKERT collected by Maddela et al. (2021) with 5-point Likert scale ratings on the same dimensions for 500 simplifications across five systems. While SIMPEVAL2022 and WIKI-DA are derived from Wikipedia, NEWSELALIKERT is derived from news articles in Newsela (Xu et al., 2015), a widely used corpus for text 4We use precision score of BERTScore as it has been shown to perform better for simplification evaluation (AlvaManchego et al., 2021). simplification. When calculating metric scores for SIMPEVAL2022 that contains two human simplifications, we use one as the reference and the other as the oracle simplification system. We remove the complex sentences in WIKI-DA that overlap with SIMPEVALPAST, the training dataset for LENS. Evaluation Setup. We report Kendall Tau-like correlation for SIMPEVAL2022 to capture the ability of metrics to distinguish two systems, which is close to the real-world scenario. Kendall Taulike correlation is predominantly used in machine translation metric evaluation at WMT (Bojar et al., 2017; Ma et al., 2018) for the same reason as it focuses on the relative ranking of the outputs. Given an input c and its simplifications from N systems S = {s1, . . . , sm, . . . , sn*, . . . , s*N }, we extract (sm, sn) pairs, where 1 ≤ *m < n* ≤ N, and calculate Kendall Tau-like coefficient τ : $ \tau=\frac{|Concordant|-|Discordant|}{|Concordant|+|Discordant|}$ (2) . 
where *Concordant* is the set of pairs where the metric ranked (sm, sn) in the same order as humans ranked and *Discordant* is the set of the pairs where the metric and humans disagreed. We report separate coefficients for paraphrase- (τ**para**), and split-focused (τspl) simplifications, along with (τall) for all of them. Following the literature, we only used the (sm, sn) pairs for which the difference in ratings is more than 5 out of the 100 points, and all the annotators agreed on the ranking order. To make results comparable to existing work (Alva-Manchego et al., 2021, 2020) that evaluated on WIKI-DA and NEWSELA-LIKERT, we report Pearson correlation (ρ) between the metric scores and the human ratings. Results. Table 2 shows that LENS outperforms the existing metrics on SIMPEVAL2022 and WIKIDA belonging to the Wikipedia domain, and NEWSELA-LIKERT based on newswire domain. The difference is more substantial on SIMPE-VAL2022 consisting of similar performing SOTA systems, where the τall of LENSk=3 exceeds the τall of BERTScore by 0.22 points. Training using top k references (LENSk=3) has improved τall on SIMPEVAL2022 and ρ along the simplicity dimension on the rest when compared to using all the references (LENSall). Figure 3 shows the 95% confidence intervals for τall on SIMPEVAL2022 and ρ for simplicity, which is deemed to be the most important dimension in prior work (Alva-Manchego et al., 2021; Xu et al., 2016), on WIKI-DA and NEWSELA-LIKERT calculated using bootstrapping methods by Deutsch et al. (2021). A smaller interval indicates that the metric exhibits lower variance and higher reliability. LENS exhibits smaller intervals than the other metrics. ## 4.2 Lens **As Training Or Decoding Objectives** We also incorporate LENS into training as an alternative reward function in minimum risk training framework (Kumar and Byrne, 2004; Smith and Eisner, 2006) and into decoding as a reranking objective in minimum Bayes risk decoding (Fernandes et al., 2022; Freitag et al., 2022) to improve the quality of generated simplifications. Minimum Risk Training (MRT). Given the input c and reference r, we generate a set of candidates S for each training step and calculate the expected risk (L*risk*) as follows: $$L_{risk}=\sum_{s\in S}cost(c,r,s)\frac{P(s|c)}{\sum_{s^{\prime}\in S}P(s^{\prime}|c)}\tag{3}$$ $$P(s|c)=\prod_{t=1}^{T}P(s_{t}|c,s_{<t};\theta)$$ $$cost(c,r,s)=1-Metric(c,r,s)$$ | BL | SARI | BS | LENS | sBL ↓ | Human | | |----------------------------------------------------------|--------|-------|--------|---------|---------|-------| | MLE | 44.4 | 47.5 | 94.7 | 61.0 | 70.7 | 68.71 | | Minimum Risk Training (MRT) BL 45.1 47.3 94.7 60.5 | 71.3 | 68.33 | | | | | | SARI | 43.4 | 48.5 | 94.5 | 63.2 | 68.1 | 70.86 | | BS | 44.0 | 48.4 | 94.6 | 62.1 | 67.8 | 70.32 | | LENS | 42.6 | 47.4 | 94.5 | 63.9 | 67.1 | 68.89 | | Minimum Bayes Risk Decoding (MBR) BL 44.8 48.3 94.4 60.4 | 74.1 | 66.19 | | | | | | SARI | 43.9 | 49.6 | 94.5 | 59.3 | 75.5 | 64.31 | | BS | 44.3 | 48.3 | 94.6 | 61.1 | 73.1 | 66.79 | | LENS | 43.2 | 49.5 | 94.7 | 73.0 | 61.8 | 72.80 | Table 3: Evaluation results for minimum risk training (MRT) and minimum Bayes risk decoding (MBR) using T5-base model on SIMPEVAL2022. We report LENS, BLEU (BL), **SARI**, BERTScore (BS), selfBLEU (sBL), and average human ratings. MBR with LENS shows the best human evaluation results despite making the most number of edits to the input as indicated by its lowest self-BLEU. We use beam search with beam size of 10 for MLE. 
## 4.2 LENS As Training Or Decoding Objectives

We also incorporate LENS into training, as an alternative reward function in the minimum risk training framework (Kumar and Byrne, 2004; Smith and Eisner, 2006), and into decoding, as a reranking objective in minimum Bayes risk decoding (Fernandes et al., 2022; Freitag et al., 2022), to improve the quality of generated simplifications.

Minimum Risk Training (MRT). Given the input c and reference r, we generate a set of candidates S at each training step and calculate the expected risk (Lrisk) as follows:

$$L_{risk}=\sum_{s\in S}cost(c,r,s)\frac{P(s|c)}{\sum_{s^{\prime}\in S}P(s^{\prime}|c)}\tag{3}$$
$$P(s|c)=\prod_{t=1}^{T}P(s_{t}|c,s_{<t};\theta)$$
$$cost(c,r,s)=1-Metric(c,r,s)$$

where Metric(c, r, s) can be any evaluation metric, θ are the model parameters, and s<t = s1, . . . , st−1 is a partial generation of s.

| | BL | SARI | BS | LENS | sBL ↓ | Human |
|---|---|---|---|---|---|---|
| MLE | 44.4 | 47.5 | 94.7 | 61.0 | 70.7 | 68.71 |
| *Minimum Risk Training (MRT)* | | | | | | |
| BL | 45.1 | 47.3 | 94.7 | 60.5 | 71.3 | 68.33 |
| SARI | 43.4 | 48.5 | 94.5 | 63.2 | 68.1 | 70.86 |
| BS | 44.0 | 48.4 | 94.6 | 62.1 | 67.8 | 70.32 |
| LENS | 42.6 | 47.4 | 94.5 | 63.9 | 67.1 | 68.89 |
| *Minimum Bayes Risk Decoding (MBR)* | | | | | | |
| BL | 44.8 | 48.3 | 94.4 | 60.4 | 74.1 | 66.19 |
| SARI | 43.9 | 49.6 | 94.5 | 59.3 | 75.5 | 64.31 |
| BS | 44.3 | 48.3 | 94.6 | 61.1 | 73.1 | 66.79 |
| LENS | 43.2 | 49.5 | 94.7 | 73.0 | 61.8 | 72.80 |

Table 3: Evaluation results for minimum risk training (MRT) and minimum Bayes risk decoding (MBR) using the T5-base model on SIMPEVAL2022. We report LENS, BLEU (BL), SARI, BERTScore (BS), self-BLEU (sBL), and average human ratings. MBR with LENS shows the best human evaluation results despite making the most edits to the input, as indicated by its lowest self-BLEU. We use beam search with a beam size of 10 for MLE.

Following previous work (Shen et al., 2016; Wieting et al., 2019), we first train the seq-to-seq generation model with maximum likelihood estimation (MLE) for a few epochs and then switch the loss to a combination of LMLE and the expected risk:

$$L_{MRT}=\gamma L_{MLE}+(1-\gamma)L_{risk}.\tag{4}$$

We choose γ = 0.3 and |S| = 10 for our experiments.

Minimum Bayes Risk Decoding (MBR). We adopt the MBR framework proposed by Fernandes et al. (2022), where we first generate a set of candidates S for input c during inference and then rerank them by comparing each candidate s ∈ S to all the other candidates in the set:

$$\hat{u}_{MBR}=\operatorname*{arg\,max}_{s\in S}\frac{1}{|S|}\sum_{s^{\prime}\in S}Metric(c,s^{\prime},s).\tag{5}$$

For our experiments, we generate the candidates using beam search with beam size = |S|.
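To make Eqs. 3-5 concrete, here is a minimal sketch of the expected-risk computation and the MBR reranking rule. It assumes per-candidate sequence log-probabilities and a generic metric(source, reference, candidate) callable are already available; it illustrates the equations above and is not the authors' released implementation.

```python
import torch

def expected_risk(source, reference, candidates, seq_log_probs, metric):
    """Expected risk over a candidate set (Eq. 3), with cost = 1 - Metric(c, r, s)."""
    costs = torch.tensor([1.0 - metric(source, reference, s) for s in candidates])
    # Softmax over sequence log-probabilities renormalizes P(s|c) over the set,
    # i.e., P(s|c) / sum_{s'} P(s'|c); gradients flow through these probabilities.
    probs = torch.softmax(seq_log_probs, dim=0)
    return (costs * probs).sum()

def mrt_loss(mle_loss, risk, gamma=0.3):
    """Combined MRT objective (Eq. 4), with gamma = 0.3 as in the paper."""
    return gamma * mle_loss + (1.0 - gamma) * risk

def mbr_select(source, candidates, metric):
    """MBR reranking (Eq. 5): score each candidate against all candidates
    in the set used as pseudo-references and return the best one (illustrative)."""
    def utility(s):
        return sum(metric(source, other, s) for other in candidates) / len(candidates)
    return max(candidates, key=utility)
```

Because the gradient of Lrisk flows through the renormalized candidate probabilities rather than through the metric itself, the metric does not need to be differentiable.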
Experiment Setup. We fine-tune a T5 model that prepends control tokens to the input (Sheang and Saggion, 2021) to control various aspects of the generated simplification; this model has shown state-of-the-art performance for the task. We use WIKIAUTO (Jiang et al., 2020) for training, ASSET for validation, and the challenging, complex sentences in SIMPEVAL2022 for testing. WIKIAUTO consists of 400k complex-simple sentence pairs extracted from Normal and Simple Wikipedia document pairs. For our experiments, we train T5 towards LENS, BLEU, SARI, and BERTScore. We provide more implementation details in Appendix B. We report the same metrics on the test set, along with self-BLEU to capture the diversity of the outputs. We also ask 3 annotators to rate the system outputs from SIMPEVAL2022 using our evaluation interface (§6) and report the averaged ratings.

| | BL | SARI | BS | LENS | sBL ↓ |
|---|---|---|---|---|---|
| *T5-base* | | | | | |
| MLEb=10 | 44.4 | 47.5 | 94.7 | 61.0 | 70.7 |
| MLEb=100 | 42.6 | 46.2 | 94.3 | 57.6 (-3.4) | 68.1 |
| MBR-LENS \|S\|=10 | 43.6 | 48.7 | 94.6 | 67.8 (+6.8) | 67.0 |
| MBR-LENS \|S\|=100 | 43.2 | 49.5 | 94.7 | 73.0 (+12.1) | 61.8 |
| *T5-3B* | | | | | |
| MLEb=10 | 42.3 | 46.3 | 94.8 | 60.3 | 61.9 |
| MLEb=100 | 39.7 | 42.5 | 94.0 | 53.4 (-6.9) | 65.4 |
| MBR-LENS \|S\|=10 | 41.3 | 46.1 | 94.7 | 66.9 (+6.6) | 58.8 |
| MBR-LENS \|S\|=100 | 42.3 | 47.7 | 94.9 | 72.6 (+12.3) | 55.7 |
| *T5-11B* | | | | | |
| MLEb=10 | 37.6 | 46.4 | 93.8 | 62.9 | 49.3 |
| MLEb=100 | 35.8 | 44.7 | 93.2 | 59.2 (-3.7) | 51.1 |
| MBR-LENS \|S\|=10 | 36.9 | 46.5 | 93.9 | 69.9 (+7.0) | 48.4 |
| MBR-LENS \|S\|=100 | 36.2 | 46.1 | 93.8 | 74.4 (+11.5) | 44.6 |

Table 4: Automatic evaluation results for minimum Bayes risk decoding (MBR) with different model sizes on SIMPEVAL2022. For standard MLE, we use beam search with beam sizes of 10 and 100.

| | BL | SARI | BS | LENS | sBL ↓ | Human |
|---|---|---|---|---|---|---|
| *T5-11B* | | | | | | |
| MLEb=10 | 37.6 | 46.4 | 93.8 | 62.9 | 49.3 | 88.80 |
| MBR-LENS \|S\|=100 | 36.2 | 46.1 | 93.8 | 74.4 | 44.6 | 90.13 |
| *Closed-source LLMs* | | | | | | |
| GPT-3.5 (0-shot) | 27.8 | 41.4 | 93.4 | 60.7 | 31.8 | 90.77 |
| GPT-3.5 (5-shot) | 30.5 | 42.4 | 94.1 | 69.0 | 33.2 | 92.70 |
| GPT-4 (0-shot) | 31.6 | 43.7 | 94.3 | 73.5 | 29.1 | 93.63 |

Table 5: Human evaluation results for the T5-11B and closed-source LLMs on SIMPEVAL2022. T5-11B with MBR-LENS decoding achieves the state-of-the-art open-source model performance, on par with GPT-3.5.

Results. Table 3 shows that LENS integrated into minimum Bayes risk decoding (MBR-LENS) achieves the best human evaluation results while making the most edits to the input (less copying), as indicated by its lowest self-BLEU score. Although MRT-LENS has slightly lower average human ratings than MRT-BERTScore and MRT-SARI, the pairwise comparison shows that they are very comparable: MRT-LENS generates 28% better, 28% worse, and 44% equal quality simplifications compared to MRT-SARI. Additionally, BERTScore and SARI show a decrease in quality when used in decoding compared to standard maximum likelihood estimation (MLE). Note that we use a beam size of 10 for MLE rather than a larger search space of beam size 100, because generation quality degrades with increased beam size, as shown in Table 4 and in existing literature (Stahlberg and Byrne, 2019; Meister et al., 2020).

Given the success of using LENS as the utility function of MBR decoding on T5-base, we further apply it to larger models, including T5-3B and T5-11B. As displayed in Table 4, MBR-LENS with 100 candidates improves over standard beam search by more than 11 LENS points across all model sizes. Although MBR may inflate the scores of the utility function it uses (Fernandes et al., 2022), our human evaluations confirm that T5-11B with MBR-LENS decoding exceeds standard beam search, thereby establishing state-of-the-art (SOTA) performance among open-source models. When compared to closed-source large language models, T5-11B with MBR-LENS achieves on-par performance with GPT-3.5.

## 5 LENS Analysis

In this section, we delve into the impact of the underlying model architecture and the number of references on the performance of LENS.

Model Architecture. Table 6 shows the Kendall Tau-like correlation (τpara, τspl, and τall) of the LENS metric trained on various encoders on SIMPEVAL2022. LENS metrics trained on RoBERTa encoders (Liu et al., 2019) perform better than their respective LENS metrics trained on the BERT encoders (Devlin et al., 2019). Among all models, LENS trained on RoBERTa-large achieves the highest overall correlation. It shows a substantial improvement in τspl with a trade-off in τpara, in comparison to RoBERTa-base.

| | #Param | τpara | τspl | τall |
|---|---|---|---|---|
| MiniLM | 66M | 0.143 | 0.310 | 0.277 |
| BERT-base | 110M | 0.461 | 0.240 | 0.258 |
| BERT-large | 340M | 0.461 | 0.279 | 0.298 |
| RoBERTa-base | 110M | 0.472 | 0.271 | 0.295 |
| RoBERTa-large | 340M | 0.429 | 0.333 | 0.331 |

Table 6: Kendall Tau-like correlations on SIMPEVAL2022 for LENS trained with different encoders.

Number of References. Figure 4 shows the Pearson correlation of the LENS metric on WIKI-DA for a varied number of references used during inference. Although the metric performs best with 10 references, we see only a slight drop with one reference, demonstrating that LENS is capable of evaluating with single or multiple references.

## 6 RANK & RATE Framework

To facilitate consistent evaluation and enable the training of LENS, we develop RANK & RATE, a human evaluation framework to assist annotators in efficiently comparing and rating many (>20) system outputs at once in a list-wise ranking manner.

## 6.1 Methodology

We describe the three-step annotation methodology for RANK & RATE (Figure 5) as follows:

Step 1 - Categorizing System Outputs. As there are different acceptable ways to simplify the input text, we first display the system outputs in groups as split-, deletion-, or paraphrase-focused based on the following criteria: (i) outputs with multiple sentences are split-focused, (ii) outputs with a compression ratio less than 0.5 or generated by deleting words from the original sentence are deletion-focused, and (iii) the rest are paraphrase-focused. Annotators can adjust the initial automatic categorization during the annotation process.
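As a concrete illustration, the initial grouping in Step 1 can be approximated with a few lines of code. The sketch below reimplements the criteria above for illustration only; the character-level compression ratio and the simple sentence and word checks are assumptions, not the exact logic behind the interface.

```python
import re

def categorize(source: str, output: str) -> str:
    """Approximate initial split/deletion/paraphrase grouping (Step 1, illustrative)."""
    # (i) Outputs containing multiple sentences are split-focused.
    if len(re.findall(r"[.!?]+(?:\s|$)", output.strip())) > 1:
        return "split"
    # (ii) Deletion-focused: heavily compressed, or formed purely by deleting words.
    compression_ratio = len(output) / max(len(source), 1)  # character-level (assumed)
    src_words, out_words = source.split(), output.split()
    only_deletions = all(w in src_words for w in out_words)  # rough approximation
    if compression_ratio < 0.5 or only_deletions:
        return "deletion"
    # (iii) Everything else is paraphrase-focused.
    return "paraphrase"
```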
Step 2 - Highlighting Edits. To help annotators notice the changes made by each system and thereby improve the accuracy of their ratings, we use a state-of-the-art word alignment model by Lan et al. (2021) to highlight the edits performed by the system and classify them into three types: (i) Deletion-edits are phrases or words with no match in the modified output, (ii) Paraphrase-edits are new phrases or words added or modified, and (iii) Split-edits are any added periods ("."). Deletion-edits are marked with a red caret ("∧"), paraphrase-edits with bolded text, and split-edits with two vertical bars (" || ") (see Figure 5). We ask annotators to correct misclassified edits using an interactive pop-up window (see Appendix F).

Step 3 - Ranking and Rating System Outputs. Following the machine translation evaluation at WMT (Ma et al., 2018, 2019; Barrault et al., 2020; Akhbardeh et al., 2021), we ask annotators to rate the quality of system outputs on a 0-100 continuous scale instead of the 5-point Likert scale, because the former has been shown to provide higher levels of inter-annotator consistency (Novikova et al., 2018) and is better suited to statistical modeling (Graham et al., 2013). Our interactive interface allows the annotators to move the system outputs up and down to rank them and to compare similar-quality outputs more easily by placing them together. We provide the annotation instructions and screenshots of the interface in Appendix F.

| | SIMPLIKERT2022 | | | | SIMPDA2022 | | | | SIMPEVAL2022 |
|---|---|---|---|---|---|---|---|---|---|
| | Fluency | Adequacy | Simplicity | Avg. | Fluency | Adequacy | Simplicity | Avg. | Overall |
| Human-1 | 4.69 | 4.46 | 1.36 | 3.50 | 92.14 | 88.87 | 83.91 | 88.30 | 81.57 |
| Human-2 | 4.72 | 4.48 | 1.39 | 3.53 | 92.84 | 88.23 | 84.50 | 88.53 | 81.87 |
| GPT-3.5 (5-shot) | 4.64 | 4.66 | 1.13 | 3.48 | 92.59 | 92.10 | 81.17 | 88.62 | 79.59 |
| GPT-3.5 (zero-shot) | 4.63 | 4.56 | 0.91 | 3.37 | 90.51 | 90.32 | 74.81 | 85.21 | 75.27 |
| MUSS | 4.40 | 4.11 | 0.86 | 3.12 | 87.99 | 80.57 | 73.12 | 80.56 | 70.74 |
| T5-3B | 4.70 | 4.33 | 0.55 | 3.19 | 93.96 | 85.22 | 58.44 | 79.21 | 64.98 |

Table 7: System-level scores under the three human evaluation methods: 5-point Likert (SIMPLIKERT2022), Direct Assessment on a 0-100 scale (SIMPDA2022), and RANK & RATE (SIMPEVAL2022).

## 6.2 Human Evaluation Comparison

We compare RANK & RATE with the existing human evaluation methods: 5-point Likert (Sulem et al., 2018) and Direct Assessment with a continuous scale of 0 to 100 (Alva-Manchego et al., 2021), which were both conducted on three dimensions: fluency, adequacy, and simplicity. For a fair comparison, we annotate the same set of simplifications using each method, resulting in SIMPLIKERT2022 and SIMPDA2022. Table 7 shows similar rankings of the systems by the three methods, with very slight differences. We also calculate the interval Krippendorff's α (Krippendorff, 2011) for inter-annotator agreement. RANK & RATE achieves an α of 0.32, which is higher than the 0.23 α of Likert and the 0.25 α of DA. All values are considered fair (Krippendorff, 2004).
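For reference, agreement values of this kind can be computed from a raters-by-items matrix of 0-100 ratings, for example with the open-source `krippendorff` Python package. The package choice, the data layout, and the rating values below are assumptions made purely for illustration.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows are annotators, columns are rated system outputs; np.nan marks
# items an annotator did not rate. The values here are made-up examples.
ratings = np.array([
    [85.0, 60.0, 72.0, np.nan],
    [80.0, 55.0, 70.0, 90.0],
    [88.0, 58.0, np.nan, 86.0],
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="interval")
print(f"Interval Krippendorff's alpha: {alpha:.2f}")
```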
## 7 Other Related Work Text-to-text Generation Metrics. There is currently no single automatic metric to evaluate all the text-to-text generation tasks that revise text. SARI (Xu et al., 2016) is the main metric for text simplification and has been used by other generation tasks such as sentence splitting and fusion (Rothe et al., 2020; Kim et al., 2021), decontextualization (Choi et al., 2021), and scientific rewriting (Du et al., 2022). Style transfer tasks (Hu et al., 2017; Rao and Tetreault, 2018; Prabhumoye et al., 2018; Krishna et al., 2020; Xu et al., 2012; Ma et al., 2020) use different automatic metrics to measure each aspect of text: (i) text similarity metrics such as BLEU, METEOR (Lavie and Agarwal, 2007), and BERTScore to measure content preservation, (ii) text classification models (Sennrich et al., 2016; Luo et al., 2019; Krishna et al., 2022b) or embedding-based edit distance metrics such as MoverScore (Zhao et al., 2019; Mir et al., 2019) to evaluate target style, and (iii) perplexity or pseudo log likelihood (Salazar et al., 2020) to measure fluency. Incorporating Evaluation Metrics into Training and Inference. Prior studies have improved MLE-trained generation systems with alternative training approaches that integrate evaluation metrics based on reinforcement learning (Ranzato et al., 2015; Li et al., 2016; Gong et al., 2019), minimum risk (Smith and Eisner, 2006; Shen et al., 2016), and ranking (Hopkins and May, 2011; Xu et al., 2016; Krishna et al., 2022a). Incorporating metrics into decoding has also been explored by reranking the generations using discriminative rankers (Shen et al., 2004; Lee et al., 2021), energy-based rankers (Bhattacharyya et al., 2021), and minimum risk (Kumar and Byrne, 2004; Freitag et al., 2022). We use minimum risk as it has been shown to help machine translation systems (Wieting et al., 2019; Fernandes et al., 2022; Amrhein and Sennrich, 2022). ## 8 Conclusion We introduce LENS, the first supervised automatic metric for text simplification. We show that LENS exhibits higher human correlation than other automatic metrics. We also introduce RANK & RATE framework, which allows annotators to evaluate multiple systems' outputs at once in a list-wise manner. Using it, we create SIMPEVALPAST to train LENS, and SIMPEVAL2022 as a new metric evaluation benchmark for text simplification. We hope our metric, data, and framework will facilitate future research in text simplification evaluation. ## Limitations In this paper, we show that LENS shows better human correlation than other metrics on Wikipedia and news domains. Future research can further experiment and extend LENS to other domains, such as medical and children's books, as the preference for different simplification operations can vary depending on the domain and user. Additionally, our work focuses on sentence-level simplification, and future work can extend LENS to evaluating paragraph- and document-level simplification. SIMPEVAL dataset and LENS are also limited to the English language. ## Ethics Statement We used in-house annotators to collect human ratings in SIMPEVAL datasets and write simplifications in SIMPEVAL2022. The annotators are university-level undergraduate and graduate students, including both native and non-native speakers of English. We did not collect any personal information from the annotators. We paid each annotator $15 per hour, which is above the US federal minimum wage. 
We ensured that the content shown to the annotators was not upsetting and let them know that they could skip the task if they felt uncomfortable at any point. We also let the annotators know the purpose of the collected data. The original complex sentences in the SIMPE-VAL datasets are from the publicly available Wikipedia. The simplifications are either from the existing work or human simplifications collected from our annotators. We used the authorreleased simplification outputs if they are available. For T5 (base, large, and 3B) systems, we trained our own simplification models using open-sourced code from the Hugging Face Transformers5library. ## Acknowledgments We thank Nghia T. Le, Tarek Naous, Yang Chen, and Chao Jiang as well as three anonymous reviewers for their helpful feedback on this work. We also thank Marcus Ma, Rachel Choi, Vishnesh J. Ramanathan, Elizabeth Liu, Alex Soong, Govind Ramesh, Ayush Panda, Anton Lavrouk, Vinayak Athavale, and Kelly Smith for their help with human evaluation. This research is supported in part by the NSF awards IIS-2144493 and IIS-2112633, ODNI and IARPA via the BETTER program (contract 2019-19051600004) and the HIATUS program (contract 2022-22072200004). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. ## References Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondˇrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, and Marcos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In Proceedings of the Sixth Conference on Machine Translation, pages 1–88, Online. Association for Computational Linguistics. Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia. 2020. ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations. In *Proceedings of the* Association for Computational Linguistics. Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2021. The (un)suitability of automatic evaluation metrics for text simplification. *Computational* Linguistics, 47(4):861–889. Chantal Amrhein and Rico Sennrich. 2022. Identifying weaknesses in machine translation metrics through minimum Bayes risk decoding: A case study for COMET. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. *ArXiv*, abs/1710.11041. Loïc Barrault, Magdalena Biesialska, Ondˇrej Bojar, Marta R. 
Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, 5https://huggingface.co/ Philipp Koehn, Chi-kiu Lo, Nikola Ljubešic, Christof ´ Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1–55, Online. Association for Computational Linguistics. Sumanta Bhattacharyya, Amirmohammad Rooshenas, Subhajit Naskar, Simeng Sun, Mohit Iyyer, and Andrew McCallum. 2021. Energy-based reranking: Improving neural machine translation using energybased models. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4528–4537, Online. Association for Computational Linguistics. Ondˇrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the WMT17 metrics shared task. In *Proceedings of the Second Conference on Machine Translation*, pages 489–513, Copenhagen, Denmark. Association for Computational Linguistics. Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, and Michael Collins. 2021. Decontextualization: Making sentences stand-alone. Transactions of the Association for Computational Linguistics (TACL). Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Daniel Deutsch, Rotem Dror, and Dan Roth. 2021. A statistical analysis of summarization evaluation metrics using resampling methods. Transactions of the Association for Computational Linguistics, 9:1132– 1146. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393–3402, Florence, Italy. Association for Computational Linguistics. Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, and Yejin Choi. 2022. Is GPT-3 text indistinguishable from human text? scarecrow: A framework for scrutinizing machine text. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7250–7274, Dublin, Ireland. Association for Computational Linguistics. Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, and Dongyeop Kang. 2022. Understanding iterative revision from human-written text. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 3573–3590, Dublin, Ireland. Association for Computational Linguistics. Patrick Fernandes, António Farinhas, Ricardo Rei, José De Souza, Perez Ogayo, Graham Neubig, and Andre Martins. 2022. Quality-aware decoding for neural machine translation. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1396–1412, Seattle, United States. Association for Computational Linguistics. Markus Freitag, David Grangier, Qijun Tan, and Bowen Liang. 2022. High quality rather than high model probability: Minimum Bayes risk decoding with neural metrics. *Transactions of the Association for Computational Linguistics*, 10:811–825. Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, and Wen-mei Hwu. 2019. Reinforcement learning based text style transfer without parallel training corpus. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3168– 3180, Minneapolis, Minnesota. Association for Computational Linguistics. Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of GPT-3. *arXiv preprint arXiv:2209.12356*. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33–41, Sofia, Bulgaria. Association for Computational Linguistics. Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1352–1362, Edinburgh, Scotland, UK. Association for Computational Linguistics. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning. Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for sentence alignment in text simplification. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 7943–7960, Online. Association for Computational Linguistics. David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1537–1546, Sofia, Bulgaria. Association for Computational Linguistics. Joongwon Kim, Mounica Maddela, Reno Kriz, Wei Xu, and Chris Callison-Burch. 2021. BiSECT: Learning to split and rephrase sentences with bitexts. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6193– 6209, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Kincaid. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. research branch report. Klaus Krippendorff. 2004. Reliability in content analysis: Some common misconceptions and recommendations. *Human communication research*, 30(3):411– 433. Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability. Kalpesh Krishna, Yapei Chang, John Wieting, and Mohit Iyyer. 2022a. Rankgen: Improving text generation with large ranking models. Kalpesh Krishna, Deepak Nathani, Xavier Garcia, Bidisha Samanta, and Partha Talukdar. 2022b. Fewshot controllable style transfer for low-resource multilingual settings. 
In *Proceedings of the 60th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7439–7468, Dublin, Ireland. Association for Computational Linguistics. Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as paraphrase generation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 737–762, Online. Association for Computational Linguistics. Dhruv Kumar, Lili Mou, Lukasz Golab, and Vechtomova Olga. 2020. Iterative edit-based unsupervised sentence simplification. In *Proceedings of the Association for Computational Linguistics*. Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In *Proceedings of the Human Language Technology Conference of the North American Chapter* of the Association for Computational Linguistics: HLT-NAACL 2004, pages 169–176, Boston, Massachusetts, USA. Association for Computational Linguistics. Wuwei Lan, Chao Jiang, and Wei Xu. 2021. Neural semi-markov CRF for monolingual word alignment. Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In *Proceedings of the Second Workshop on Statistical Machine* Translation, pages 228–231, Prague, Czech Republic. Association for Computational Linguistics. Ann Lee, Michael Auli, and Marc'Aurelio Ranzato. 2021. Discriminative reranking for neural machine translation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7250–7264, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 1192– 1202, Austin, Texas. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *ArXiv*, abs/1907.11692. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. *arXiv preprint* arXiv:1711.05101. Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. Towards fine-grained text sentiment transfer. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 2013– 2022, Florence, Italy. Association for Computational Linguistics. Qingsong Ma, Ondˇrej Bojar, and Yvette Graham. 2018. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671–688, Belgium, Brussels. Association for Computational Linguistics. Qingsong Ma, Johnny Wei, Ondˇrej Bojar, and Yvette Graham. 2019. 
Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In *Proceedings of the* Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90, Florence, Italy. Association for Computational Linguistics. Xinyao Ma, Maarten Sap, Hannah Rashkin, and Yejin Choi. 2020. PowerTransformer: Unsupervised controllable revision for biased language correction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7426–7441, Online. Association for Computational Linguistics. Mounica Maddela, Fernando Alva-Manchego, and Wei Xu. 2021. Controllable text simplification with explicit paraphrasing. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3536–3553, Online. Association for Computational Linguistics. Louis Martin, Éric de la Clergerie, Benoît Sagot, and Antoine Bordes. 2020. Controllable sentence simplification. In *Proceedings of the Twelfth Language* Resources and Evaluation Conference, pages 4689– 4698, Marseille, France. European Language Resources Association. Louis Martin, Angela Fan, Éric de la Clergerie, Antoine Bordes, and Benoît Sagot. 2022. MUSS: Multilingual unsupervised sentence simplification by mining paraphrases. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 1651–1664, Marseille, France. European Language Resources Association. Clara Meister, Ryan Cotterell, and Tim Vieira. 2020. If beam search is the answer, what was the question? In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 2173–2185, Online. Association for Computational Linguistics. Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 495–504, Minneapolis, Minnesota. Association for Computational Linguistics. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In *Proceedings of the 52nd Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 435–445, Baltimore, Maryland. Association for Computational Linguistics. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 72–78, New Orleans, Louisiana. Association for Computational Linguistics. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Richard Yuanzhe Pang and Kevin Gimpel. 2019. Unsupervised evaluation metrics and learning criteria for non-parallel textual transfer. In *Proceedings of* the 3rd Workshop on Neural Generation and Translation, pages 138–147, Hong Kong. Association for Computational Linguistics. Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866–876, Melbourne, Australia. Association for Computational Linguistics. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129–140, New Orleans, Louisiana. Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. *Transactions of the Association for Computational Linguistics*, 8:264–280. Jekaterina Novikova, Ondˇrej Dušek, and Verena Rieser. 2018. RankME: Reliable human ratings for natural language generation. In *Proceedings of the 2018* Horacio Saggion. 2017. Automatic text simplification. Synthesis Lectures on Human Language Technologies. Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In *Proceedings of the 2016* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35–40, San Diego, California. Association for Computational Linguistics. Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified textto-text transfer transformer. In Proceedings of the 14th International Conference on Natural Language Generation, pages 341–352, Aberdeen, Scotland, UK. Association for Computational Linguistics. Libin Shen, Anoop Sarkar, and Franz Josef Och. 2004. Discriminative reranking for machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 177–184, Boston, Massachusetts, USA. Association for Computational Linguistics. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683–1692, Berlin, Germany. Association for Computational Linguistics. David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. 
In *Proceedings of the COLING/ACL 2006 Main Conference* Poster Sessions, pages 787–794, Sydney, Australia. Association for Computational Linguistics. Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3356– 3362, Hong Kong, China. Association for Computational Linguistics. Elior Sulem. 2018. Semantic structural evaluation for text simplification. In Proceedings of the 2018 Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Simple and effective text simplification using semantic and neural methods. In *Proceedings of the 56th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 162–173, Melbourne, Australia. Association for Computational Linguistics. Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain, and Karthik Sankaranarayanan. 2019. Unsupervised neural text simplification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2058–2068, Florence, Italy. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems. John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019. Beyond BLEU:training neural machine translation with semantic similarity. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4344– 4355, Florence, Italy. Association for Computational Linguistics. Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015– 1024, Jeju Island, Korea. Association for Computational Linguistics. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. *Transactions of the Association for Computational Linguistics (TACL)*. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In *Proceedings of COLING 2012*, pages 2899–2914, Mumbai, India. The COLING 2012 Organizing Committee. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In *International Conference on Learning Representations*. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584– 594, Copenhagen, Denmark. Association for Computational Linguistics. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1353–1361, Beijing, China. Coling 2010 Organizing Committee. ## A Lens **Metric - Implementation Details** We leverage *Transformers* and *PyTorch Lightning* libraries to implement the metric. We used RoBERTa*Large* model (Liu et al., 2019) as the encoder. We fine-tuned the metric with the styleadaptive loss for 20 epochs and selected the checkpoint with the best validation loss. We used the ten human references from ASSET to calculate metric scores while training LENS using SIMPEVALPAST. We used a batch size of 8, Adam optimizer with a learning rate of 3e-05 for the encoder and 1e-05 for other layers. We use dropout of 0.15 and freeze the encoding layers for 1 epoch. We train the model on one Quadro RTX 6000 GPU with 25GB RAM, which takes around 3 hours. ## B Incorporating Evaluation Metrics Into Training And Inference - Implementation Details We implement the controllable sentence simplification system proposed by Sheang and Saggion (2021), a T5-base model. The input sentence is prepended with four control tokens, namely character length ratio, dependency tree depth ratio, character-level Levenshtein similarity, and inverse frequency ratio, to control various aspects of the generated simplification. During training, each control value is calculated using the corresponding training pair and discretized into bins of width 0.05. During inference, we set the control tokens to the average value of the training set. We used 0.9 for the length ratio and 0.75 for the rest. We also finetune the controllable T5-3B and T5-11B versions using the same approach. We use the Hugging Face Transformers library6 for implementing the base model and the MRT framework. We first train the model for 5 epochs using MLE loss and then fine-tune it for 5 epochs using MRT. We use Adam optimizer with a learning rate of 1e-4, linear learning rate warmup of 4k steps, weight decay of 0.1, epsilon of 1e-8, and batch size of 16. During MRT, we use a beam size of 8 to generate candidates. The rest of the parameters are left with default values from the library. During inference, we use a beam size of 10. Our models are trained on 2 A40 GPUs with 45GB RAM for 48 hours. We used the code released by Fernandes et al. (2022) 7for the MBR framework. We generated 6https://huggingface.co/ 7https://github.com/deep-spin/qaware-decode candidates using beam search of size 100 and selected the top 100 candidates for reranking. We used the above T5 model fine-tuned using MLE to generate the candidates. The rest of the parameters are left with default values from the library. ## C Systems In Simpeval **Datasets** Data-driven Neural Models. 
We use ten supervised models: (i) three fine-tuned T5 (Raffel et al., 2020) models of various sizes, namely T5-base, T5-large, and T5-3B, (ii) a controllable T5-base model that prepends tokens to the input to control the lexical and syntactic complexity of the output (Sheang and Saggion, 2021), (iii) two BART (Lewis et al., 2020) models that also use control tokens where one is trained on Wikipedia (Martin et al., 2020), and the other is fine-tuned on a combination of Wikipedia and web-mined paraphrases (Martin et al., 2022), (iv) a BERT-initialized Transformer (Maddela et al., 2021), (v) a BiLSTM editbased approach that first generates the edit operations, and then the simplification (Dong et al., 2019), (vi) a BiLSTM that directly generates simplifications using reinforcement learning (Zhang and Lapata, 2017), and (vii) a vanilla BiLSTM model. In addition, we also include one unsupervised model and one semi-supervised model by Surya et al. (2019) that uses an auto-encoder with adversarial and denoising losses. Few-shot Methods. We include simplifications generated by GPT-3.58 under zero-shot and 5-shot settings (prompts are provided in Appendix D.2). Data-driven Statistical Methods. We incorporate two systems that applied statistical machine translation (MT) approaches to text simplification: (i) a phrase-based MT system that reranks the outputs based on their dissimilarity with the input (Wubben et al., 2012), and (ii) a syntactic MT system that uses paraphrase rules for lexical simplification (Xu et al., 2016). Rule-based Methods. Kumar et al. (2020) iteratively generates candidates using rules and ranks them with a linguistic scoring function. Hybrid Methods. We utilize three hybrid systems that combine linguistic rules with data-driven methods: (i) Narayan and Gardent (2014) uses semantic structure to predict sentence splitting and 8Specifically, we use the text-davinci-003 model which is the most recent variant with 175B parameters. paraphrases with a phrase-based MT system, (ii) Sulem et al. (2018) performs splitting and deletion using linguistic rules and paraphrasing using a BiLSTM, and (iii) Maddela et al. (2021) generates candidates with different amounts of splitting and deletion and then paraphrases the best candidate with a BERT-initialized Transformer. Naive Baseline Methods. Existing metrics are biased towards conservative systems because their outputs are generally fluent and exhibit high lexical overlap with the input/reference (Pang and Gimpel, 2019; Krishna et al., 2020). We add two conservative systems that are challenging for automatic metrics: (i) a system that always copies the input and (ii) a content-preserving but nonsensical system that scrambles 5% of the input words. Humans. We also add human-written simplifications using different instructions from ASSET (Alva-Manchego et al., 2020) and TurkCorpus (Xu et al., 2016), two widely used evaluation benchmarks for sentence simplification, and an autoaligned one from the SimpleWiki (Kauchak, 2013). ## D Implementation Details For Simplification Systems D.1 T5 Setup We use the Hugging Face Transformers library9. We fine-tune T5-base, T5-large, and T5-3B on 4 A40 GPUs of a total batch size of 64 for 20 epochs (10.4K steps), 16 epochs, and 8 epochs, respectively. We use a learning rate of 3e-4. 
We save checkpoints every 5K steps and select the best one by performing a manual inspection on a set of 60 simplifications from the development set, resulting in 80K, 60K, and 25K steps for T5-base, T5large, and T5-3B respectively, which are in a similar range to FLAN-T5 (Chung et al., 2022). We use AdamW (Loshchilov and Hutter, 2017) as the optimizer. ## D.2 Gpt-3.5 Setup Hyperparameters. We use the text-davinci-003 GPT-3.5 model from OpenAI API10. To generate simplification, we use the following hyperparameters: temperature=1 and top-p=0.95. Prompts. We use the instruction from ASSET (Alva-Manchego et al., 2021) to prompt GPT-3.5. 9https://huggingface.co/ 10https://beta.openai.com/ Zero-shot setting: *Please rewrite the following* complex sentence in order to make it easier to understand by non-native speakers of English. You can do so by replacing complex words with simpler synonyms (i.e. paraphrasing), deleting unimportant information (i.e. compression), and/or splitting a long complex sentence into several simpler ones. The final simplified sentence needs to be grammatical, fluent, and retain the main ideas of its original counterpart without altering its meaning. ## Input: {Input} Output: 5-shot setting: Please rewrite the following complex sentence in order to make it easier to understand by non-native speakers of English. You can do so by replacing complex words with simpler synonyms (i.e. paraphrasing), deleting unimportant information (i.e. compression), and/or splitting a long complex sentence into several simpler ones. The final simplified sentence needs to be grammatical, fluent, and retain the main ideas of its original counterpart without altering its meaning. ## Examples: Input: {random sampled from ASSET} Output: {random sampled human reference of the Input from ASSET} Input: {random sampled from ASSET} Output: {random sampled human reference of the Input from ASSET} Input: {random sampled from ASSET} Output: {random sampled human reference of the Input from ASSET} Input: {random sampled from ASSET} Output: {random sampled human reference of the Input from ASSET} Input: {random sampled from ASSET} Output: {random sampled human reference of the Input from ASSET} Please rewrite the following complex sentence in order to make it easier to understand by non-native speakers of English. You can do so by replacing complex words with simpler synonyms (i.e. paraphrasing), deleting unimportant information (i.e. compression), and/or splitting a long complex sentence into several simpler ones. The final simplified sentence needs to be grammatical, fluent, and retain the main ideas of its original counterpart without altering its meaning. 
Input: {input} Output:

## E LENS Metric - Additional Results

| | SIMPEVAL2022 | | | WIKI-DA | | | NEWSELA-LIKERT | | |
|---|---|---|---|---|---|---|---|---|---|
| | τpara | τspl | τall | Fluency | Meaning | Simplicity | Fluency | Meaning | Simplicity |
| FKGL | -0.556 | -0.31 | -0.356 | 0.054 | 0.145 | 0.001 | 0.193 | 0.306 | -0.051 |
| BLEU | 0.048 | -0.054 | -0.033 | 0.460 | 0.622 | 0.438 | 0.332 | 0.261 | 0.118 |
| SARI | 0.206 | 0.140 | 0.149 | 0.335 | 0.534 | 0.366 | 0.234 | 0.124 | 0.094 |
| BERTScore | 0.238 | 0.093 | 0.112 | 0.636 | 0.682 | 0.614 | 0.384 | 0.274 | 0.215 |
| *Original LENS* | | | | | | | | | |
| LENSall | 0.333 | 0.233 | 0.241 | 0.823 | 0.665 | 0.715 | 0.654 | 0.471 | 0.336 |
| LENSk=1 | 0.460 | 0.295 | 0.307 | 0.813 | 0.658 | 0.692 | 0.649 | 0.449 | 0.321 |
| LENSk=3 | 0.429 | 0.333 | 0.331 | 0.843 | 0.684 | 0.735 | 0.646 | 0.437 | 0.353 |
| *Rescaled LENS* | | | | | | | | | |
| LENSall | 0.333 | 0.233 | 0.241 | 0.816 | 0.662 | 0.733 | 0.655 | 0.477 | 0.343 |
| LENSk=1 | 0.460 | 0.295 | 0.307 | 0.796 | 0.647 | 0.721 | 0.633 | 0.444 | 0.328 |
| LENSk=3 | 0.429 | 0.333 | 0.331 | 0.807 | 0.668 | 0.749 | 0.624 | 0.428 | 0.359 |

Table 8: Metric evaluation results on SIMPEVAL2022, WIKI-DA (Alva-Manchego et al., 2021), and NEWSELA-LIKERT (Maddela et al., 2021) human ratings datasets. We include the results for both original and rescaled versions of LENS. τpara, τspl, and τall represent the Kendall Tau-like correlation for paraphrase-focused, split-focused, and all simplifications respectively. We report the Pearson correlation coefficients along three dimensions for WIKI-DA and NEWSELA-LIKERT. The best values are marked in **bold** and the second best values are underlined. As Kendall Tau measures pairwise rankings, we see the same results for the original and rescaled versions of LENS.

## F Annotation Interface

## F.1 Step 1: System Output Categorization

Figure 6: Annotation interface for categorizing system outputs. The outputs can be moved up and down or to other categories.

## F.2 Step 2: Highlighting System Edits
## F.3 Step 3: Rating and Ranking System Outputs

Figure 9: Sentence ranking and rating interface provided to annotators. The annotator can enter the ratings for each sentence and is able to re-order sentences by clicking and dragging their mouse.

## F.4 Annotation Instructions

The Interface. Before explaining the task, here's a guide to understanding the look and feel of the interface:

Sentence Categorization. Our sentences are presented in three forms:
- Deletion-focused - the AI attempted to simplify the sentence mainly by deleting words
- Paraphrase-focused - a mix between deleting words and re-wording sets of words
- Splitting-focused - converts the sentence to multiple, simplified sentences

We already placed sentences in each of these categories; these are to help organize the types of simplifications.

Sentence Changes. We've created some highlights to help you understand how our AI simplified the sentences:
- "∧" - a deletion
- bolded text - a paraphrase, or re-wording of a part of the sentence
- " || " - a split was made from one sentence into two separate sentences

Figure 10: Instructions for the overall task.

Rating Sentences

## ACL 2023 Responsible NLP Checklist

## A For Every Submission:

✓ A1. Did you describe the limitations of your work? Limitations section

✓ A2. Did you discuss any potential risks of your work? Limitations section, Ethics Statement section

✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1

✗ A4. Have you used AI writing assistants when working on this paper? Left blank.

## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3

✓ B1. Did you cite the creators of artifacts you used? Section 3, Appendix sections A - D.

✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 3, Ethics Statement section

✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3, Ethics Statement section

✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Ethics Statement section

✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Section 3, Ethics Statement section

✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 3

## C ✓ **Did You Run Computational Experiments?** Section 4

✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix Sections A - D

✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 3, Appendix Sections A - D

✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4

✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix Sections A - D

## D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 3

✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix section F

✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 3, Ethics Statement section

✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Ethics Statement section

✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No. This study does not meet the definition of "human subjects" research by our institutional IRB review board, as we did not: "(1) Interact with, intervene with, or obtain/access private, identifiable information or data about, a living individual (includes online surveys) or (2) Conduct research on a drug, biologic, or medical device". The annotations are linguistic judgements provided by hired in-house employees.

✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Ethics Statement section
hu-etal-2023-meetingbank
{M}eeting{B}ank: A Benchmark Dataset for Meeting Summarization
https://aclanthology.org/2023.acl-long.906
As the number of recorded meetings increases, it becomes increasingly important to utilize summarization technology to create useful summaries of these recordings. However, there is a crucial lack of annotated meeting corpora for developing this technology, as it can be hard to collect meetings, especially when the topics discussed are confidential. Furthermore, meeting summaries written by experienced writers are scarce, making it hard for abstractive summarizers to produce sensible output without a reliable reference. This lack of annotated corpora has hindered the development of meeting summarization technology. In this paper, we present MeetingBank, a new benchmark dataset of city council meetings over the past decade. MeetingBank is unique among other meeting corpora due to its divide-and-conquer approach, which involves dividing professionally written meeting minutes into shorter passages and aligning them with specific segments of the meeting. This breaks down the process of summarizing a lengthy meeting into smaller, more manageable tasks. The dataset provides a new testbed of various meeting summarization systems and also allows the public to gain insight into how council decisions are made. We make the collection, including meeting video links, transcripts, reference summaries, agenda, and other metadata, publicly available to facilitate the development of better meeting summarization techniques.
# Meetingbank**: A Benchmark Dataset For Meeting Summarization** Yebowen Hu,† Tim Ganter,‡ Hanieh Deilamsalehy,‡ **Franck Dernoncourt,**‡ Hassan Foroosh,† **Fei Liu**§ †University of Central Florida ‡Adobe Research §Emory University huye@knights.ucf.edu hassan.foroosh@ucf.edu {ganter,deilamsa,franck.dernoncourt}@adobe.com fei.liu@emory.edu ## Abstract As the number of recorded meetings increases, it becomes increasingly important to utilize summarization technology to create useful summaries of these recordings. However, there is a crucial lack of annotated meeting corpora for developing this technology, as it can be hard to collect meetings, especially when the topics discussed are confidential. Furthermore, meeting summaries written by experienced writers are scarce, making it hard for abstractive summarizers to produce sensible output without a reliable reference. This lack of annotated corpora has hindered the development of meeting summarization technology. In this paper, we present MeetingBank, a new benchmark dataset of city council meetings over the past decade. *MeetingBank* is unique among other meeting corpora due to its divide-and-conquer approach, which involves dividing professionally written meeting minutes into shorter passages and aligning them with specific segments of the meeting. This breaks down the process of summarizing a lengthy meeting into smaller, more manageable tasks. The dataset provides a new testbed of various meeting summarization systems and also allows the public to gain insight into how council decisions are made. We make the collection, including meeting video links, transcripts, reference summaries, agenda, and other metadata, publicly available to facilitate the development of better meeting summarization techniques.1 ## 1 Introduction An astonishing 55 million meetings happen in the U.S. each week (Flynn, 2022). With the extensive use of video conferencing software, e.g., Microsoft Teams, Google Meet and Zoom, it has become easier than ever before to record meetings. While these recordings provide a wealth of human intelligence and actionable knowledge, the temporal nature of sound makes it difficult for users to navigate and 1Our dataset can be accessed at: **meetingbank.github.io** search for specific content (Bengio and Bourlard, 2004). A summarization system that produces text summaries from transcripts can help, by providing users with great flexibility in navigating recordings, including but not limited to: meetings, interviews, podcasts, lectures, movies and TV series (Papalampidi et al., 2020; Zhu et al., 2021; Song et al., 2022; Chen et al., 2022; Cho et al., 2022). Effective meeting summarization requires annotated datasets. Most summarizers, including fewshot and prompt-based (Goyal et al., 2022), will benefit directly from benchmark datasets containing hundreds of thousands of document-summary pairs such as XSum (Narayan et al., 2018), MultiNews (Fabbri et al., 2019), GovReport (Huang et al., 2021), PLoS (Goldsack et al., 2022). However, datasets for meeting summarization are relatively scarce, small, or unrepresentative. ICSI and AMI are two benchmark datasets (Janin et al., 2003; Carletta et al., 2006) that consist of only 75 and 140 meetings, respectively. Other existing datasets for meetings are developed for speech recognition, or are in languages other than English and do not have reference summaries (Tardy et al., 2020; Kratochvil et al., 2020). Creating an annotated meeting dataset poses several challenges. 
First, meetings often contain confidential or proprietary information, making it difficult to share them publicly. Moreover, accurately annotating meeting summaries is a labor-intensive process, even for experienced writers familiar with the meeting topics (Renals et al., 2007). Effective meeting summaries should capture key issues discussed, decisions reached, and actions to be taken, while excluding irrelevant discussions (Zechner, 2002; Murray et al., 2010). There thus is a growing need for innovative approaches to construct a meeting dataset with minimal human effort to support advanced meeting solutions. An increasing number of *city governments* are releasing their meetings publicly to encourage trans16409 ![1_image_0.png](1_image_0.png) parency and engage residents in their decision making process. In this paper, we present a systematic approach to develop *a city council meeting dataset*. A city council is the legislative branch of local government. The council members are responsible for making decisions on a range of issues that affect the city and its citizens. These decisions may include creating and approving annual budgets, setting tax rates, confirming appointments of officers, enacting and enforcing ordinances, and setting policies on issues such as land use, public safety, and community development. Figure 1 provides an example of a regular meeting of the City Council of Boston held on May 4, 2022. We present *MeetingBank*, a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agenda, and other metadata. It is an order of magnitude larger than existing meeting datasets (Carletta et al., 2006). On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for extracting structure from meeting videos. To handle the max sequence length constraint imposed by abstractive summarizers, we introduce a *divide-and-conquer strategy* to divide lengthy meetings into segments, align these segments with their respective summaries from minutes documents, and keep the segments simple for easy assembly of a meeting summary. This yields 6,892 segment-level summarization instances for training and evaluating of performance. Our repository can be further enhanced through community efforts to add annotations such as keyphrases and queries (Zhong et al., 2021). To summarize, this paper presents the following contributions: - We have curated a repository of city council meetings, *MeetingBank*, to advance summarization in an understudied domain. We detail our process of examining 50 major U.S. cities, accessing their city councils' websites for meeting videos and minutes, and obtaining permission to use their data for research purposes. As more cities participate in open government initiatives and release their council meetings, MeetingBank has the potential to continue growing. - We test various summarizers including extractive, abstractive with fine-tuning, and GPT-3 with prompting on this task. They are provided with the transcript of a meeting segment and is tasked with generating a concise summary. Experiments with automatic metrics and expert annotators suggest that meeting summarizers should prioritize capturing the main points of meeting discussions and maintaining accuracy to the original. 
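To make the segment-level instances just described concrete, the sketch below shows one plausible way to represent a single training instance in Python. The field names and the bill number are illustrative assumptions rather than the dataset's actual schema; the attributes that are actually collected (ID type, segment start/end times, transcript, reference summary) are listed in Section 3, and the sample timestamps below are taken from there.

```python
# Illustrative only: a hypothetical layout for one segment-level summarization instance.
# Field names and the bill number are invented for this sketch; the authoritative
# attribute list is given in Section 3 and at meetingbank.github.io.
example_instance = {
    "city": "Seattle",
    "id_type": "Ordinance",          # type of the ID number
    "bill_id": "CB 120000",          # hypothetical council bill whose discussion defines the segment
    "segment_start": "00:06:24",     # start time of the discussion in the meeting video
    "segment_end": "00:18:19",       # end time of the discussion
    "transcript": "Agenda item three, Council Bill 120000, an ordinance relating to ...",
    "summary": "An ordinance relating to the annual budget, adopted as amended.",
}

# A summarizer is trained to map `transcript` (about 2.9k tokens on average)
# to `summary` (about 87 tokens on average).
```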
## 2 Existing Datasets In this section, we review existing meeting datasets, discuss the techniques used to create reference summaries for them and identify research challenges that require attention in this area. ICSI and AMI are widely used datasets for meetings. ICSI (Janin et al., 2003) is a benchmark of 75 meetings that occurred naturally among speech researchers in a group seminar setting. Each meeting lasts approximately an hour. AMI (Carletta et al., 2006) contains 100 hours of recorded meetings, including 140 scenario-based meetings where groups of four participants assume roles within a fictitious company to design a product. Meetings typically last around 30-40 minutes, with roles including a project manager, user interface designer, production designer, and marketing expert. A wide range of annotations are performed by annotators, including speech transcriptions, dialogue acts, topics, keywords, extractive and abstractive summaries. Although small in size, these datasets offer a valuable testbed for evaluating meeting summarization systems (Wang and Cardie, 2013; Oya et al., 2014; Shang et al., 2018; Li et al., 2019; Koay et al., 2020, 2021; Zhang et al., 2022). Our study complements recent datasets for meetings such as **ELITR** and **QMSum**. Nedoluzhko et al. (2022) developed ELITR, a dataset of 120 English and 59 Czech technical project meetings spanning 180 hours of content. Their minutes are created by participants and specially-trained annotators. Zhong et al. (2021) developed the QMSum system which extracts relevant utterances from transcripts and then uses the utterances as input for an abstractor to generate query-focused summaries. Human annotators are recruited to collect queries and compose summaries. They annotate a limited number of 25 committee meetings from the Welsh Parliament, 11 from the Parliament of Canada, as well as AMI and ICSI meetings for query-focused summarization. Summarization datasets have been developed for genres similar to meetings, such as podcasts, interviews, livestreams and TV series. **Spotify** (Clifton et al., 2020) released a dataset of 100,000 podcasts to support podcast search and summarization. The dataset includes automatic transcripts with word- ![2_image_0.png](2_image_0.png) level time alignments and creator-provided podcast descriptions are used as reference summaries. **MediaSum** (Zhu et al., 2021) is a dataset of media interviews containing 463.6k transcripts from NPR and CNN, with overview and topic descriptions used as reference summaries. **StreamHover** (Cho et al., 2021) used crowd workers to annotate 370 livestream videos for both extractive and abstractive summaries. **SummScreen** (Chen et al., 2022) consists of TV series transcripts and human-written recaps extracted from community websites. Unlike meetings, these datasets have a smaller number of participants, usually one or two, and do not involve decision making. Meetings can also occur through online chats, as opposed to face-to-face. **SAMSum** (Gliwa et al., 2019) is a dataset of 16k chat dialogs with manually annotated abstractive summaries. The conversations are short, ranging from 3 to 30 utterances. **ForumSum** (Khalman et al., 2021) is another dataset that includes online posts collected from inter- | MEETING-LEVEL | INST. 
| SOURCE | SUMMARY | | | | | | | | |-----------------------------------------------------------------------------------------------|---------|----------|-----------|-----------|---------|-------|-------|-------|------------|-----| | CITY | #Mtgs | #Hrs | #Tks | #Speakers | Period | #Segs | #Snts | #Tks | #Snts #Tks | | | Denver | 401 | 979 | 25,460 | [3,20] | 2014–22 | 1,506 | 204 | 5,100 | 3.32 | 111 | | Seattle | 327 | 446 | 15,045 | [3,14] | 2015–22 | 1,497 | 54 | 1,499 | 1.06 | 78 | | Long Beach | 310 | 1103 | 39,618 | [4,19] | 2014–22 | 2,695 | 146 | 3,826 | 1.90 | 86 | | Alameda | 164 | 730 | 47,981 | [2,15] | 2015–22 | 672 | 251 | 6,452 | 2.04 | 67 | | King County | 132 | 247 | 20,552 | [2,10] | 2016–22 | 223 | 196 | 5,358 | 1.00 | 78 | | Boston | 32 | 72 | 23,291 | [4,11] | 2021–22 | 299 | 63 | 1,422 | 1.98 | 77 | | TOTAL COUNT: 1,366 meetings, 3,579 hours transcribed, 6,892 summarization instances collected | | | | | | | | | | | net forums, with associated human-written summaries. **DialogSum** (Chen et al., 2021) contains 13k dialogs gathered from multiple websites. Medical consultations conducted through online chats have also been used to create consultation summaries (Zeng et al., 2020; Laleye et al., 2020; Moramarco et al., 2021; Gao and Wan, 2022). Our paper focuses on developing a dataset of naturallyoccurring meeting conversations to aid in the development of summarization systems from transcripts. ## 3 Creation Of **Meetingbank** There is a growing need to make public meetings more accessible and inclusive for citizens to engage with their local officials and have their voices heard. A 2020 report from the American Academy of Arts and Sciences2reveals that only 11% of Americans attend public meetings to discuss local issues. Organizations such as the CouncilDataProject.org and CodeforAmerica.org are working to improve the search infrastructure of public meetings. Creation of a city council meeting dataset could provide a valuable testbed for meeting summarizers, and an effective summarizer could make public meetings more accessible by allowing citizens to navigate meetings more efficiently, thus promoting citizen engagement. We begin by compiling a list of the top 50 cities in the U.S. by population3and narrow it down to include only cities that regularly release meetings 2**amacad.org/ourcommonpurpose/recommendations** 3**www.infoplease.com/us/cities/** top-50-cities-us-population-and-rank with accompanying minutes documents, and have downloadable videos on their city council websites. An example of a public meeting and its minutes can be seen in Figure 1. We consult with our legal team and reach out to city councils when necessary to ensure compliance with licensing and data policies.4 Our dataset for this release includes **1,366** meetings from six cities or municipalities spanning over a decade, including Seattle, Washington; King County, Washington; Denver, Colorado; Boston, Massachusetts; Alameda, California; and Long Beach, California. Using minutes documents as-is for the development of summarization systems can be challenging. This is because minutes are often provided in PDF format and do not always align with the flow of meeting discussions. For instance, minutes may include a section on the Mayor's update that provides detailed information on appointments of officers, but this is only briefly mentioned in the meeting. 
In general, *minutes* are more formal and comprehensive records of meetings, including information such as the date, location, attendees, summary of main points discussed, decisions made, and action items assigned. Minutes are distributed to the stakeholders after the meeting. In contrast, *meeting summaries* tend to be shorter and less formal, focusing on the key points discussed in a meeting.

We propose a divide-and-conquer strategy for creating reference meeting summaries. It involves dividing lengthy meetings into segments, aligning them with their corresponding summaries from minutes documents, and keeping the segments simple for easy assembly of a meeting summary.

4 For example, we have excluded the City of San Francisco from our dataset as the city has advised us that meeting videos may be reposted or edited with attribution, but minutes and agenda are official public documents that are not permitted to be reposted or edited.

To start, we extract a list of Council Bill (CB) numbers discussed at the meeting by parsing the minutes and city council websites.5 For each bill, we then identify a short description that summarizes its content, which serves as the reference summary. Next, we use the bill number to obtain the corresponding meeting segment, including its start and end time, by referencing the index of the meeting on the city council website (see Figure 1). The transcript of that segment serves as the source text for the summarizer. After filtering out noisy and too short segments,6 we have a total of **6,892 segment-level** instances in our dataset (Table 1).

We use Speechmatics.com's speech-to-text API to automatically transcribe 3,579 hours of meetings, an order of magnitude larger than existing datasets. Our transcripts include word-level time alignment, casing, punctuation, and speaker diarization. City council meetings range from 2 to 19 speakers, with an average duration of 2.6 hours and 28,358 tokens per transcript. On average, across all cities, a meeting segment has 2,892 tokens in the transcript and 87 tokens in the summary. The resulting compression rate is 97%.

For every council meeting, we collect the following information, represented using their attribute name and sample value:7

7. Type of the ID number ("*Ordinance*")
8. Start and end times in the video where the topic was discussed ("*00:06:24*" to "*00:18:19*")
9. Full transcript of the video, along with start and end points of each segment of the meeting (Figure 2)
10. Reference summary for each meeting segment

## 4 Data Analysis

We measure the level of abstraction in meeting summaries by quantifying the amount of reused text. Higher abstraction poses more challenges to the meeting summarization systems. We employ two common measures, *Coverage* and *Density* (Grusky et al., 2018), to evaluate segment-level reference summaries. Results are illustrated in Figure 3, with coverage on x-axis and density on y-axis.

The *Coverage* score measures the percentage of summary words that appear in the source transcript. E.g., a summary of 10 words that includes 8 words from the source transcript and 2 new words has a coverage score of 0.8 (= 8/10). As shown in the figure, the *Coverage* score for city council meeting summaries is in the range of 0.7-0.9 for most cities. This suggests that, unlike news, these meeting summaries tend to include discussion points verbatim rather than performing abstraction. Given a compression rate of over 90%, an effective summarizer should focus on accurately identifying content to be included in the summary.

A *Density* score evaluates how much a summary can be characterized as a set of extractive fragments. For example, a summary of 10 words made up of two extractive fragments of length 1 and 6 and three new words would have a density score of 3.7 (= (1² + 6²)/10). A summary with long consecutive text fragments taken from the source transcript would yield a high density score. We observe that Seattle and Boston have the highest density scores among all cities studied, while Denver has the lowest score, indicating a high degree of editing is performed to produce the minutes for Denver. We note that certain resolutions and ordinances are read out plainly at the council meetings and included in the minutes, making the summaries often have higher density scores than those of news documents.

The *Coverage* and *Density* measures can be influenced by a range of factors such as the length and complexity of meetings and the preferences of the minute-takers. The diversity of meeting summaries highlights the complexity of this task.

| ROUGE | BLEU + MET. | EMBEDDINGS | QA | SUMM | | | | | | |
|-------------|---------------|--------------|-------|--------|-------|--------|--------|---------|--------|--------|
| MODEL | R-1 | R-2 | R-L | R-We | BLEU | METEOR | BERTS. | MoverS. | QAEval | LEN. |
| Extr Oracle | 61.82 | 46.60 | 52.61 | 55.60 | 22.99 | 52.35 | 69.54 | 63.15 | 21.69 | 64.89 |
| Lead-3 | 28.15 | 19.53 | 25.75 | 23.77 | 7.90 | 23.53 | 50.20 | 54.56 | 9.62 | 40.79 |
| LexRank | 24.61 | 10.68 | 19.06 | 15.98 | 5.86 | 17.70 | 48.55 | 53.23 | 6.53 | 53.70 |
| TexRank | 30.25 | 15.97 | 24.37 | 21.91 | 9.16 | 22.10 | 52.32 | 54.65 | 8.33 | 61.81 |
| BART w/o FT | 31.02 | 16.76 | 23.93 | 23.11 | 8.07 | 16.63 | 53.04 | 53.91 | 13.63 | 140.65 |
| HMNet | 50.55 | 34.22 | 45.07 | 45.05 | 13.93 | 46.80 | 66.38 | 60.34 | 12.96 | 50.44 |
| Longformer | 59.89 | 48.23 | 55.66 | 56.15 | 40.04 | 50.92 | 75.31 | 65.27 | 23.54 | 82.86 |
| BART | 62.81 | 51.66 | 58.84 | 59.32 | 41.46 | 53.24 | 77.17 | 66.74 | 26.87 | 89.46 |
| Pegasus | 68.54 | 59.28 | 66.09 | 65.75 | 33.29 | 70.24 | 80.70 | 70.44 | 27.13 | 49.90 |
| DialogLM | 70.30 | 60.12 | 67.54 | 67.55 | 45.42 | 66.44 | 81.61 | 71.56 | 25.75 | 66.36 |
| GPT3-D3 | 36.37 | 16.95 | 26.82 | 26.14 | 8.80 | 25.41 | 56.53 | 55.61 | 10.88 | 60.41 |

Table 2: Evaluation of state-of-the-art summarization systems on the test split of our city council dataset. The final column shows the average length of system summaries, which are generated by each individual summarizer using their default settings.

## 5 Performance Of Existing Systems

We evaluate state-of-the-art summarization systems on city council meetings, focusing on segments of the meetings rather than entire transcripts due to the length constraint imposed by abstractive summarizers. We split our dataset into train, validation and test sets, containing 5,169, 861, and 862 instances, respectively. Each summarizer is given the transcript of a meeting segment and tasked with generating a concise summary. The results are reported for the test set of our meeting dataset.

Extractive. Our extractive methods include the Oracle, LEAD, **LexRank** and **TextRank** (Erkan and Radev, 2004; Mihalcea and Tarau, 2004).
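Before turning to the individual baselines, the *Coverage* and *Density* measures used in the analysis above can be made concrete with a short computation. The sketch below is a simplified illustration that assumes plain whitespace tokenization and greedy longest-match fragments; it is not the original Newsroom implementation of Grusky et al. (2018).

```python
def extractive_fragments(summary_tokens, transcript_tokens):
    """Greedily find the longest token fragments of the summary that also occur in the transcript."""
    fragments, i = [], 0
    while i < len(summary_tokens):
        best = 0
        for j in range(len(transcript_tokens)):
            k = 0
            while (i + k < len(summary_tokens) and j + k < len(transcript_tokens)
                   and summary_tokens[i + k] == transcript_tokens[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            fragments.append(summary_tokens[i:i + best])
            i += best
        else:
            i += 1  # summary token not found in the transcript
    return fragments


def coverage_and_density(summary, transcript):
    s, t = summary.lower().split(), transcript.lower().split()
    frags = extractive_fragments(s, t)
    coverage = sum(len(f) for f in frags) / len(s)        # fraction of summary words reused from the source
    density = sum(len(f) ** 2 for f in frags) / len(s)    # rewards long contiguous reused fragments
    return coverage, density
```

On the worked example above, a 10-word summary with shared fragments of lengths 1 and 6 gives a density of (1² + 6²)/10 = 3.7, matching the value in the text.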
The Extractive Oracle8 selects the highest-scoring sentences from the input transcript, adding one sentence at a time until the combined R1 and R2 score can no longer be improved. The **LEAD-N** baseline selects the first N sentences of the input. **LexRank** and **TextRank**, both graph-based methods, determine the importance of sentences by analyzing their centrality in the graph structure. Both methods are set to extract two sentences from a transcript segment, which is the average number of sentences in the reference summaries.

8 **github.com/pltrdy/extoracle_summarization**

Abstractive with fine-tuning. We investigate five best-performing neural abstractive summarizers. These include **BART**-large (Lewis et al., 2020), a denoising autoencoder that is trained to reconstruct original text from corrupted input, **Pegasus** (Zhang et al., 2020a), a model that is trained to regenerate missing key sentences, **Longformer** (Beltagy et al., 2020), a model designed to handle long sequences through windowed attention, **DialogLM** (Zhong et al., 2022), a summarizer developed for summarizing long dialogues and pretrained using window-based denoising, and **HMNet** (Zhu et al., 2020), a hierarchical model that uses role vectors to distinguish between speakers. We evaluate **BART**-large with and without fine-tuning on our dataset, and compare the results to other models.

| TEST SET (ALL) | TEST SET (BY CITY) | | | | | | | | | |
|------------------|----------------------|-------|-------|-------|--------|--------|---------|---------|--------|-------|
| TRAIN | #Inst. | R-1 | R-2 | R-L | L.B. | Denver | Seattle | Alameda | Boston | K.C. |
| w/o L.B. | 3,157 | 60.36 | 48.57 | 56.48 | 21.30↓ | 1.95↑ | 4.39↑ | 1.72↓ | 3.92↓ | 3.71↓ |
| w/o Denver | 3,951 | 58.74 | 46.90 | 54.31 | 2.84↓ | 32.43↓ | 1.96↓ | 0.22↓ | 0.22↓ | 1.64↑ |
| w/o Seattle | 4,115 | 53.42 | 40.05 | 48.28 | 3.62↓ | 3.34↑ | 31.12↓ | 2.19↓ | 1.06↓ | 2.79↓ |
| w/o Alameda | 4,698 | 61.54 | 50.31 | 57.48 | 2.41↓ | 4.07↑ | 3.00↑ | 19.32↓ | 2.54↓ | 1.68↓ |
| w/o Boston | 4,936 | 62.70 | 51.62 | 58.82 | 1.21↑ | 0.75↑ | 7.52↓ | 1.02↑ | 42.82↓ | 4.96↑ |
| w/o K.C. | 4,988 | 63.60 | 52.71 | 59.87 | 3.60↓ | 3.32↑ | 4.37↑ | 6.14↓ | 3.76↓ | 11.6↓ |

GPT-3 with prompting. Large language models like GPT-3, T5 and PaLM (Brown et al., 2020; Raffel et al., 2020; Chowdhery et al., 2022) have developed advanced capabilities due to increased computation, parameters, and training data size (Wei et al., 2022). When prompted, GPT-3 can generate a summary of a source text by identifying the most important information. The **text-davinci-003** version of GPT-3 is used in this study, with a prompt asking the model to summarize the text in two sentences (Goyal et al., 2022).

Evaluation Metrics. We use a variety of automatic evaluation metrics to assess the quality of transcript summaries. These metrics are broadly grouped into three categories: (a) traditional metrics comparing system and reference summaries based on lexical overlap, including ROUGE (Lin, 2004), ROUGE-we (w/ embeddings), BLEU (Post, 2018) and METEOR (Banerjee and Lavie, 2005); (b) new metrics making use of contextualized embeddings to measure semantic similarity, e.g., BERTScore (Zhang et al., 2020b) and MoverScore (Zhao et al., 2019); (c) question answering-based metrics, where the hypothesis is that high-quality summaries should contain informative content and act as a surrogate for the original document in satisfying users' information needs.
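As a small illustration of category (a), the sketch below scores one system summary against its reference with the `rouge-score` package; the toy summary pair and the choice of package are assumptions made for illustration, and the numbers reported in Table 2 come from the toolkits cited immediately after this sketch.

```python
# pip install rouge-score   (an illustrative choice of package, not necessarily the one used here)
from rouge_score import rouge_scorer

reference = "Recommendation to adopt an ordinance amending the municipal code to set the 2022 tax levy."
system = "The council adopted an ordinance amending the municipal code and setting the 2022 tax levy."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, system)  # target first, prediction second

for name, score in scores.items():
    print(f"{name}: precision={score.precision:.3f} recall={score.recall:.3f} f1={score.fmeasure:.3f}")
```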
We leverage summarization evaluation toolkits provided by Fabbri et al. (2021), SacreBLEU (Post, 2018), QAEval (Deutsch et al., 2021) and SummerTime (Ni et al., 2021) to report results using these metrics.

Table 2 shows our experimental results. We observe that the Extractive Oracle yields a high R-2 F-score of 46.60%, indicating that the content of reference summaries mostly comes from the source transcripts, and extractive summarization methods could be promising. However, it would be desirable to develop more sophisticated methods than LexRank and TextRank: despite their outstanding performance on news articles, they do not perform well on this task. We find that DialogLM performs the best among abstractive summarizers. This is not surprising as it is designed for summarizing long dialogues. Pegasus also demonstrates strong performance; its results are on par with those of DialogLM. Fine-tuning BART on in-domain data yields a substantial improvement in its performance. Finally, GPT-3 with prompting does not perform well according to automatic metrics, but we have interesting findings during human assessment (§7).9

## 6 City-By-City Analysis

We investigate the characteristics that make effective training instances for meeting summarization by conducting a series of ablations. We begin by removing all training instances from a single city and using the remaining instances to fine-tune the BART summarizer. The results are shown in Table 3 (left panel), where we present the R-1, R-2, and R-L F-scores. We find that although the City of Seattle only contributes a moderate number of training instances, removing them has led to a substantial decrease in summarization performance, resulting in an R-2 F-score of 40.05%. It suggests that these training instances are quite effective and the City Council of Seattle might have implemented a better practice of producing high-quality meeting minutes compared to other cities.

| Informativeness: | How well does the summary capture the main points of the meeting segment? A good summary should contain all and only the important information of the source. |
|--------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Factuality: | Are the facts provided by the summary consistent with facts in the meeting segment? A good summary should reproduce all facts accurately and not make up untrue information. |
| Fluency: | Consider the individual sentences of the summary, are they well-written and grammatical? |
| Coherence: | Consider the summary as a whole, does the content fit together and sound natural? A good summary should not just be a collection of related information, but should build from sentence to sentence to a coherent body of information about a topic. |
| Redundancy: | Does the summary contain redundant content? A good summary should not have unnecessary word or phrase repetitions in a sentence or semantically similar sentences. |

We evaluate the performance of the BART summarizer on a city-by-city basis. We show the variance in R-2 F-scores for each test city when training instances from the same city are included versus when they are excluded, as seen in the right panel of Table 3.
For instance, we observe a performance drop (↓) of 32.43% for the City of Denver when all training instances from the same city are removed from fine-tuning.10 We observe that Seattle, Boston, and Denver benefit more from fine-tuning using same-city training data. Particularly, Seattle and Boston have shorter source transcripts and their reference summaries tend to reuse texts from the source. It suggests that different cities may have varying levels of discussions in council meetings and different styles of meeting minutes, and that training instances from the same city are crucial for achieving the best performance.

10 We fine-tune the BART summarizers for the same number of steps in both cases to mitigate the impact of varying number of training instances.

## 7 Human Evaluation

We evaluate the performance of seven state-of-the-art summarization systems, including fine-tuned abstractive models HMNet, BART, Pegasus, **DialogLM**, GPT-3 with prompting, and traditional extractive models **LexRank** and **LEAD** to best assess the effectiveness of system-generated meeting summaries. All abstractive models have been fine-tuned on the train split of our city councils dataset to achieve the best possible results.

To ensure high quality in the assessment of summaries, we have worked with **iMerit.net**, a labor sourcing company, to recruit experienced evaluators from the U.S. and India to perform annotations. The workers are registered on **Appen.com**, a crowdsourcing platform, to complete the tasks and deliver results. A total of three workers from the United States and six workers from India participate in our evaluations, including pilot annotations.11 The workers are asked to watch a video segment, typically 30 minutes or less, read the transcript, and then evaluate the quality of each system summary based on five criteria: *informativeness*, *factuality*, *fluency*, *coherence*, and *redundancy*. These criteria are outlined in Table 4. Importantly, summaries are presented in a random order to prevent workers from making assumptions about quality based on the order they are presented.

11 All workers have excellent English proficiency, with U.S. workers being native speakers. After a pilot annotation, we decide to work with only U.S. workers due to their high quality of work. They are compensated at $27.50/hr.

In Table 5, we present the performance of summarization systems on 200 randomly selected instances. A 5-point Likert scale is used to evaluate each criterion. The scores are then averaged, and standard deviation is also reported. We find that among the five criteria, redundancy is the least of concern. Furthermore, we observe that abstractive systems perform stronger than extractive systems. The best-performing abstractive system is **Pegasus**. We believe its effectiveness is attributed to the pretraining method of masking key sentences within
| EXTRACTIVE | ABSTRACTIVE W/ FINETUNING | PROMPTING | | | | | | |-----------------|-----------------------------|-------------|-----------|-----------|-----------|-----------|-----------| | CRITERION | LEAD | LexRank | HMNet | BART | DialogLM | Pegasus | GPT-3 | | Informativeness | 1.72±1.22 | 1.90±1.15 | 2.44±1.15 | 3.43±1.13 | 3.50±1.15 | 3.65±1.10 | 3.74±1.17 | | Factuality | 1.92±1.35 | 2.42±1.31 | 2.65±1.18 | 3.45±1.19 | 3.58±1.09 | 3.79±1.13 | 3.82±1.09 | | Fluency | 2.89±1.50 | 3.22±1.32 | 2.78±1.32 | 3.58±1.13 | 3.47±1.14 | 3.71±1.23 | 4.52±0.83 | | Coherence | 2.14±1.34 | 2.70±1.39 | 2.74±1.39 | 3.64±1.15 | 3.72±1.13 | 3.88±1.13 | 4.41±1.00 | | Redundancy | 3.79±1.38 | 3.53±1.40 | 3.42±1.40 | 3.54±1.41 | 3.96±1.35 | 4.15±1.25 | 4.57±0.75 | | AVERAGE SCORE | 2.49 | 2.75 | 2.81 | 3.53 | 3.65 | 3.84 | 4.21 | a document and using the remaining sentences to regenerate them, making it particularly well-suited for this task and effective at identifying important content from the transcripts. We find that **GPT-3** achieves the highest overall score of 4.21 according to human evaluations across all criteria. This aligns with recent studies that demonstrate GPT-3's near-human performance in news summarization (Goyal et al., 2022; Zhang et al., 2023). On our meeting dataset, **GPT-3** shows exceptional performance in terms of fluency and coherence, receiving scores of 4.52 and 4.41 respectively. However, its results are less impressive in terms of informativeness and factuality with scores of 3.74 and 3.82, but still on par with the best abstractive model, **Pegasus**. Our findings suggest that meeting summarization solutions should continue to focus on capturing the main discussion points and staying true to the original content. ## 8 Conclusion We created a benchmark dataset from city council meetings and tested various summarization systems including extractive, abstractive with fine-tuning, and GPT-3 with prompting on this task. Our findings indicate that GPT-3 is well received in human assessments, but it falls short in terms of informativeness and factual consistency. Our MeetingBank dataset could be a valuable testbed for researchers designing advanced meeting summarizers and for extracting structure from meeting videos. ## 9 Limitations We present a new dataset for meeting summarization that has the potential to improve the efficiency and effectiveness of meetings. However, we note that the dataset is limited to city council meetings from U.S. cities over the past decade and licensing issues have restricted our ability to include certain city council meetings in the dataset. For example, we contacted the City Council of San Francisco and were informed that they do not allow the redistribution of meeting minutes. Moreover, our dataset does not include non-verbal cues such as eye gazes, gestures and facial expressions, which may make it less suitable for developing summarization systems that rely on these cues. Despite these limitations, we believe that the dataset is of high quality and will be a valuable resource for the development of meeting summarization systems. ## 10 Ethical Considerations The city council meetings included in this dataset are publicly accessible. We obtain meeting videos, minutes documents, and other metadata from publicly available sources. We consult with our legal team and reach out to city councils as necessary to ensure compliance with licensing and data policies. 
We release this dataset to facilitate the development of meeting summarization systems and have made efforts to ensure that the dataset does not include confidential information. Our dataset is intended for research purposes only. ## Acknowledgements We are grateful to the reviewers for their insightful feedback, which has helped enhance the quality of our paper. This research has been partially supported by the NSF CAREER award, \#2303655. ## References Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Samy Bengio and Hervé Bourlard. 2004. Machine Learning for Multimodal Interaction: First International Workshop. Springer Berlin, Heidelberg. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, and Amanda Askell et al. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, and Vasilis Karaiskos et al. 2006. The AMI meeting corpus: A pre-announcement. In *Machine* Learning for Multimodal Interaction, pages 28–39. Springer Berlin Heidelberg. Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2022. SummScreen: A dataset for abstractive screenplay summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8602–8615, Dublin, Ireland. Association for Computational Linguistics. Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021. DialogSum: A real-life scenario dialogue summarization dataset. In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 5062–5074, Online. Association for Computational Linguistics. Sangwoo Cho, Franck Dernoncourt, Tim Ganter, Trung Bui, Nedim Lipka, Walter Chang, Hailin Jin, Jonathan Brandt, Hassan Foroosh, and Fei Liu. 2021. StreamHover: Livestream transcript summarization and annotation. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6457–6474, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Fei Liu, and Dong Yu. 2022. Toward unifying text segmentation and long document summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 106– 118, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, and Sebastian Gehrmann et al. 2022. PaLM: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Ann Clifton, Sravana Reddy, Yongze Yu, Aasish Pappu, Rezvaneh Rezapour, Hamed Bonab, Maria Eskevich, Gareth Jones, Jussi Karlgren, Ben Carterette, and Rosie Jones. 2020. 100,000 podcasts: A spoken English document corpus. 
In Proceedings of the 28th International Conference on Computational Linguistics, pages 5903–5917, Barcelona, Spain (Online). International Committee on Computational Linguistics. Daniel Deutsch, Tania Bedrax-Weiss, and Dan Roth. 2021. Towards question-answering as an automatic metric for evaluating the content quality of a summary. *Transactions of the Association for Computational Linguistics*, 9:774–789. Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. *Journal of Artificial Intelligence Research*. Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics. Alexander R. Fabbri, Wojciech Krysci ´ nski, Bryan Mc- ´ Cann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating Summarization Evaluation. Transactions of the Association for Computational Linguistics, 9:391–409. Jack Flynn. 2022. 28 incredible meeting statistics: Virtual, zoom, in-person meetings and productivity. https://www.zippia.com/advice/meeting-statistics/. Mingqi Gao and Xiaojun Wan. 2022. DialSummEval: Revisiting summarization evaluation for dialogues. In *Proceedings of the 2022 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5693–5709, Seattle, United States. Association for Computational Linguistics. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics. Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making science simple: Corpora for the lay summarisation of scientific literature. arXiv preprint arxiv:2210.09932. Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of gpt-3. *arXiv preprint arXiv:2209.12356*. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Association for Computational Linguistics. Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1419–1436, Online. Association for Computational Linguistics. A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, and C. Wooters. 2003. The ICSI meeting corpus. In *Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing* (ICASSP). Misha Khalman, Yao Zhao, and Mohammad Saleh. 2021. ForumSum: A multi-speaker conversation summarization dataset. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 4592–4599, Punta Cana, Dominican Republic. Association for Computational Linguistics. 
Jia Jin Koay, Alexander Roustai, Xiaojin Dai, Dillon Burns, Alec Kerrigan, and Fei Liu. 2020. How domain terminology affects meeting summarization performance. In *Proceedings of the 28th International* Conference on Computational Linguistics, pages 5689–5695, Barcelona, Spain (Online). International Committee on Computational Linguistics. Jia Jin Koay, Alexander Roustai, Xiaojin Dai, and Fei Liu. 2021. A sliding-window approach to automatic creation of meeting minutes. In *Proceedings of the* 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 68–75, Online. Association for Computational Linguistics. Anastassia Kornilova and Vladimir Eidelman. 2019. BillSum: A corpus for automatic summarization of US legislation. In *Proceedings of the 2nd Workshop* on New Frontiers in Summarization, pages 48–56, Hong Kong, China. Association for Computational Linguistics. Jonas Kratochvil, Peter Polak, and Ondrej Bojar. 2020. Large corpus of Czech parliament plenary hearings. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6363–6367, Marseille, France. European Language Resources Association. Fréjus A. A. Laleye, Gaël de Chalendar, Antonia Blanié, Antoine Brouquet, and Dan Behnamou. 2020. A French medical conversations corpus annotated for a virtual patient dialogue system. In *Proceedings of* the Twelfth Language Resources and Evaluation Conference, pages 574–580, Marseille, France. European Language Resources Association. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Manling Li, Lingyu Zhang, Heng Ji, and Richard J. Radke. 2019. Keep meeting summaries on topic: Abstractive multi-modal meeting summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2190– 2196, Florence, Italy. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411, Barcelona, Spain. Association for Computational Linguistics. Francesco Moramarco, Alex Papadopoulos Korfiatis, Aleksandar Savkov, and Ehud Reiter. 2021. A preliminary study on evaluating consultation notes with post-editing. In *Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval)*, pages 62–68, Online. Association for Computational Linguistics. Gabriel Murray, Giuseppe Carenini, and Raymond Ng. 2010. Generating and validating abstracts of meeting conversations: a user study. In Proceedings of the 6th International Natural Language Generation Conference. Association for Computational Linguistics. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Anna Nedoluzhko, Muskaan Singh, Marie Hledíková, Tirthankar Ghosal, and Ondˇrej Bojar. 2022. ELITR minuting corpus: A novel dataset for automatic minuting from multi-party meetings in English and Czech. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3174– 3182, Marseille, France. European Language Resources Association. Ansong Ni, Zhangir Azerbayev, Mutethia Mutuma, Troy Feng, Yusen Zhang, Tao Yu, Ahmed Hassan Awadallah, and Dragomir Radev. 2021. SummerTime: Text summarization toolkit for non-experts. In *Proceedings of the 2021 Conference on Empirical Methods* in Natural Language Processing: System Demonstrations, pages 329–338, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Tatsuro Oya, Yashar Mehdad, Giuseppe Carenini, and Raymond Ng. 2014. A template-based abstractive meeting summarization: Leveraging summary and source text relationships. In *Proceedings of the 8th* International Natural Language Generation Conference (INLG), pages 45–53, Philadelphia, Pennsylvania, U.S.A. Association for Computational Linguistics. Pinelopi Papalampidi, Frank Keller, Lea Frermann, and Mirella Lapata. 2020. Screenplay summarization using latent narrative structure. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 1920–1933, Online. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In *Proceedings of the Third Conference on* Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1). Steve Renals, Thomas Hain, and Herve Bourlard. 2007. Recognition and understanding of meetings the ami and amida projects. In *2007 IEEE Workshop on Automatic Speech Recognition Understanding (ASRU)*, pages 238–247. Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Tixier, Polykarpos Meladianos, Michalis Vazirgiannis, and Jean-Pierre Lorré. 2018. Unsupervised abstractive meeting summarization with multi-sentence compression and budgeted submodular maximization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 664–674, Melbourne, Australia. Association for Computational Linguistics. Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, and Fei Liu. 2022. Towards abstractive grounded summarization of podcast transcripts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4407–4418, Dublin, Ireland. Association for Computational Linguistics. Paul Tardy, David Janiszek, Yannick Estève, and Vincent Nguyen. 2020. Align then summarize: Automatic alignment methods for summarization corpus creation. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6718– 6724, Marseille, France. European Language Resources Association. Lu Wang and Claire Cardie. 2013. Domain-independent abstract generation for focused meeting summarization. 
In *Proceedings of the 51st Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 1395–1405, Sofia, Bulgaria. Association for Computational Linguistics. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. *Transactions* on Machine Learning Research. Survey Certification. Klaus Zechner. 2002. Automatic summarization of open-domain multiparty dialogues in diverse genres. Computational Linguistics, 28(4):447–485. Guangtao Zeng, Wenmian Yang, Zeqian Ju, Yue Yang, Sicheng Wang, Ruisi Zhang, Meng Zhou, Jiaqi Zeng, Xiangyu Dong, Ruoyu Zhang, Hongchao Fang, Penghui Zhu, Shu Chen, and Pengtao Xie. 2020. MedDialog: Large-scale medical dialogue datasets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9241–9250, Online. Association for Computational Linguistics. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In *International Conference on Machine Learning*, pages 11328–11339. PMLR. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. BERTScore: Evaluating text generation with BERT. In *International Conference on Learning Representations*. Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B. Hashimoto. 2023. Benchmarking large language models for news summarization. *arXiv preprint arxiv:2301.13848*. Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed Awadallah, Dragomir Radev, and Rui Zhang. 2022. Summn: A multi-stage summarization framework for long input dialogues and documents. In *Proceedings of the 60th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1592– 1604, Dublin, Ireland. Association for Computational Linguistics. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In *Proceedings* of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics. Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2022. Dialoglm: Pre-trained model for long dialogue understanding and summarization. In *Proceedings of the 36th AAAI Conference* on Artificial Intelligence. Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir Radev. 2021. Qmsum: A new benchmark for querybased multi-domain meeting summarization. arXiv preprint arxiv:2104.05938. Chenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng. 2021. MediaSum: A large-scale media interview dataset for dialogue summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5927–5934, Online. Association for Computational Linguistics. Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xuedong Huang. 2020. 
A hierarchical network for abstractive meeting summarization with cross-domain pretraining. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 194– 203, Online. Association for Computational Linguistics. ## A Experimental Settings Our implementation details and hyperparameter settings for both extractive systems and abstractive systems with fine-tuning are shown in Table 6. We use **text-davinci-003** version of GPT-3 in our experiments. We follow the convention of Goyal et al. (2022) and use the following prompt asking the model to summarize a transcript in two sentences: Article:{{article}} Summarize the above article in N sentences. ## B Comparison To Icsi/Ami By introducing a corpus, we aim to spur research and development in the area of meeting summarization. However, meetings often pertain to specialized domains and exhibit unique structures. Our preliminary experiments suggest that a BART summarizer fine-tuned for our dataset does not perform optimally on the ICSI/AMI datasets. In particular, the ICSI meetings pose a challenge as they are research seminars conducted by a group of speech researchers, whereas our dataset is collected from city councils in the U.S. Extractive Oracle We use the implementation provided by Paul Tardy: github.com/pltrdy/extoracle_summarization (a) "-length_oracle" sets the output to have the same number of sentences as the reference summary. (b) "-method greedy -length 999" allows the greedy algorithm to select an optimal number of sentences that yield the highest (R1+R2) scores. In this paper, we report results using option (b). LexRank and **TextRank** We use SummerTime's implementation of LexRank and TextRank with default parameters. https://github.com/Yale-LILY/SummerTime For each meeting segment, 2 sentences are extracted. BART The BART model is initialized using bart-large-cnn: The default model parameters are used, with some of the important ones listed below. - max input sequence length: 1,024 tokens - min output length: 56 tokens - max output length: 142 tokens - beam width: 4 - length penalty: 2.0 - initial learning rate: 2.5e-6 Pegasus Pegasus is initialized using google/pegasus-xsum - max sequence length: 512 - max output length: 64 - beam width: 8 - length penalty: 0.6 - initial learning rate: 2.5e-6 Longformer Longformer is initialized using patrickvonplaten/ longformer2roberta-cnn_dailymail-fp16 - max sequence length: 4,098 - min output length: 56 - max output length: 142 - beam width: 1 - length penalty: 1.0 - Initial Learning Rate: 2.5e-6 HMNet We use the implementation of Zhu et al. (2020). HMNet is initialized using HMNet-pretrained - max sequence length: 8300 - min output length: 10 - max output length: 300 - beam width: 6 - initial learning rate: 1e-4 DialogLM We use the original DialogLM source implementation. - max sequence length: 5,632 - min output length: 10 - max output length: 300 - beam width: 6 - Initial Learning Rate: 7e-5 Table 6: Implementation details and hyperparameter settings for extractive systems and abstractive systems. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Section 9 A2. Did you discuss any potential risks of your work? Not applicable. Left blank. ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Section 3 B1. 
Did you cite the creators of artifacts you used? Not applicable. Left blank. ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Section 10 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 10 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Section 10 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Sections 3 and 4 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Sections 3 and 4 ## C ✓ **Did You Run Computational Experiments?** Section 5 C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Not applicable. Left blank. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Supplementary Materials ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 7 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Supplementary Materials D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Section 7 ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Section 7 and Supplementary Materials ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Section 7 D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
ping-etal-2023-uniex
UniEX: An Effective and Efficient Framework for Unified Information Extraction via a Span-extractive Perspective
https://aclanthology.org/2023.acl-long.907
We propose a new paradigm for universal information extraction (IE) that is compatible with any schema format and applicable to a list of IE tasks, such as named entity recognition, relation extraction, event extraction and sentiment analysis. Our approach converts text-based IE tasks into a token-pair problem, which uniformly disassembles all extraction targets into joint span detection, classification and association problems within a unified extractive framework, namely UniEX. UniEX can synchronously encode schema-based prompts and textual information, and collaboratively learn generalized knowledge from pre-defined information using auto-encoder language models. We develop a triaffine attention mechanism to integrate heterogeneous factors including tasks, labels and inside tokens, and obtain the extraction targets via a scoring matrix. Experimental results show that UniEX outperforms generative universal IE models in terms of performance and inference speed on 14 benchmark IE datasets under the supervised setting. Its state-of-the-art performance in low-resource scenarios also verifies the transferability and effectiveness of UniEX.
UniEX: An Effective and Efficient Framework for Unified Information Extraction via a Span-extractive Perspective

Ping Yang1∗ Junyu Lu12∗ Ruyi Gan1∗ Junjie Wang3 Yuxiang Zhang3 Jiaxing Zhang1† Pingjian Zhang2†
1International Digital Economy Academy 2South China University of Technology 3Waseda University
{yangping, lujunyu, ganruyi}@idea.edu.cn wjj1020181822@toki.waseda.jp, joel0495@asagi.waseda.jp pjzhang@scut.edu.cn
*Equal Contribution. †Corresponding Author.

## Abstract

We propose a new paradigm for universal information extraction (IE) that is compatible with any schema format and applicable to a list of IE tasks, such as named entity recognition, relation extraction, event extraction and sentiment analysis. Our approach converts text-based IE tasks into a token-pair problem, which uniformly disassembles all extraction targets into joint span detection, classification and association problems within a unified extractive framework, namely UniEX. UniEX can synchronously encode schema-based prompts and textual information, and collaboratively learn generalized knowledge from pre-defined information using auto-encoder language models. We develop a triaffine attention mechanism to integrate heterogeneous factors including tasks, labels and inside tokens, and obtain the extraction targets via a scoring matrix. Experimental results show that UniEX outperforms generative universal IE models in terms of performance and inference speed on 14 benchmark IE datasets under the supervised setting. Its state-of-the-art performance in low-resource scenarios also verifies the transferability and effectiveness of UniEX.

## 1 Introduction

Information extraction (IE) aims at automatically extracting structured information from unstructured textual sources, covering a wide range of subtasks such as named entity recognition, relation extraction, semantic role labeling, and sentiment analysis (Muslea et al., 1999; Grishman, 2019). However, the variety of subtasks builds isolation zones between tasks, each of which forms its own dedicated model. Figure 1 (a) shows that popular IE approaches handle structured extraction by adding task-specific layers on top of pre-trained language models (LMs) and subsequently fine-tuning the conjoined model (Lample et al., 2016; Luo et al., 2020; Wei et al., 2020; Ye et al., 2022).

Figure 1: (a) Task-specific IE methods: isolated structures and schemas. (b) Typical generative universal IE: unified modeling via text or structure generation. (c) Our extractive universal IE: unified modeling via the triaffine attention mechanism and auto-encoder LMs.

These isolated architectures and the resulting fragmentation prevent enhancements from one task from being applied to another, hinder the effective sharing of latent semantics such as label names, and suffer from inductive bias in transfer learning (Paolini et al., 2020). With powerful capabilities in knowledge sharing and semantic generalization, large-scale LMs bring the opportunity to handle multiple IE tasks using a single framework.
As shown in Fig 1 (b), by developing sophisticated schema-based prompt and structural generation specification, the IE tasks can be transformed into text-to-text and text-to-structure formats via large-scale generative LMs (Dong et al., 2019; Paolini et al., 2020; Lu et al., 2022) such as T5 (Raffel et al., 2020a). Moreover, the universal IE frameworks can learn general knowledge from multi-source prompts, which is 16424 beneficial for perceiving unseen content in lowresource scenarios. Despite their success, these generative frameworks suffer from their inherent problems, which limit their potential and performance in universal modeling. Firstly, the schemabased prompt and contextual information are synthetically encoded for generating the target structure, which is not conducive to directly leveraging the position information among different tokens. Secondly, the generative architecture utilizes the token-wise decoder to obtain the target structure, which is extremely time-consuming. The aforementioned issues prompt us to rethink the foundation of IE tasks. Fundamentally, we discover that the extraction targets of different IE tasks involve the determination of semantic roles and semantic types, both of which can be converted into span formats by the correlation of the inside tokens in the passage. For instance, an entity type is the boundary detection and label classification of a semantic role, while a relation type can be regarded as the semantic association between specific semantic roles. From this perspective, the IE tasks can be decoded using a span-extractive framework, which can be uniformly decomposed as several atomic operations: i) Span Detection, which locates the boundaries of the mentioned semantic roles; ii) Span Classification, which recognizes the semantic types of the semantic roles; iii) Span Association, which establishes and measures the correlation between semantic roles to determine semantic types. According to the above observation, we propose a new paradigm for universal IE, called Unified Extraction model (UniEX) as Figure 1 (c). Specifically, we first introduce a rule-based transformation to bridge various extraction targets and unified input formats, which leverages task-specific labels with identifiers as the schema-based prompt to learn general IE knowledge. Then, recent works (Liu et al., 2019a; Yang et al., 2022) state that the auto-encoder LMs with bidirectional context representations are more suitable for natural language understanding. Therefore, We employ BERT-like LMs to construct an extractive architecture for underlying semantic encoding. Finally, inspired by the successful application of span-decoder and biaffine network to decode entity and relation with a scoring matrix (Yu et al., 2020b; Li et al., 2020; Yuan et al., 2022), we introduce a triaffine attention mechanism for structural decoding, which jointly considers high-order interactions among multiple factors, including tasks, labels and inside tokens. Each triaffine scoring matrix is assigned to a demand-specific prompt for obtaining span-extractive objectives. Through extensive experiments on several challenging benchmarks of 4 main IE tasks (entity/relation/event/sentiment extraction), we demonstrate that compared with the state-of-the-art universal IE models and task-specific low-resource approaches, our UniEX achieves a substantial improvement in performance and efficiency with supervised, few-shot and zero-shot settings. 
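To make the span-level decomposition above concrete, the sketch below is our own illustration (not code from the paper): it rewrites one entity, one relation and one event target as the three atomic operations, reusing the running examples later listed in Appendix A.4 with their 1-based token offsets; the exact encoding of association pairs is a simplifying assumption.

```python
# Illustrative only: extraction targets expressed as the three atomic span
# operations (detection, classification, association) over 1-based token offsets.

# Entity extraction: "Arafat goes to Nablus ahead of cabinet meeting ."
entity_target = {
    "span_detection":      [(1, 1), (4, 4)],                  # "Arafat", "Nablus"
    "span_classification": {(1, 1): "Person", (4, 4): "Location"},
    "span_association":    {},                                 # no pairing needed for entities
}

# Relation extraction: "In 1752 , flagmaker Betsy Ross was born in Philadelphia ."
relation_target = {
    "span_detection":      [(5, 6), (10, 10)],                 # "Betsy Ross", "Philadelphia"
    "span_classification": {(5, 6): "Person", (10, 10): "Location"},
    "span_association":    {((5, 6), (10, 10)): "live in"},    # subject/object pair -> relation type
}

# Event extraction: "Sergeant Chuck Hagel was seriously wounded twice in Vietnam ."
event_target = {
    "span_detection":      [(2, 3), (6, 6), (9, 9)],           # "Chuck Hagel", "wounded", "Vietnam"
    "span_classification": {(2, 3): "Victim", (6, 6): "Trigger", (9, 9): "Place"},
    "span_association":    {((6, 6), (2, 3)): "Injure",        # trigger/argument pairs -> event type
                            ((6, 6), (9, 9)): "Injure"},
}
```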
Our main contributions are summarized as: - We develop an efficient and effective universal IE paradigm by converting all IE tasks into joint span classification, detection and association problem. - We introduce UniEX, a new unified extractive framework that utilizes the extractive structures to encode the underlying information and control the schema-based span decoding via the triaffine attention mechanism. - We apply our approach in low-resource scenarios, and significant performance improvements suggest that our approach is potential for attaching label information to generalized objects and transfer learning. Our code will be made publicly available. ## 2 Related Work Unified NLP Task Formats Since the prompttuning can improve the ability of language models to learn common knowledge and fix the gap across different NLP tasks, recent studies show the necessity of unifying all NLP tasks in the format of a natural language response to natural language input (Raffel et al., 2020b; Sanh et al., 2022; Wei et al., 2021). Previous unified frameworks usually cast parts of text problems as question answering (McCann et al., 2018) or span extraction (Keskar et al., 2019) tasks. TANL (Paolini et al., 2020) frames the structured prediction tasks as a translation task between augmented natural languages. By developing a text-to-text architecture, T5 (Raffel et al., 2020b) makes prompts to effectively distinguish different tasks and provide prior knowledge for multitask learning. UIE (Lu et al., 2022) uniformly models IE tasks with a textto-structure framework, which encodes different extraction structures via a structured extraction language, adaptively generates varying targets via a structural schema instructor. Although effective, such methods focus on generative styles and thus cannot be adapted to the knowledge selection for vast label-based models. It motivates us to design an efficient and effective universal IE method, where we develop unified Extraction (EX) formats and triaffine attention mechanism. Label Information Label semantics is an important information source, which carries out the related meaning induced from the data (Hou et al., 2020; Ma et al., 2022a; Mueller et al., 2022). The LTapNet (Hou et al., 2020) introduces the collapsed dependency transfer mechanism to leverage the semantics of label names for few-shot tagging tasks. LSAP (Mueller et al., 2022) improves the generalization and data efficiency of few-shot text classification by incorporating label semantics into the pre-training and fine-tuning phases of generative LMs. Together, these successful employments of label knowledge in low-resource setting motivates us to introduce label semantics into our unified inputs to handle few-shot and zero-shot scenarios. ## 3 Approaches Generally, there are two main challenges in universally modeling different IE tasks via the extractive architecture. Firstly, IE tasks are usually demand-driven, indicating that each pre-defined schema should correspond to the extraction of specific structural information. Secondly, due to the diversity of IE tasks, we need to resolve appropriate structural formats from the output sequence to accommodate different target structures, such as entity, relation and event. In this section, we outline how the UniEX exploits a shared underlying semantic encoder to learn the prompt and text knowledge jointly, and conduct various IE tasks in a unified text-to-structure architecture via the triaffine attention mechanism. 
## 3.1 The Uniex Framework 3.1.1 Unified Input Formally, given the task-specific pre-defined schema and texts, the universal IE model needs to adaptively capture the corresponding structural information from the text indicated by the taskrelevant information. To achieve this, we formulate a unified input format consisting of task-relevant schema and text, as shown in Figure 2. To promote the sharing of generalized knowledge across different IE tasks, we choose to simply use the task-based and label-based schemas as prompt rather than elaborate questions, fill-in blanks or structural indicators. To achieve proper prompt representation, we introduce several special tokens [D-TOK], [C-TOK] and [A-TOK] as identifiers, uniformly replacing the corresponding schema representations in the input sentence. Here, [D-TOK] inherits the ability of [CLS] to capture the global semantic information. [C-TOK] and [A-TOK] inherit the ability of [SEP], thus remaining to use token representation to symbolize the connotation of subsequent schemas. Consider an input set denoted as (*s, x*), includes the following: i) taskbased schema sd for span detection, ii) label-based schemas sc for span classification and sa for span association, iii) one passage x = {x1*, . . . , x*Nx}. The input sentence with Ns = Nsd + Nsc + Nsa schemas and Nx inside tokens can be denoted as: $$x_{imp}=\left\{\left[\text{D-TOK}\right]^{i}s_{d}^{i}\right\}_{i=1}^{N_{sd}}\left\{\left[\text{C-TOK}\right]^{i}s_{c}^{i}\right\}_{i=1}^{N_{sc}}\tag{1}$$ $$\left\{\left[\text{A-TOK}\right]^{i}s_{a}^{i}\right\}_{i=1}^{N_{sa}}\text{[SEP]}x\text{[SEP]}.$$ ## 3.1.2 Backbone Network In our UniEX framework, we employ the BERTlike LMs as the extractive backbone, such as RoBERTa (Liu et al., 2019b) and ALBERT (Lan et al., 2020), to integrate the bidirectional modeled input xinp. Note that the unified input contains multiple labels, resulting in undesired mutual influence across different labels and leading to a misunderstanding of the correspondence between the label and its structural format during the decoding phase. Meanwhile, in some tasks, the large number of labels allows schemas to take up excessive locations, squeezing the space for text. Referring to the embedding methods in the UniMC (Yang et al., 2022), we address these issues from several perspectives, including position id and attention mask. Firstly, to avoid the information interference caused by the mutual interaction within label-based schemas, we constantly update the position id pos to tell apart intra-information in the label. In this way, the position information of label-relevant tokens is coequally treated based on their position embedding, and the refreshed location information for the first token of each label-based schema avoids the natural increase of the location id. Then, as shown in Figure 3, due to the detailed correlation among schema-based prompts in the IE tasks, we further ![3_image_0.png](3_image_0.png) 0 x 1 x 2 x 3 x 4 x 5 x 6 x ![3_image_1.png](3_image_1.png) introduce a schema-based attention mask matrix M*mask* in the self-attention calculation to control the flow of labels, ensuring that unrelated labels are invisible to each other. In particular, different entity, relation and event types are invisible to each other, while relation and event types can contact their bound entity types. 
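The following is a minimal sketch of this input construction and label-wise masking, under our own simplifying assumptions: whitespace-split words stand in for subword tokenization, the identifier names follow Eq. (1), and the mask only blocks attention between different schema blocks — the paper additionally keeps the task-level [D-TOK] (CLS-like) token visible to all schemas and lets relation/event types attend to their bound entity types.

```python
import numpy as np

def build_unified_input(task_schemas, class_schemas, assoc_schemas, passage_tokens):
    """Assemble the unified input of Eq. (1) and a simplified schema-based attention mask.

    Every schema is prefixed by its identifier token ([D-TOK]/[C-TOK]/[A-TOK]); unrelated
    label schemas are made mutually invisible, while all schemas and the passage can
    attend to each other.
    """
    blocks = [("[D-TOK]", s) for s in task_schemas] \
           + [("[C-TOK]", s) for s in class_schemas] \
           + [("[A-TOK]", s) for s in assoc_schemas]
    tokens, group = [], []          # group[i]: -1 for passage/[SEP], k for the k-th schema block
    for k, (ident, schema) in enumerate(blocks):
        for tok in [ident] + schema.split():
            tokens.append(tok)
            group.append(k)
    for tok in ["[SEP]"] + list(passage_tokens) + ["[SEP]"]:
        tokens.append(tok)
        group.append(-1)

    n = len(tokens)
    mask = np.ones((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            # tokens from *different* schema blocks cannot attend to each other
            if group[i] >= 0 and group[j] >= 0 and group[i] != group[j]:
                mask[i, j] = False
    return tokens, mask

# Example with the CoNLL03 labels from Appendix A.4:
tokens, mask = build_unified_input(
    task_schemas=["Entity Extraction"],
    class_schemas=["Location", "Organization", "Person", "Miscellaneous"],
    assoc_schemas=[],
    passage_tokens="Arafat goes to Nablus ahead of cabinet meeting .".split(),
)
```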
Furthermore, we take the encoded hidden vector from the last Transformer-based layer, where we combine the special tokens part as the schema representations Hs ∈ R Ns×dand the passage tokens part as the text representations Hx ∈ R Nx×d with hidden size d. Hs, Hx = Encoder (xinp*, pos, M*mask) (2) ## 3.1.3 Triaffine Attention For Span Representation After obtaining the schema representations and text representations from the auto-encoder LM, the following challenge is how to construct a unified decoding format that is compatible with different IE structures, with the goal of adaptively exploiting schemas to control various extraction targets. Take the example in Figure 4, for the event extraction system, we locate the start and end indices of the words boundary "Dariues", "Ferguson" and "injure" as the semantic roles, categorized as the Agent, *Victim* and *Trigger* semantic types (entity/trigger) respectively, and collectively to the *Injure* semantic type (event). For the relation extraction system, we associate the semantic roles "Betsy Ross" and "Philadelphia" by attaching their intersecting information to the *Live in* semantic type (relation). In conjunction with the discussions in the Introduction, we consider two elements for universally modeling IE tasks as joint span detection, classification and association: I) Different extraction targets are presented in the form of span, relying on unified information carriers to accommodate various semantic roles and semantic types. II) The spanextractive architecture is necessary for establishing schema-to-text information interaction, which can adaptively extract schema-related semantic information from text. For the first proposal, we introduce two information carriers for decoding heterogeneous IE structures in a unified span format: 1. **Structural Table** indicates a rank-2 scoring matrix corresponding to a particular schema, which accommodates the semantic information required ![4_image_0.png](4_image_0.png) ## For Span-Extractive Parsing. 2. **Spotting Designator** Indicates The Location Of spans in the preceding structural table, which represent extraction targets corresponding to the particular schema. For the second proposal, we attempt to explore the internal interaction of the inside tokens by converting the text representation into span representation. Then, we apply two separate FFNs to create different representations (Hs x / He x ) for the start/end positions of the inside tokens. To further interact such multiple heterogeneous factors simultaneously, we define the deep triaffine transformation with weighted matrix W ∈ R d×d×d, which apply the triaffine attention to aggregate the schema-wise span representations by considering schema as queries as well as start/end of the inside tokens as keys and values. In this process, the triaffine transformation injects each schema information into the span representations and resolves the corresponding extraction targets. It creates a Ns × Nx × Nx scoring tensor S by calculating continuous matrix multiplication as following: $$\begin{array}{c}{{H_{x}^{s}=\mathrm{FFN}_{s}\left(H_{x}\right),}}\\ {{H_{x}^{e}=\mathrm{FFN}_{e}\left(H_{x}\right),}}\\ {{S=\sigma(\mathcal{W}\times_{1}H_{s}\times_{2}H_{x}^{s}\times_{3}H_{x}^{e}),}}\end{array}$$ $$(3)$$ where ×k is the matrix multiplication between input tensor and dimension-k of W. σ(∗) denotes the Sigmoid activation function. 
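A minimal PyTorch sketch of the triaffine scoring in Eq. (3) is given below; it is our own reading of the formula, batching is omitted, and the depth/activation of FFN_s and FFN_e as well as the weight initialization are assumptions rather than details taken from the released code.

```python
import torch
import torch.nn as nn

class TriaffineScorer(nn.Module):
    """Maps schema representations H_s (Ns, d) and text representations H_x (Nx, d)
    to a scoring tensor S of shape (Ns, Nx, Nx), as in Eq. (3)."""

    def __init__(self, d: int):
        super().__init__()
        self.ffn_start = nn.Linear(d, d)   # FFN_s: start-position view of the inside tokens
        self.ffn_end = nn.Linear(d, d)     # FFN_e: end-position view of the inside tokens
        self.W = nn.Parameter(torch.empty(d, d, d))
        nn.init.xavier_uniform_(self.W)

    def forward(self, H_s: torch.Tensor, H_x: torch.Tensor) -> torch.Tensor:
        H_start = self.ffn_start(H_x)      # (Nx, d)
        H_end = self.ffn_end(H_x)          # (Nx, d)
        # S[r, p, q] = sigmoid( sum_{a,b,c} W[a,b,c] * H_s[r,a] * H_start[p,b] * H_end[q,c] )
        scores = torch.einsum("abc,ra,pb,qc->rpq", self.W, H_s, H_start, H_end)
        return torch.sigmoid(scores)

# Toy usage: 6 schemas, 40 passage tokens, hidden size 768 -> scores in [0, 1] of shape (6, 40, 40)
scorer = TriaffineScorer(d=768)
S = scorer(torch.randn(6, 768), torch.randn(40, 768))
```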
At this point, the tensor S provides a mapping score from the schema to internal spans of the text, where each rank-2 scoring matrix corresponding to a specific schema is the structural table. For the r-th structural table, the affine score of each span (*p, q*) that starts with p and ends with q can be denoted as S*r,p,q* ∈ [0, 1], while the affine score of a valid span in the structural table is the spotting designator. We divide all Ns structural tables into three parts according to the distribution of the schemas, among them, Nsd for span detection, Nsc for span classification, and Nsa for span association. For different schemas, we develop their spotting designators by following strategies: Span Detection: In particular, we usually use the structural table derived from the task-based schema representation for span detection, which can be obtained from the hidden state of the special token [CLS]. Since the [CLS] token is mutually visible to other schemas, the task-based schema representation can capture the span-related semantic information of the semantic roles from the task and label names. The spotting designators identify the start and end indices of the i-th semantic roles as (si, ei) using the axes. Span Classification: The label-based schema representations for entity/argument/trigger/event types are used for span classification. The spotting designators are identical with the span positions of the semantic roles, indicating that the semantic type of the i-th span can be identified by attaching to the (si, ei) position in the corresponding structural table. Span Association: The label-based schema representations for relation/sentiment types are used for span association. In this process, we model the potentially related semantic roles and correlate them to corresponding semantic types. The spotting designators locate at two interleaved positions associated with the semantic roles of the semantic type, that is, for the i-th and j-th spans, the extraction target is transformed to the identification of the (si, sj ) and (ei, ej ) positions in the corresponding structural table. Note that all span values in the structural table for label-based schemas are masked except for the spotting designators, because we only need to observe the semantic types and semantic association among the detected spans. Specifically, the spotting designators for span detection are the spans with q ≥ p, and the spotting designators for span classification and association are defined by the position consistency and interleaving of valid spans ## 3.2 Ex Training Procedure Given the input sentence xinp, We uniformly reformat different output targets as a rank-3 matrix Y , sharing the same spotting designators as the triaffine scoring matrix. Similarly, we denote the value of each valid span as Yr,p,q ∈ {0, 1}, with Y*r,p,q* = 1 denoting the desirable span for a groundtruth and Y*r,p,q* = 0 denoting the meaningless span for semantic role or semantic type. Hence it is a binary classification problem and we optimize our models with binary cross-entropy: $${\rm BCE}(y,\hat{y})=-(y\cdot\log(\hat{y})+(1-y)\cdot\log(1-\hat{y})),\tag{4}$$ $$\mathcal{L}=\sum_{r=1}^{N_{x}}\sum_{p=1}^{N_{x}}\sum_{q=1}^{N_{x}}\text{BCE}\left(Y_{r,p,q},S_{r,p,q}\right).\tag{5}$$ ## 4 Experiments To verify the effectiveness of our UniEX, we conduct extensive experiments on different IE tasks with supervised (high-resource), few-shot and zeroshot (low-resource) scenarios. 
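Before turning to the experiments, the objective of §3.2 and the decoding of spotting designators can be summarized in the following sketch; it is a hedged illustration under our own assumptions (a boolean mask over valid cells, a 0.5 decision threshold, and an argmax over class tables), not the paper's released training or decoding code.

```python
import torch
import torch.nn.functional as F

def ex_training_loss(S, Y, designator_mask):
    """Binary cross-entropy of Eqs. (4)-(5), restricted to valid spotting-designator cells.

    S, Y, designator_mask all have shape (Ns, Nx, Nx): S holds triaffine scores in [0, 1],
    Y holds 0/1 gold labels, and the boolean designator_mask marks the cells that are scored
    (spans with q >= p for detection; (s, e) cells for classification; interleaved
    (s_i, s_j) / (e_i, e_j) cells for association).
    """
    return F.binary_cross_entropy(S[designator_mask], Y[designator_mask].float())

def decode_entities(S_detect, S_class, class_names, threshold=0.5):
    """Toy entity decoding: detected spans from the task-level structural table,
    each assigned the highest-scoring class table at its (start, end) cell."""
    Nx = S_detect.size(-1)
    entities = []
    for p in range(Nx):
        for q in range(p, Nx):                      # only spans with q >= p are valid
            if S_detect[p, q] > threshold:
                best = int(torch.argmax(S_class[:, p, q]))
                entities.append((p, q, class_names[best]))
    return entities
```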
## 4.1 Experimental Setup For the supervised setting, we follow the preparation in TANL (Paolini et al., 2020) and UIE (Lu et al., 2022) to collect 14 publicly available IE benchmark datasets and cluster the wellrepresentative IE tasks into 4 groups, including entity, relation, event and structured sentiment extraction. In particular, for each group, we design a corresponding conversion regulation to translate raw data into the unified EX format. Then, for the few-shot setting, we adopt the popular datasets FewNERD (Ding et al., 2021) and Cross-Dataset (Hou et al., 2020) in few-shot entity extraction and domain partition as (Ma et al., 2022b). For the zero-shot setting, we use the common zero-shot relation extraction datasets WikiZSL (Chen and Li, 2021) and FewRel (Han et al., 2018) and follow the same process of data and label splitting as (Chia et al., 2022). Following the same evaluation metrics as all previous methods, we use span-based offset Micro-F1 with strict match criteria as the primary metric for performance comparison. Please refer to Appendix A for more details on dataset descriptions, unified EX input formats, metrics and training implementation. ## 4.2 Experiments On Supervised Settings In our experiment, under the high-resource scenario, we compare our approach with the state-ofthe-art generative universal IE architectures that provide a universal backbone for IE tasks based on T5 (Raffel et al., 2020a), including TANL (Paolini et al., 2020) and UIE (Lu et al., 2022). For a fair comparison, We only consider results without exploiting large-scale contexts and external knowledge beyond the dataset-specific information, and present the average outcomes if the baseline is conducted in multiple runs. The main results of UniEX and other baselines on 14 IE datasets are shown in Table 1. We can observe that: 1) By modeling IE as joint span detection, classification and association, and encoding the schema-based prompt and input texts with the triaffine attention mechanism, UniEX provides an effective universal extractive backbone for all IE tasks. The UniEX outperforms the universal IE models with approximate backbone sizes, achieving new state-of-the-art performance on almost all tasks and datasets. 2) The introduction of label-based schema facilitates the model learning task-relevant knowledge, while the triaffine scoring matrix establishes the correspondence between each schema and extraction targets. Obviously, the UniEX can better capture and share label semantics than using generative structures to encode underlying information. Meanwhile, triaffine transformation is a unified and cross-task adaptive operation, precisely controlling where to detect and which to associate in all IE tasks. Compared with the TANL and UIE, our approach achieves significant performance improvement on most datasets, with nearly 1.36% and 1.52% F1 on average, respectively. ## 4.3 Experiments On Low-Resource Scenarios To verify the generalization and transferability of UniEX in low-resource scenarios, we evaluate models under few-shot and zero-shot settings, respectively. In order to reduce the influence of noise caused by random sampling on the experiment results, we repeat the data/label selection processes for five different random seeds and report the averaged experiment results as previous works (Hou et al., 2020; Chia et al., 2022). We use the BERTbase (Devlin et al., 2019) as the UniEX backbone to align with other low-resource results. 
Firstly, we compare the UniEX with the competitive few-shot entity extraction models. For FewNERD, we compare the proposed approach to De- | Task | Dataset | Domain | Metric | TANL | UniEX | UIE | | |----------------------|-------------------|----------------------|--------------------|--------|---------|-------|-------| | 220M | 132M | 770M | | | | | | | ACE04 | News, Speech | Entity F1 | - | - | 86.52 | 87.12 | | | ACE05-Ent | News, Speech | Entity F1 | 84.90 | 85.96 | 85.52 | 87.02 | | | CoNLL03 | News | Entity F1 | 91.70 | 92.13 | 92.17 | 92.65 | | | GENIA | Biology | Entity F1 | 76.40 | 76.69 | - | - | | | Entity | | | | | | | | | Extraction | ACE05-Rel | News, Speech | Relation Strict F1 | 63.70 | 63.64 | 64.68 | 66.06 | | CoNLL04 | News | Relation Strict F1 | 71.40 | 71.79 | 73.07 | 73.40 | | | SciERC | Scientific | Relation Strict F1 | - | - | 33.36 | 38.00 | | | ADE | Medicine | Relation Strict F1 | 80.60 | 83.81 | - | - | | | Relation Extraction | ACE05-Evt | News, Speech | Event Trigger F1 | 68.40 | 70.86 | 72.63 | 74.08 | | Event | Event Argument F1 | 47.60 | 50.67 | 54.67 | 53.92 | | | | Extraction | CASIE | Cybersecurity | Event Trigger F1 | - | - | 68.98 | 71.46 | | Event Argument F1 | - | - | 60.37 | 62.91 | | | | | 14-res | Review | Sentiment Triplet F1 | - | - | 73.78 | 74.77 | | | 14-lap | Review | Sentiment Triplet F1 | - | - | 63.15 | 65.23 | | | 15-res | Review | Sentiment Triplet F1 | - | - | 66.10 | 68.58 | | | 16-res | Review | Sentiment Triplet F1 | - | - | 73.87 | 76.02 | | | Sentiment Extraction | | | | | | | | Table 2: F1 scores with standard deviations on FewNERD. † denotes the results reported from Ding et al. (2021). | Intra | Inter | | | | | | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------|----------|-----------|--------|-------|--------| | Models | 1∼2-shot | 5∼10-shot | 1∼2-shot | 5∼10-shot | | | | | 5 way | 10 way | 5 way | 10 way | 5 way | 10 way | 5 way | 10 way | | ProtoBERT† 23.45±0.92 19.76±0.59 41.93±0.55 34.61±0.59 44.44±0.11 39.09±0.87 58.80±1.42 53.97±0.38 NNShot† 31.01±1.21 21.88±0.23 35.74±2.36 27.67±1.06 54.29±0.40 46.98±1.96 50.56±3.33 50.00±0.36 ESD 41.44±1.16 32.29±1.10 50.68±0.94 42.92±0.75 66.46±0.49 59.95±0.69 74.14±0.80 67.91±1.41 DecomMeta 52.04±0.44 43.50±0.59 63.23±0.45 56.84±0.14 68.77±0.24 63.26±0.40 71.62±0.16 68.32±0.10 UniEX 53.92±0.39 45.67±0.53 63.26±0.14 56.65±0.27 69.37±0.19 64.53±0.05 73.79±0.32 69.63±0.45 | | | | | | | | comMeta (Ma et al., 2022b), ESD (Wang et al., 2022), and methods from (Ding et al., 2021), e.g., ProtoBERT, NNShot. For Cross-Dataset, we compare the UniEX to DecomMeta (Ma et al., 2022b) and baselines reported by (Hou et al., 2020), e.g., TransferBERT, Matching Network, ProtoBERT and L-TapNet+CDT. Table 2 and 3 illustrates the main results on FewNERD and Cross-Dataset of our approach alongside those reported by previous methods. It can be seen that UniEX achieves the best performance under different type granularity and domain divisions, and outperforms the prior methods with a large margin. 
Compare with DecomMeta on CrossDataset, UniEX achieves a performance improvement up to 6.94% and 5.63% F1 scores on average in 1-shot and 5-shot, which demonstrates the effectiveness of our approach in learning general IE knowledge. It indicates that even without pretraining on large-scale corpus, our approach can still sufficiently excavate the semantic information related with objective entities from label names, which enhances the understanding of task-specific information when data is extremely scarce. Secondly, we compare UniEX with the latest baselines TableSequence (Wang and Lu, 2020) and RelationPrompt (Chia et al., 2022) on zero-shot relation triplet extraction task for Wiki-ZSL and Few-Rel datasets in Table 4. In both single-triplet and multi-triplet evaluation, UniEX consistently outperforms the baseline models in terms of Accuracy and overall F1 score respectively, which demonstrates the ability of our approach to handle unseen labels. Although we observe a lack of advantage in recall score for multi-triplet evaluation, the significant improvement in precision allowed our approach to achieve a balanced precision-recall ratio. The reason for such difference is probably | 1-shot | 5-shot | | | | | | | | |------------------------------|-----------------------------------------------------------------------------------------|----------------------------------|------------|-----------|------------|-----------|-----------------------|-------| | Models | News | Wiki | Social | Mixed | News | Wiki | Social | Mixed | | TransferBERT‡ | 4.75±1.42 | 0.57±0.32 | 2.71±0.72 | 3.46±0.54 | 15.36±2.81 | 3.62±0.57 | 11.08±0.57 35.49±7.60 | | | Matching Network‡ 19.50±0.35 | 4.73±0.16 | 17.23±2.75 15.06±1.61 19.85±0.74 | 5.58±0.23 | 6.61±1.75 | 8.08±0.47 | | | | | ProtoBERT‡ | 32.49±2.01 | 3.89±0.24 | 10.68±1.40 | 6.67±0.46 | 50.06±1.57 | 9.54±0.44 | 17.26±2.65 13.59±1.61 | | | L-TapNet+CDT‡ | 44.30±3.15 12.04±0.65 20.80±1.06 15.17±1.25 45.35±2.67 11.65±2.34 23.30±2.80 20.95±2.81 | | | | | | | | | DecomMeta | 46.09±0.44 17.54±0.98 25.14±0.24 34.13±0.92 58.18±0.87 31.36±0.91 31.02±1.28 45.55±0.90 | | | | | | | | | UniEX | 58.51±0.14 18.20±0.45 34.67±0.25 39.28±0.55 66.08±0.42 29.68±0.32 38.64±1.29 54.25±0.35 | | | | | | | | Table 3: F1 scores with standard deviations on Cross-Dataset. ‡ denotes the results reported from Hou et al. (2020). | Dataset | Model | Single-Triplet | Multi-Triplet | | |-------------------------|----------------|-------------------|-------------------|------| | Acc. | P. | R. | F1 | | | TableSequence | 14.47 | 43.68 | 3.51 | 6.29 | | Wiki-ZSL RelationPrompt | 16.64 | 29.11 31.00 30.01 | | | | UniEX | 26.84 | 58.22 25.85 34.94 | | | | TableSequence | 11.82 | 15.23 | 1.91 | 3.40 | | FewRel | RelationPrompt | 22.27 | 20.80 24.32 22.34 | | | UniEX | 27.30 | 44.46 15.72 23.13 | | | | Dataset | CoNLL03 CoNLL04 | CASIE | 16-res | | | |-----------|-------------------|---------|-----------------------|-------|-------| | F1 | Ent | Rel-S | Evt-Tri Evt-Arg Rel-S | | | | W/O SAM | 28.47 | 0 | 4.03 | 0 | 0 | | W/O TriA | 58.58 | 49.40 | 6.97 | 1.51 | 29.77 | | W/O Label | 92.59 | 70.94 | 71.18 | 62.29 | 74.64 | | UniEX | 92.65 | 73.40 | 71.46 | 62.91 | 76.02 | because the directional matching in the triaffine transformation will tend to guide the model to predict more credible targets. ## 4.4 Ablation Study In this section, we intend to verify the necessity of key components of the UniEX, including the flow controlling and triaffine transformation. 
Table 5 shows ablation experiment results of UniEX on four downstream tasks. W/O SAM: removing the schema-based attention mask matrix that controls the flowing of labels. We find that model performance is almost zero on many Table 6: The efficiency comparison of UIE and UniEX with batch_size=1. (×k) is the relative inference-speed. tasks, which demonstrates the importance of eliminating intra-information of labels. AMM makes the labels unreachable to each other, effectively avoiding the mutual interference of label semantics. W/O TriA: replacing the triaffine transformation with the multi-head selection network, which multiplies the schema and the head-to-tail span of the text respectively, and then replicates and adds them to get the scoring matrix. The significant performance decline demonstrates the important role of triaffine attention mechanism in establishing dense correspondence between schemas and text spans. W/O Label: replacing the label names with the special token [unused n], which eliminates label semantics while allowing the model to still distinguish between different labels. We find a slight degradation of model performance in small datasets CoNLL03 and 16-res, indicating that the prior knowledge provided by label names can effectively compensate for the deficiency of training data. As the correspondence between schema and extraction targets is not affected, model performance in large datasets tends to stabilize. ## 4.5 Efficiency Analysis To verify the computation efficiency of our approach on universal IE, we compare inferencespeed with UIE (Lu et al., 2022) on the four standard datasets mentioned in section 4.4. As shown in Table 6, we can find that since generating the target structure is a token-wise process, the inferencespeed of UIE is slow and limited by the length of the target structure. On the contrary, UniEX can | Model | CoNLL03 | CoNLL04 | CASIE | 16-res | |----------|------------|-------------|-------------|-------------| | (sent/s) | (sent/s) | (sent/s) | (sent/s) | | | UIE | 2.1(×1.0) | 1.0(×1.0) | 1.1(×1.0) | 1.4(×1.0) | | UniEX | 16.5(×7.9) | 16.6(×16.6) | 14.9(×13.5) | 19.7(×14.1) | decode all the target structures at once from the scoring matrices obtained by triaffine transformation, with an average speedup ratio of 13.3 to UIE. ## 5 Conclusion In this paper, we introduce a new paradigm for universal IE by converting all IE tasks into joint span detection, classification and association problems with a unified extractive framework. UniEX collaboratively learns the generalized knowledge from schema-based prompts and controls the correspondence between schema and extraction targets via the triaffine attention mechanism. Experiments on both supervised setting and low-resource scenarios verify the transferability and effectiveness of our approaches. ## Limitations In this paper, our main contribution is an effective and efficient framework for universal IE. We aim to introduce a new unified IE paradigm with extractive structures and triaffine attention mechanism, which can achieve better performance in a variety of tasks and scenarios with more efficient inferencespeed. However, it is non-trivial to decide whether a sophisticated and artificial prompt is required for complex datasets and large label sets. In addition, we only compare with limited baselines with specific datasets configurations when analyzing the performance of the UniEX in supervised, few-shot and zero-shot settings. 
In experiments, we implement only a few comparative experiments between BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) due to the limit of computational resources. ## Ethical Considerations As an important domain of natural language processing, information extraction is a common technology in our society. It is necessary to discuss the ethical influence when using the extraction models (Leidner and Plachouras, 2017). In this work, We develop a new universal IE framework, which enhances the generalization ability in various scenarios. As discussed (Schramowski et al., 2019, 2022; Blodgett et al., 2020), pre-trained LMs might contain human-made biases, which might be embedded in both the parameters and outputs of the open-source models. In addition, we note the potential abuse of universal IE models, as these models achieve excellent performance in various domains and settings after adapting to pre-training on largescale IE datasets, which allows the models to be integrated into applications often without justification. We encourage open debating on its utilization, such as the task selection and the deployment, hoping to reduce the chance of any misconduct. ## References Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in nlp. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 5454–5476. Chih-Yao Chen and Cheng-Te Li. 2021. Zs-bert: Towards zero-shot relation extraction with attribute representation learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3470–3479. Yew Ken Chia, Lidong Bing, Soujanya Poria, and Luo Si. 2022. Relationprompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 45–57. Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the wnut2017 shared task on novel and emerging entity recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140–147. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021. Few-nerd: A few-shot named entity recognition dataset. In *Proceedings of the 59th Annual Meeting of the* Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3198–3213. George R Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ace) program– tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04). Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. Advances in Neural Information Processing Systems, 32. Jenny Rose Finkel and Christopher D Manning. 2009. 
Nested named entity recognition. In *Proceedings of the 2009* conference on empirical methods in natural language processing, pages 141–150. Ralph Grishman. 2019. Twenty-five years of information extraction. *Natural Language Engineering*, 25(6):677– 692. Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In *Proceedings of COLING* 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537–2547. Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. *Journal of Biomedical Informatics*, 45(5):885–892. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A largescale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803–4809. Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, and Ting Liu. 2020. Few-shot slot tagging with collapsed dependency transfer and label-enhanced task-adaptive projection network. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 1381–1393. Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question answering, text classification, and regression via span extraction. arXiv preprint arXiv:1904.09286. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In *ICLR (Poster)*. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In *Proceedings* of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In *International Conference on Learning* Representations. Jochen L Leidner and Vassilis Plachouras. 2017. Ethical by design: Ethics best practices for natural language processing. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 30–40. Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified mrc framework for named entity recognition. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5849–5859. Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 7999– 8009. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In *Proceedings of the 57th Annual* Meeting of the Association for Computational Linguistics, pages 4487–4496. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint* arXiv:1907.11692. 
Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5755–5772. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 3219–3232. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036–3046. Ying Luo, Fengshun Xiao, and Hai Zhao. 2020. Hierarchical contextualized representation for named entity recognition. In *AAAI*, pages 8441–8448. AAAI Press. Jie Ma, Miguel Ballesteros, Srikanth Doss, Rishita Anubhai, Sunil Mallya, Yaser Al-Onaizan, and Dan Roth. 2022a. Label semantics for few shot named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1956–1971. Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022b. Decomposed meta-learning for few-shot named entity recognition. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1584–1596. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. *arXiv preprint* arXiv:1806.08730. Aaron Mueller, Jason Krone, Salvatore Romeo, Saab Mansour, Elman Mansimov, Yi Zhang, and Dan Roth. 2022. Label semantic aware pre-training for few-shot text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8318–8334. Ion Muslea et al. 1999. Extraction patterns for information extraction tasks: A survey. In *The AAAI-99 workshop on* machine learning for information extraction, volume 2. Orlando Florida. Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The genia corpus: an annotated research abstract corpus in molecular biology domain. In Proceedings of the second international conference on Human Language Technology Research, pages 82–86. Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, RISHITA ANUBHAI, Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2020. Structured prediction as translation between augmented natural languages. In *International Conference on* Learning Representations. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In *Proceedings of the Seventeenth Conference on Computational Natural Language* Learning, pages 143–152. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020a. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020b. 
Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1–67. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In *Proceedings of the 2010 European conference on Machine learning and knowledge discovery in databases: Part* III, pages 148–163. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, pages 1–8. Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In *Proceedings of the Seventh* Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2022. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations. Taneeya Satyapanich, Francis Ferraro, and Tim Finin. 2020. CASIE: extracting cybersecurity event information from text. In *AAAI*, pages 8749–8757. AAAI Press. Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A Rothkopf, and Kristian Kersting. 2022. Large pre-trained language models contain human-like biases of what is right and wrong to do. *Nature Machine Intelligence*, 4(3):258–268. Patrick Schramowski, Cigdem Turan, Sophie Jentzsch, Constantin Rothkopf, and Kristian Kersting. 2019. Bert has a moral compass: Improvements of ethical and moral values of machines. *arXiv preprint arXiv:1912.05238*. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. *Linguistic Data Consortium, Philadelphia*, 57:45. Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with table-sequence encoders. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1706–1721. Peiyi Wang, Runxin Xu, Tianyu Liu, Qingyu Zhou, Yunbo Cao, Baobao Chang, and Zhifang Sui. 2022. An enhanced span-based decomposition method for few-shot sequence labeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5012–5024. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. *arXiv preprint arXiv:2109.01652*. Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, and Yi Chang. 2020. A novel cascade binary tagging framework for relational triple extraction. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 1476–1488. Lu Xu, Hao Li, Wei Lu, and Lidong Bing. 2020. Positionaware tagging for aspect sentiment triplet extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2339– 2349. Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416–2429. 
Ping Yang, Junjie Wang, Ruyi Gan, Xinyu Zhu, Lin Zhang, Ziwei Wu, Xinyu Gao, Jiaxing Zhang, and Tetsuya Sakai. 2022. Zero-shot learners for natural language understanding via a unified multiple choice perspective. *arXiv* preprint arXiv:2210.08590. Deming Ye, Yankai Lin, Peng Li, and Maosong Sun. 2022. Packed levitated marker for entity and relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4904–4917. Bowen Yu, Zhenyu Zhang, Xiaobo Shu, Tingwen Liu, Yubin Wang, Bin Wang, and Sujian Li. 2020a. Joint extraction of entities and relations based on a novel decomposition strategy. In *ECAI*. Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020b. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470–6476. Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang. 2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3174–3186. Amir Zeldes. 2017. The gum corpus: creating multilayer resources in the classroom. *Language Resources and Evaluation*, 51(3):581–612. ## A Experiment Details This section describes the details of experiments, including the dataset descriptions, unified EX input formats, metrics and training implementation. ## A.1 Details Of Downstream Tasks A.1.1 Supervised Setting For the supervised setting, We conduct downstream tasks on 4 IE tasks, 14 datasets, and the detailed statistic of each dataset is shown in Table 7. Entity We conduct entity extraction experiments on four datasets, including the flat entity dataset extraction dataset CONLL03 (Sang and De Meulder, 2003), and nested entity extractions datasets ACE04 (Doddington et al., 2004), ACE05- Ent (Walker et al., 2006) and GENIA (Ohta et al., 2002). For the CONLL03, ACE04 and ACE05- Ent, We use the same processing and splits as (Li et al., 2020). For the GENIA, we follow the preprocessing steps and data split as (Finkel and Manning, 2009). Relation We conduct relation extraction experiments on five joint entity-relation extraction datasets across several languages and domains, including CONLL04 (Roth and Yih, 2004), ACE05- Rel (Walker et al., 2006), NYT (Riedel et al., 2010), SciERC (Luan et al., 2018) and ADE (Gurulingappa et al., 2012). We follow the pre-processing versions and data split of previous works (Gupta et al., 2016; Yu et al., 2020a; Luan et al., 2019). Event For ACE05-Evt, we follow the same types, data splits, and pre-processing steps as (Lin et al., 2020). For CASIE (Satyapanich et al., 2020), we remove three incomplete annotated documents, then split the remaining documents into three sets as (Lu et al., 2022). Sentiment We conduct sentiment extraction experiments on the sentiment triplet extraction (Xu et al., 2020) of SemEval 14/15/16 aspect sentiment analysis datasets. We employ the pre-processing datasets of the previous work (Yan et al., 2021). ## A.1.2 Few-Shot Setting For the few-shot setting, we conduct downstream tasks on 2 few-shot named entity recognition datasets: Few-NERD (Ding et al., 2021). 
It is annotated with a hierarchy of 8 coarse-grained and |Ent| |Rel| |Evt| #Train #Val #Test | ACE04 | 7 | - | - | 6,202 | 745 | 812 | |-----------|-----|-----|-----|---------|-------|-------| | ACE05-Ent | 7 | - | - | 7,299 | 971 | 1,060 | | CoNLL03 | 4 | - | - | 14,041 | 3,250 | 3,453 | | GENIA | 5 | - | - | 14,824 | 1,855 | 1,854 | | ACE05-Rel | 7 | 6 | - | 10,051 | 2,420 | 2,050 | | CoNLL04 | 4 | 5 | - | 922 | 231 | 288 | | NYT | 3 | 24 | - | 56,196 | 5,000 | 5,000 | | SciERC | 6 | 7 | - | 1,861 | 275 | 551 | | ADE | 2 | 1 | - | 3,417 | 427 | 428 | | ACE05-Evt | - | - | 33 | 19,216 | 901 | 676 | | CASIE | 21 | - | 5 | 11,189 | 1,778 | 3,208 | | 14res | 2 | 3 | - | 1,266 | 310 | 492 | | 14lap | 2 | 3 | - | 906 | 219 | 328 | | 15res | 2 | 3 | - | 605 | 148 | 322 | | 16res | 2 | 3 | - | 857 | 210 | 326 | 66 finegrained entity types. Two tasks are considered on this dataset: i) Intra, where all entities in train/dev/test splits belong to different coarsegrained types. ii) Inter, where train/dev/test splits may share coarse-grained types while keeping the fine-grained entity types mutually disjoint. Cross-Dataset (Hou et al., 2020). Four datasets focusing on four domains are used here: CoNLL2003 (Sang and De Meulder, 2003) (news), GUM (Zeldes, 2017) (Wiki) , WNUT-2017 (Derczynski et al., 2017) (social), and Ontonotes (Pradhan et al., 2013) (mixed). Among them, we take two domains for training, one for validation, and the remaining for test. ## A.1.3 Zero-Shot Setting For the zero-shot setting, we conduct downstream tasks on 2 zero-shot named entity recognition datasets: FewRel (Han et al., 2018) is hand-annotated for few-shot relation extraction, we further made it suitable for the zero-shot setting after data splitting into disjoint relation label sets for training, validation and testing as (Chia et al., 2022). Wiki-ZSL (Chen and Li, 2021) is constructed through distant supervision over Wikipedia articles and the Wikidata knowledge base. To partition the data into seen and unseen label sets, we follow the same process as (Chia et al., 2022) to be consistent. For each dataset, a fixed | Backbone | RoBERTa-large/RoBERTa-base | BERT-base | | | | | | | |---------------|------------------------------|-------------|------------|------------|---------------|------------|-------------|-------------| | Task | Entity | Relation | Event | Sentiment | Cross Dataset | Wiki-ZSL | FewRel | | | Phase | finetuning | finetuning | finetuning | finetuning | pretraining | finetuning | pretraining | pretraining | | Learning Rate | 2E-5 | 2E-5 | 2E-5 | 2E-5 | 2E-5 | 2E-5 | 2E-05 | 2E-5 | | Batch Size | 32 | 32 | 32 | 32 | 32 | 2 | 32 | 32 | | Schedule | linear | linear | linear | linear | linear | linear | linear | linear | | Warmup Rate | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | | Epoch | 200 | 400 | 200 | 200 | 100 | 100 | 4 | 4 | Table 8: Hyper-parameters for UniEX-base and UniEX-large on different tasks and datasets. 
| Hyper-parameter | UniEX-base | UniEX-large | Task | Dataset | UIE | UniEX | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|---------------|--------|-----------|-------|---------| | 770M | 372M | | | | | | | Backbone | Roberta-large | Roberta-base | | | | | | Layers of Encoder | 12 | 24 | | | | | | Hidden Dimension | 768 | 1,024 | | | | | | FF hidden size | 3072 | 4096 | | | | | | Layer Normalize ϵ | 1e-5 | 1e-5 | | | | | | Attention head | 12 | 16 | ACE04 | 1.23 | 18.29 | | | Entity | ACE05-Ent | 1.62 | 18.16 | | | | | Extraction | CoNLL03 | 2.06 | 16.45 | | | | | ACE05-Rel | 1.64 | 18.69 | | | | | | Relation | CoNLL04 | 1.00 | 16.60 | | | | | Extraction | SciERC | 1.02 | 17.09 | | | | | Event | ACE05-Evt | 1.55 | 12.93 | | | | | Extraction | CASIE | 1.55 | 12.93 | | | | | Table 9: Model architectures. | | | | | | | | number of labels are randomly selected as unseen labels while the remaining labels are treated as seen labels during training. The unseen label set size is set to m=5 in our experiments. In order to reduce the effect of experimental noise, the label selection | 14-res | 1.45 | 18.60 | | | | | 14-lap | 1.49 | 19.78 | | | | | | 15-res | 1.41 | 18.37 | | | | | | 16-res | 1.38 | 19.71 | | | | | | Sentiment Extraction | | | | | | | number of labels are randomly selected as unseen labels while the remaining labels are treated as seen labels during training. The unseen label set size is set to m=5 in our experiments. In order to reduce the effect of experimental noise, the label selection process is repeated for five different random seeds to produce different data folds. For each data fold, the test set consists of the sentences containing unseen labels. Five validation labels from the seen labels are used to select sentences for early stopping and hyperparameter tuning. The remaining sentences are treated as the train set. Hence, the zero-shot setting ensures that train, validation and test sentences belong to disjoint label sets. ## A.2 Evaluation Metric We use span-based offset Micro-F1 as the primary metric to evaluate the model as (Lu et al., 2022) - **Entity**: an entity mention is correct if its offsets and type match a reference entity. - **Relation Strict**: relation with strict match, a relation is correct if its relation type is correct and the offsets and entity types of the related entity mentions are correct. - **Relation Triplet**: relation with boundary match, a relation is correct if its relation type is correct and the string of the subject/object are correct. - **Event Trigger**: an event trigger is correct if its offsets and event type matches a reference trigger. - **Event Argument**: an event argument is correct if its offsets, role type, and event type match a reference argument mention. - **Sentiment Triplet**: a correct triplet requires the offsets boundary of the target, the offsets boundary of the opinion span, and the target sentiment polarity to be all correct at the same time. ## A.3 Training Implementation To make a fair comparison, we first initialize UniEX-base and UniEX-large with RoBERTa-base and RoBERTa-large checkpoints (Liu et al., 2019b) for the supervised setting, and use the BERT-base checkpoint (Devlin et al., 2019) as the backbone for the few-shot and zero-shot settings. 
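For reference, the strict-match Micro-F1 used throughout A.2 can be sketched as follows; this is our own illustration, and the official evaluation scripts may differ in normalization and tie-breaking.

```python
def strict_micro_f1(pred_sets, gold_sets):
    """Corpus-level Micro-F1 over strict-match predictions.

    Each element of pred_sets / gold_sets is the set of structured tuples for one sentence,
    e.g. (start, end, entity_type) for entities, or the full (head span, head type,
    relation type, tail span, tail type) tuple for Relation Strict; a prediction only
    counts as correct if the whole tuple matches a gold tuple.
    """
    tp = sum(len(p & g) for p, g in zip(pred_sets, gold_sets))
    n_pred = sum(len(p) for p in pred_sets)
    n_gold = sum(len(g) for g in gold_sets)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# e.g. gold {(1, 1, "Person"), (4, 4, "Location")} vs. prediction {(1, 1, "Person")}
# gives precision 1.0, recall 0.5, Micro-F1 ~= 0.67.
```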
The model architectures are shown in Table 9. We employ Adam optimizer (Kingma and Ba, 2015) as the optimizer with 1e-8 weight decay. Table 8 shows the detailed hyper-parameters for downstream tasks. We truncate the concatenated overall length of schemabased prompt s and raw text x to 512 during training. ## A.4 Unified Input Inspired by template examples in UIE (Lu et al., 2022), we design a simple rule to transform the original text to a unified EX format. In addition, we present four examples for different tasks: An example of CONLL03 (Entity Extraction): Raw text: {x: "Arafat goes to Nablus ahead of cabinet meeting .", *entity type*: [Location, Organization, Person, Miscellaneous], *extraction target*: [(Arafat, 1, 1, Person), (Nablus, 4, 4, Location)]} Transformed Input: [CLS] Entity Extraction [R-LEP]1 Location [R-LEP]2 Organization [R-LEP]3 Person [R-LEP]4 Miscellaneous [SEP] Arafat goes to Nablus ahead of cabinet meeting . [SEP] An example of CONLL04 (Relation Extraction): Raw text: {x: "In 1752 , flagmaker Betsy Ross was born in Philadelphia .", *entity-relation type*: [(Organization, organization based in, Location), (Location, location in, Location), (Person, live in, Location), (Person, work for, Organization), (Person, kill, Person)], *extraction target*: [(Betsy Ross, 5, 6, Person), (Philadelphia, 10, 10, Location), (Betsy Ross, live in, Philadelphia)]} Transformed Input: [CLS] Relation Extraction [R-LEP]1 Location [R-LEP]2 Organization [R-LEP]3 Person [R-LEP]4 Miscellaneous [R-LEP]5 work for [R-LEP]6 organization based in [R-LEP]7location in [R-LEP]8live in [R-LEP]9 kill [SEP] In 1752 , flagmaker Betsy Ross was born in Philadelphia . [SEP] An example of ACE05-Evt (Event Extraction): Raw text: {x: "Sergeant Chuck Hagel was seriously wounded twice in Vietnam .", *event-triggerargument type*: [(Born, Trigger, Person, Place), (Injure, Trigger, Victim, Agent, Place, Instrument), (Convict, Trigger, Defendant, Adjudicator, Place), ...], *extraction target*: [(Chuck Hagel, 2, 3, Victim), (wounded, 6, 6, Trigger), (Vietnam, 9, 9, Place), (Injure, wounded, Chuck Hagel, Vietnam)]} Transformed Input: [CLS] Event Extraction [R-LEP]1 Trigger [R-LEP]2 Person [R-LEP]3 Place ... [R-LEP]i Born [R-LEP]i+1 Injure ... [R-LEP]n TriggerArgument [SEP] Sergeant Chuck Hagel was seriously wounded twice in Vietnam . [SEP] An example of 16-res (Sentiment Extraction): Raw text: {x: "I had the duck breast special on my last visit and it was incredible .", *entity-relationentity type*: [(Aspect, Positive, Opinion), (Aspect, Negative, Opinion), (Aspect, Neutral, Opinion)], extraction target: [(duck breast special, 4, 6, Aspect), (incredible, 14, 14, Opinion), (Positive, duck breast special, incredible)]} Transformed Input: [CLS] Sentiment Extraction [R-LEP]1 Aspect [R-LEP]2 Opinion [R-LEP]3 Positive [R-LEP]4 Negative [R-LEP]5 Neutral [SEP] I had the duck breast special on my last visit and it was incredible . [SEP] ## A.5 Unified Decoding As shown in Figure 5, in order to depict the training and inference processes in more detail, we show the structural tables and spotting designators of the examples in figure 4 in the entity/relation/event extraction tasks. ## A.6 Decoding Efficiency As shown in Figure 10, to explicitly compare the structural decoding efficiency of different universal IE models, we illustrate the average number of sentences generated per second by UIE and UniEX during the decoding phase. ![14_image_0.png](14_image_0.png) Entity Extraction: Arafat goes to Nablus ahead of cabinet meeting . 
![14_image_1.png](14_image_1.png) ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? "Limitations" section ✓ A2. Did you discuss any potential risks of your work? "Ethical Considerations" section ✓ A3. Do the abstract and introduction summarize the paper's main claims? "Abstract" and "Introduction" sections ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Sections 4.1, 4.2, 4.3, 4.4 and 4.5 Appendix A.1, A.2, A.3 ✓ B1. Did you cite the creators of artifacts you used? Sections 4.1, 4.2, 4.3 Appendix A.1, A.2, A.3 ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Sections 4.1, 4.2, 4.3 Appendix A.1, A.2, A.3 ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Sections 4.1, 4.2, 4.3 Appendix A.1, A.2, A.3 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Sections 4.1, 4.2, 4.3 Appendix A.1, A.2, A.3 ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Sections 4.1, 4.2, 4.3 Appendix A.1, A.2, A.3 ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Table 7, 8, 9 ## C ✓ **Did You Run Computational Experiments?** Sections 4.2, 4.3, 4.4 And 4.5 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Appendix A.1, A.2, A.3 The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Appendix A.1, A.2, A.3 ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Sections 4.1, 4.2, 4.3, 4.4 and 4.5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Sections 4.1, 4.2, 4.3 D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** Left blank. D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. D2. 
Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank. D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank. D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
stodden-etal-2023-deplain
{DE}plain: A {G}erman Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification
https://aclanthology.org/2023.acl-long.908
Text simplification is an intralingual translation task in which documents, or sentences of a complex source text are simplified for a target audience. The success of automatic text simplification systems is highly dependent on the quality of parallel data used for training and evaluation. To advance sentence simplification and document simplification in German, this paper presents DEplain, a new dataset of parallel, professionally written and manually aligned simplifications in plain German {``}plain DE{''} or in German: {``}Einfache Sprache{''}. DEplain consists of a news-domain (approx. 500 document pairs, approx. 13k sentence pairs) and a web-domain corpus (approx. 150 aligned documents, approx. 2k aligned sentence pairs). In addition, we are building a web harvester and experimenting with automatic alignment methods to facilitate the integration of non-aligned and to be-published parallel documents. Using this approach, we are dynamically increasing the web-domain corpus, so it is currently extended to approx. 750 document pairs and approx. 3.5k aligned sentence pairs. We show that using DEplain to train a transformer-based seq2seq text simplification model can achieve promising results. We make available the corpus, the adapted alignment methods for German, the web harvester and the trained models here: \url{https://github.com/rstodden/DEPlain}.
# Deplain**: A German Parallel Corpus With Intralingual Translations Into** Plain Language For Sentence And Document Simplification Regina Stodden, **Omar Momen** and **Laura Kallmeyer** Heinrich Heine University Düsseldorf, Germany {firstname.secondname}@hhu.de ## Abstract Text simplification is an intralingual translation task in which documents, or sentences of a complex source text are simplified for a target audience. The success of automatic text simplification systems is highly dependent on the quality of parallel data used for training and evaluation. To advance sentence simplification and document simplification in German, this paper presents DEPLAIN, a new dataset of parallel, professionally written and manually aligned simplifications in plain German (*"plain DE" or in German: 'Einfache* Sprache"). DEPLAIN consists of a newsdomain (approx. 500 document pairs, approx. 13k sentence pairs) and a web-domain corpus (approx. 150 aligned documents, approx. 2k aligned sentence pairs). In addition, we are building a web harvester and experimenting with automatic alignment methods to facilitate the integration of non-aligned and to bepublished parallel documents. Using this approach, we are dynamically increasing the webdomain corpus, so it is currently extended to approx. 750 document pairs and approx. 3.5k aligned sentence pairs. We show that using DEPLAIN to train a transformer-based seq2seq text simplification model can achieve promising results. We make available the corpus, the adapted alignment methods for German, the web harvester and the trained models here: https://github.com/rstodden/DEPlain. ## 1 Introduction Automatic text simplification (TS) is the process of automatically generating a simpler version of complex texts while preserving the main information (Alva-Manchego et al., 2020b). Current TS research mostly focuses on English and on sentencelevel simplification. This paper contributes to TS research on German. Compared to other European languages, German is more difficult to read due to complex sentence structures and many compound words (Marzari, 2010). According to Buddeberg and Grotlüschen (2020), roughly 6.2 mio. adults in Germany (approx. 12.1%) have reading and writing problems on the character-level (0.6%), word-level (approx. 3.4%) or sentence-level (approx. 8.1%). To counteract and make texts accessible to more people, currently two dominant German variants for simplified language exist (Maaß, 2020): 1. *easy-to-read German (de: "Leichte* Sprache"): following strict rules the complexity of the language is maximally reduced (almost corresponds to CEFR level A1). The main target group is people with cognitive or learning disabilities or communication impairments. 2. *plain German (de: "Einfache Sprache")*: reduced complexity with a mild to a strong extent (almost corresponds to CEFR levels A2 and B1), which can be compared to texts for non-experts. The main target group is people with reading problems and non-native German speakers. This is also reflected in a rise in research and application of manual and automatic German text simplification: i) Many German web pages are provided in standard German as well as in plain or easy-to-read German, e.g., Apotheken Umschau1 or the German Federal Agency for Food2, ii) News agencies are publishing their news in plain or easy– to-read German, e.g., Austrian Press Agency3 or Deutschlandfunk4. Klaper et al. (2013) were the first who made use of these resources for supervised, automatic German TS. 
They created a small parallel corpus of approx. 250 web pages with intralingual translations from standard German to easy-to-read German. However, due to copyright issues, they (and also its extension by Battisti et al. (2020)) could not make their corpus publicly available. To avoid such problems, Hewett and Stede (2021); Aumiller and Gertz (2022) built TS corpora based on open accessible Wikipedia texts simplified for children. Due to the high cost of manual sentence-wise alignment or not applicable automatic alignment methods (Aumiller and Gertz, 2022), these corpora are only aligned on the document level. Furthermore, Spring et al. (2022) report results on experiments with some existing automatic alignment methods and show non-satisfying, error-prone results. In this work, we tackle some of the named problems by proposing, DEPLAIN, a new parallel German corpus for text simplification with manual and automatic alignments on the document and sentence level. DEPLAIN contains intralingual translations mostly into plain German and includes "strong" as well as "mild" simplifications. Overall, we propose 4 subcorpora with in total 1,239 document pairs, 14,968 manual sentencewise alignments and 1,594 automatic sentence-wise alignments. One subcorpus is built from professionally simplified news articles in plain language of the Austrian Press Agency5. The resources of the other 3 subcorpora are compiled by a new web harvester, making use of publicly available parallel documents. We analyze these subcorpora based on human ratings and annotations to get more insights into the quality and the simplification processes within the data. We further show two use cases of our new TS corpus: i) evaluating automatic alignment methods, and ii) exemplifying TS training and evaluation with DEPLAIN. Our data, web harvester, code for alignment methods and models are publicly available (with some restrictions). ## 2 Related Works Text Simplification (TS) is an NLG task in which mostly machine learning models learn from complex-simple pairs how to simplify texts for a specific target group. For a lot of languages, parallel TS corpora exist either on sentence-level or document-level.6 Only a few corpora contain data on both levels, e.g., EW-SEW v2.0 (Kauchak, 2013), Newsela 2015 (Xu et al., 2015), or WikiAuto (Jiang et al., 2020). Newsela (Xu et al., 2015), furthermore, includes for each source text several simplified versions targeted to different audiences. Therefore, it contains "strong" simplifications (highest to lowest complexity level) and also "mild" simplifications (intermediate complexity levels) (Štajner et al., 2017). In this paper, we also introduce one corpus with rather "mild" simplifications (DEPLAIN-APA) and one with rather "strong" simplifications (DEPLAIN-WEB) which allows more analysis of the capabilities of TS models. Like most other existing TS corpora (see Trienes and Vásquez-Rodríguez 2023), our corpus contains only one golden simplification (reference) per simplification pair. German corpora were also proposed in recent years, both on either document level (e.g., Lexikacorpus Hewett and Stede 2021, or 20Minuten Rios et al. 2021) or sentence-level (e.g., web-corpus, APA-LHA, capito-Corpus Ebling et al. 2022, or Simple-German-Corpus Toborek et al. 2022), but not focussing on both levels7. Unfortunately, many of the datasets cannot be used for training TS models or only with caution because, for example, i) they are too small for training (e.g., Klaper et al. 
(2013)), ii) they are automatically aligned with questionable quality (e.g., Spring et al. 2021), iii) they are only available for evaluation (e.g., Mallinson et al. 2020; Naderi et al. 2019), iv) they are not truly parallel, as the complex and simple versions are written independently (e.g., Aumiller and Gertz 2022), or v) they are not available (e.g., Ebling et al. 2022), sometimes due to copyright issues (e.g., Battisti et al. 2020). DEPLAIN tackles all of the mentioned problems, i.e., size, alignment quality, simplification quality, and availability.

Alignment Methods and web scraping are already used to overcome some of these issues. For example, similar to our work, Toborek et al. (2022) present a web scraper to scrape parallel documents from the web and automatically align them. However, Spring et al. (2022) have shown that automatic sentence alignment is still an open challenge for German by comparing some existing alignment methods. In this work, we will evaluate the alignment methods on our manually aligned data and will adapt them for our purpose, e.g., use German resources and incorporate n:m alignments.

| Name | License | # Doc. Pairs | # Original Sents | # Simple Sents. | Alignment | # Sent. Pairs |
|-------------|--------------|----------------|--------------------|-------------------|-------------|-----------------|
| DEplain-APA | upon request | 483 | 25,607 | 26,471 | manual | 13,122 |
| DEplain-web | open | 147 | 6,138 | 6,402 | manual | 1,846 |
| DEplain-web | open | 249 | 7,087 | 7,760 | auto | 652 |
| DEplain-web | closed | 360 | 12,847 | 18,068 | auto | 942 |
| In total | mixed | 1,239 | 51,681 | 58,701 | mixed | 16,562 |

Table 1: Overview of the DEplain subcorpora.

## 3 Document-Level TS Corpora

We present two new TS corpora on the document level, DEPLAIN-APA and DEPLAIN-WEB, containing parallel documents in standard German and plain German. Table 1 provides statistics of both corpora.

## 3.1 DEplain-APA

The Austrian Press Agency (APA) publishes everyday news in standard German and parallel, professionally simplified versions for German language learners of CEFR levels B1 and A2 (both equivalent to plain language). DEPLAIN-APA contains news texts of APA of CEFR levels A2 and B1 which were published between May 2019 and April 2021. Data from the same source was already used for experiments with TS (see Ebling et al. (2022) for an overview) and made available as APA-LHA (Spring et al., 2021). However, the APA-LHA alignments have some issues that are problematic for training a TS system: the alignment format is unclear in the sense of not distinguishing between 1:1, 1:m, and n:1 sentence alignments. Furthermore, the documents were aligned automatically, which results in many misaligned sentence-level alignments. Some examples of these problems are presented in Appendix B. We tackle these problems by making use of the provided manual document alignments of APA from Common European Framework of Reference for Languages (CEFR) level B1 to A2. As the document alignments are not available for all APA documents, our corpus is reduced to 483 parallel documents. In a further comparison, DEPLAIN-APA focuses more on mild simplifications, which might be easier to learn for a document TS system than strong simplifications as in APA-LHA (C2 to B1 and C2 to A2); examples for strong and mild simplifications of DEPLAIN can be found in Appendix C. Overall, DEPLAIN-APA contains 483 document pairs (see Appendix A and Table 6c). On average,
the complex documents (CEFR level B1) have a German Flesch-Reading-Ease score (FRE) (Flesch, 1948; Amstad, 1978) of 61.05 ± 4.67 and the simple documents of 66.48 ± 4.56, which can both be interpreted as *simple* (following Amstad (1978, p. 117)). (We calculated the German variant of FRE with the Python package textstat; for criticism of traditional readability scores for TS see, e.g., Tanprasert and Kauchak (2021).)

## 3.2 DEplain-web

The second document-level corpus of DEplain, i.e., DEPLAIN-WEB, is a dynamic corpus with parallel documents in standard German and plain German from the web. Similar to Battisti et al. (2020) and Toborek et al. (2022), we have built an open-source web harvester in Python to download, align and extract the text of parallel documents of given web pages (including paragraphs). For reproducibility, we made the code and the list of web pages available. However, the automatic extraction of the web data is not perfect, as some recent changes in the HTML structure are not recognized by the crawler, and some layouts such as tables or lists might not be extracted correctly. As a result, the corpus might include some low-quality data.

DEPLAIN-WEB currently contains 756 parallel documents crawled from 11 web pages and covering 6 different domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public authority texts. The first three domains are not included in any other German TS corpus. All simplified documents are professionally simplified by trained translators and often reviewed by the target group. The simplified documents of 5 of the 11 web pages are written in plain German, 6 in easy-to-read German. All complex documents are in standard German, except *Alumniportal Deutschland*, which contains data on CEFR level B2. Some of the fictional complex documents are only available in their original language from the 19th century and are therefore more difficult to read. More details on the scraped web pages are given in Appendix E.

We plan further extensions of DEPLAIN-WEB, e.g., by a political lexicon in plain German. The corpus is dynamic for three reasons: i) it can be extended with new web pages, ii) the number of parallel documents of a web page can change, and iii) the content of the considered web pages can change over time. More details on the web crawler, the reasoning for choosing the current web pages, and the document alignment process can be found in Appendix E.

On the one hand, some of these web documents are openly licensed, and some data providers allowed us to use and share the data for academic purposes. Therefore, we can publicly share this data; this corpus contains 396 document pairs, which are represented in the second and third rows in Table 1. On the other hand, we additionally provide the web crawler to download and use the parallel documents with restricted licenses (360 documents), which are represented in the last row in Table 1.
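To illustrate the kind of harvesting step described above, the following is a minimal, self-contained sketch of how a parallel (standard, plain German) document pair could be collected from a web page that links its simplified version. It is not the DEplain harvester itself: the start URL, the link pattern ("einfache-sprache"/"leichte-sprache"), and the paragraph extraction heuristic are hypothetical placeholders, and a real harvester needs per-site rules (cf. Appendix E).

```python
# Minimal sketch of harvesting a (complex, simplified) page pair.
# NOTE: URL, link pattern, and selectors are hypothetical placeholders.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

def extract_paragraphs(url: str) -> list[str]:
    """Download a page and return its visible paragraph texts."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return [p.get_text(" ", strip=True) for p in soup.find_all("p") if p.get_text(strip=True)]

def harvest_pair(complex_url: str):
    """Return (complex_paragraphs, simple_paragraphs) if a plain-language link is found."""
    html = requests.get(complex_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Heuristic: many German sites link their simplified version via an anchor whose
    # href contains "einfache-sprache" or "leichte-sprache" (hypothetical pattern).
    link = soup.find("a", href=lambda h: h and ("einfache-sprache" in h or "leichte-sprache" in h))
    if link is None:
        return None
    simple_url = urljoin(complex_url, link["href"])
    return extract_paragraphs(complex_url), extract_paragraphs(simple_url)

if __name__ == "__main__":
    pair = harvest_pair("https://example.org/artikel/irgendein-thema")  # placeholder URL
    if pair:
        complex_pars, simple_pars = pair
        print(len(complex_pars), "complex vs.", len(simple_pars), "simplified paragraphs")
```

Document pairs collected this way still need sentence splitting and, as described next, manual or automatic sentence-wise alignment.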
## 4 Sentence-Level TS Corpora

We aligned both corpora, DEPLAIN-APA and DEPLAIN-WEB, also on the sentence level. All 483 available parallel documents of DEPLAIN-APA and 147 documents of DEPLAIN-WEB are manually aligned on the sentence level with the assistance of a TS annotation tool. Overall, 14,968 sentence pairs of 630 document pairs are manually aligned. We first describe the annotation procedure (see subsection 4.1) and then the resulting statistics per subcorpus (subsection 4.2 and subsection 4.3).

## 4.1 Annotation Procedure

DEPLAIN-APA and DEPLAIN-WEB are both annotated following the same procedure. The sentence pairs are manually aligned by two German native speakers using the TS annotation tool TSanno (Stodden and Kallmeyer, 2022), which assists, for example, in splitting the documents into sentences, the alignment of n:m sentence pairs, the automatic alignment of identical sentence pairs, and the annotation of simplification operations and manual evaluation. The annotators were instructed by the principal investigator and were also provided with instructions on how to use the annotation tool and with an annotation guideline (we adapted the annotation schema of Stodden and Kallmeyer (2022) to our needs).

Sentence-wise Alignment The manual sentence-wise alignments reflect all possible alignment types: i) 1:1 (rephrase and copy), ii) 1:m (split of a complex sentence), iii) n:1 (merge of complex sentences), iv) n:m (where n and m > 1, fusion of complex and simple sentences). Furthermore, all non-annotated sentences of an annotated document are treated as either v) 1:0 (deletion of a complex sentence) or vi) 0:1 (addition of a simplified sentence). In the alignments of DEPLAIN-APA and DEPLAIN-WEB, the complex documents are fully aligned with the simplified documents. This means the alignments also reflect deletions and additions. Publishing the full document alignments also enables, for example, i) building a simplification plan for document-level simplification using sequence labeling (see Cripwell et al. 2023), ii) including preceding and following sentences for context-aware sentence simplification (see Sun et al. 2020), or iii) using identical pairs and additions as augmented data during training (see Palmero Aprosio et al. 2019).

Agreement of Alignment and Data Cleaning To compare the agreement of both annotators, we randomly sampled 99 documents over all domains which were annotated by both annotators. For calculating the inter-annotator agreement, we framed the alignment as a classification task in which a label (not aligned, partially aligned, or aligned) is assigned to each combination of a complex sentence and all simple sentences per document. This format was proposed by Jiang et al. (2020) for training and evaluating a sentence-wise alignment algorithm. The inter-annotator agreement (measured with Cohen's κ) for these documents is equal to approx. 0.85 (n=87,645 sentence combinations), which corresponds to a strong level of agreement (following McHugh (2012, p. 279)). The lowest agreement is shown for the domain of health data (κ=0.52, n=13,736, interpretation: weak), whereas the highest agreement is shown for the language learner data (κ=0.91, n=18,493, interpretation: almost perfect); all inter-annotator agreements per domain are shown in Appendix D. The health data was strongly and independently written in plain language (not sentence by sentence), including moving sentences from the document beginning to the end or sentence fusion. Therefore, the manual alignment of strong simplifications seems to be less congruent than for the very mild simplifications of the language learner data, which have a low edit distance.

As the texts of DEPLAIN-WEB are automatically extracted from the websites, and the documents of both DEPLAIN-APA and DEPLAIN-WEB were automatically split into sentences, the sentence pairs can contain some sentences that are wrongly split.
Therefore, we cleaned the dataset and removed too short sentences (e.g., "Anti-Semitismus.", engl.: "Antisemitism.") and too similar sentences with only one character changed (e.g., complex: "Das ist schön!", simple: "Das ist schön.", engl.: "That is nice."). Furthermore, some sentence pairs (especially term explanations in the news dataset, n=1398) are repeated several times in different documents, we decide to remove all duplicates to make sure that only unseen sentence pairs occur in the test data set. Linguistic Annotation After cleaning the data, similar to (Cardon et al., 2022), some randomly selected sentence pairs are annotated with additional linguistic annotations to get more insights into the simplification process of the aligned sentence pairs. We follow the annotation guideline provided in Stodden and Kallmeyer (2022) 14. We built a typology on linguistic-based operations, which are performed during the simplification process, following a literature review of existing typologies Bott and Saggion (2014); Brunato et al. (2015); Gonzalez-Dios et al. (2018); Koptient et al. (2019). Our typology includes 8 operations, i.e., i) delete, ii) insert, iii) merge, iv) reorder, v) split, vi) lexical substitution, vii) verbal changes, and viii) no changes of which each can be annotated on the paragraph-level, sentence-level, clause-level, or word-level. Furthermore, we also manually evaluated the sentence-wise pairs on a few aspects. As no standards for manual evaluation exist (AlvaManchego et al., 2020b), we decided to evaluate on the following three most often used criteria, i) grammaticality†, ii) meaning preservation‡, and iii) overall simplicity‡, and the following additional aspects: iv) coherence†, v) lexical simplicity‡, vi) structural simplicity‡(similar to Sulem et al. (2018b)), and vii) readability (or simplicity)†(similar to Brunato et al. (2018)). All aspects marked with ‡are rated on the sentence pair whereas all aspects with †are rated on the complex as well as the simplified part of the sentence pair. These aspects are rated on a 5-point Likert-scale, to be more clear in the meaning of the scale, the scale either range from -2 to +2 or 1 to 5 following Stodden (2021). Following Alva-Manchego et al. (2020a); Maddela et al. (2021), we provide a statement per aspect on which the annotators are asked to agree or disagree on. An overview of the statement per aspect is added to Appendix G. ## 4.2 Deplain-Apa Alignment Statistics For the sentence-level part of DEPLAIN-APA, all 483 parallel documents are manually aligned following the annotation procedure described above. Overall the subcorpus contains 13,122 manually aligned sentence pairs with 14,071 complex aligned sentences (55.82% of all complex sentences), and 16,505 simple aligned sentences (63.38% of all simple sentences) (see Table 1, and Table 6c). The largest part of the aligned sentence pairs are rephrasings, 75.54% of the pairs are 1:1 aligned (excluding identical pairs). 17.99% of the complex sentences are split into several simpler sentences (1:m alignments) and 2.91% were merged into one simple sentence. The remaining 3.57% are a fusion of several complex and several simple sentences (see Appendix F). Overall, the average sentence length has increased during simplification (complex: 12.64, simple: 13.02) which might be due to splitting long compound words into several tokens. Manual Evaluation. For manual evaluation of DEPLAIN-APA, 46 randomly sampled sentences were rated. 
The ratings (see Table 2) confirm that the corpus contains rather mild simplifications: the original sentences are already simple (4.39±0.77, max=5) and they are only simplified a bit (0.57±0.86). Furthermore, the original and the simplified sentences are very grammatical (complex: 1.96±0.29, simple: 2.0±0.0), rather coherent (complex: 3.26±1.6, simple: 3.54±1.54), and preserve the meaning (4.33±0.97).

| corpus | n | Simp. (pair) | LexSimp (pair) | StructSimp (pair) | MeaningP. (pair) | Coh. (complex) | Coh. (simple) | Gram. (complex) | Gram. (simple) | Simp. (complex) | Simp. (simple) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| APA | 46 | 0.57±0.86 | 0.28±0.54 | 0.5±0.81 | 4.33±0.97 | 3.26±1.6 | 3.54±1.54 | 1.96±0.29 | 2.0±0.0 | 4.39±0.77 | 4.72±0.46 |
| WEB | 384 | 1.04±0.82 | 0.67±0.75 | 0.95±0.87 | 4.29±0.93 | 2.82±1.48 | 3.08±1.4 | 1.72±0.79 | 1.96±0.26 | 3.48±1.18 | 4.46±0.69 |
| news | 46 | 0.57±0.86 | 0.28±0.54 | 0.5±0.81 | 4.33±0.97 | 3.26±1.6 | 3.54±1.54 | 1.96±0.29 | 2.0±0.0 | 4.39±0.77 | 4.72±0.46 |
| bible | 155 | 1.39±0.68 | 0.98±0.78 | 1.28±0.77 | 4.34±0.84 | 2.12±1.22 | 2.63±1.22 | 1.45±1.06 | 1.92±0.35 | 2.97±1.27 | 4.44±0.72 |
| lang. | 157 | 0.67±0.74 | 0.36±0.57 | 0.57±0.73 | 4.46±0.73 | 3.83±1.27 | 3.82±1.27 | 1.96±0.22 | 1.97±0.21 | 4.01±0.81 | 4.43±0.71 |
| fiction | 72 | 1.1±0.95 | 0.69±0.78 | 1.08±1.02 | 3.82±1.29 | 2.08±1.06 | 2.42±1.33 | 1.75±0.71 | 2.0±0.0 | 3.42±1.16 | 4.56±0.58 |

Table 2: Manual ratings (mean ± standard deviation) of randomly sampled sentence pairs per subcorpus and domain. Simplicity, LexSimp, and StructSimp of the sentence pair and Grammaticality of the complex/simple sentences are rated from -2 to +2; MeaningP., Coherence, and Simplicity of the complex/simple sentences are rated from 1 to 5.

Simplification Operations. 184 sentence pairs were annotated with their transformations; for some sentence pairs, more than one transformation was performed at the same time. 47.83% of the pairs are changed on the sentence level and in 84.24% a change was performed on the word level. On the sentence level, most often a sentence was reordered (48.86%), split (35.23%), or rephrased (12.5%). On the word level, most often a lexical substitution was performed (84.24%), a word was added (46.45%) or a word was deleted (35.48%).

Interpretation & Summary This analysis shows that the simplifications of DEPLAIN-APA are of a high quality (grammaticality, meaning preservation, coherence) and that they contain a lot of different simplification strategies (e.g., reordering, splitting, lexical substitution). So even if they are labeled as "mild" simplifications due to their close language levels (B1 to A2), they seem to be very valuable as training data for TS.

## 4.3 DEplain-web

For the sentence-level part of DEPLAIN-WEB, 147 of the 456 parallel documents are manually sentence-wise aligned. The manual alignment process resulted in 1,846 sentence pairs (see Table 1 and Appendix A).

Alignment Statistics. In contrast to DEPLAIN-APA, both the complex sentences (avg. web = 22.59, avg. APA = 12.64) and the simplified sentences (avg. web = 19.76, avg. APA = 13.02) are longer on average, which is due to the different complexity levels (in the web data, complex is comparable to CEFR level C2, and A2-B2 for the simple documents). Following that, the sentence pairs of DEPLAIN-WEB are more often split (43.12%) than those of DEPLAIN-APA (17.99%). However, the most frequent alignment type is still the 1:1 alignment (46.86%). Only 4.06% of the complex sentences are merged and 5.96% are fused. For more statistics on DEPLAIN-WEB see Table 1, Table 6c and Appendix F.

Manual Evaluation. 384 randomly sampled sentence pairs are rated regarding simplification aspects (see Table 2).
Overall, during the simplification process, the sentences were improved in coherence (complex: 2.82±1.48, simple: 3.08±1.4), grammaticality (complex: 1.72±0.79, simple: 1.96±0.26) and simplicity (complex: 3.48±1.18, simple: 4.46±0.69). As presumed before, the original sentences of the bible (2.97±1.27) and the fictional literature (3.42±1.16) are more complex than the other original texts (even if not reflected in the FRE). (This might be due to the fact that FRE is built for the text level and not the sentence level; the calculation seems to fail for some sentences, e.g., "Anti-Semitismus." (engl.: "Antisemitism.") got a score of -172 (extremely difficult to understand), whereas "Tom!" is scored with 120.5 (extremely easy to understand). Therefore, these scores must be interpreted with caution.) However, their simplicity scores of the simple sentences (bible: 4.44±0.72, fiction: 4.56±0.58) are comparable to the scores of the other domains; therefore, these alignments can be seen as "strong" simplifications. Furthermore, they also have a higher average for structural and lexical simplifications than the other domains. Overall, DEPLAIN-WEB is comparable to DEPLAIN-APA in terms of meaning preservation (WEB: 4.29±0.93, APA: 4.33±0.97) and grammaticality (WEB simple: 1.96±0.26, APA simple: 2.0±0.0), and contains stronger simplifications (WEB: 1.04±0.82, APA: 0.57±0.86), but the simplified web texts are less coherent than the simplified news (WEB simple: 3.08±1.4, APA simple: 3.54±1.54).

Simplification Operations. 350 pairs were rated, 50.57% with changes on the sentence level and 69.43% on the word level. The manual annotation corresponds to the automatic calculation of alignment types: the most frequent change on the sentence level is the split of a sentence (50.57%). 30.51% of the pairs are rephrased and 14.69% are reordered. Interestingly, there is also a high percentage of verbal changes (7.91%), which include changes from passive to active or subjunctive to indicative. On the word level, again, lexical substitution is performed most often (86.42%); in 24.28% at least one word is deleted, and in 11.52% one word is added.

Interpretation & Summary This analysis again shows a mix of different simplification strategies, including lexical changes as well as syntactical changes. The manual ratings also lead to the assumption that the simplifications are strong and of high quality. Therefore, this corpus can also be a great benefit for German TS. Furthermore, the manually aligned sentence pairs and the document pairs of DEPLAIN-WEB can be used for evaluating alignment algorithms across different domains. The alignment algorithm can then be used to automatically align the not-yet-aligned documents of DEPLAIN-WEB. We show this process in the next section.

## 5 Automatic Sentence-Wise Alignment

To exemplify the usage of the manual alignments and to provide sentence-wise alignments for the unaligned documents of DEPLAIN-WEB, we evaluate different alignment algorithms on the manually aligned data.

## 5.1 Alignment Methods

We evaluated the following alignment methods: i) *LHA* (Nikolov and Hahnloser, 2019), ii) *SentenceTransformer* (Reimers and Gurevych, 2020) with *LaBSE* (Feng et al., 2022; https://huggingface.co/sentence-transformers/LaBSE) and RoBERTa (Conneau et al., 2020; https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer), iii) *VecAlign* (Thompson and Koehn, 2020), iv) *BertAlign* (Liu and Zhu, 2022), v) *MASSAlign* (Paetzold et al., 2017), and vi) *CATS* (Štajner et al., 2018).
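As an illustration of the embedding-based 1:1 alignment idea behind the SentenceTransformer baseline, here is a minimal sketch that matches each complex sentence to its most similar simplified sentence with LaBSE embeddings. The similarity threshold and the greedy one-best matching are simplifying assumptions for illustration, not the exact configuration used in our experiments.

```python
# Minimal sketch: greedy 1:1 sentence alignment with multilingual sentence embeddings.
# The threshold (0.7) and the greedy best-match strategy are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

complex_sents = ["Die Kommission hat den Bericht am Dienstag veröffentlicht.",
                 "Die Ergebnisse wurden von Fachleuten geprüft."]
simple_sents = ["Der Bericht ist seit Dienstag öffentlich.",
                "Fachleute haben die Ergebnisse geprüft."]

emb_c = model.encode(complex_sents, convert_to_tensor=True, normalize_embeddings=True)
emb_s = model.encode(simple_sents, convert_to_tensor=True, normalize_embeddings=True)

sim = util.cos_sim(emb_c, emb_s)  # cosine similarity matrix (|complex| x |simple|)

for i, row in enumerate(sim):
    j = int(row.argmax())
    if float(row[j]) >= 0.7:  # keep only sufficiently similar pairs
        print(f"{complex_sents[i]!r}  <->  {simple_sents[j]!r}  ({float(row[j]):.2f})")
```

Such a greedy procedure can only produce 1:1 alignments, which is exactly the limitation discussed for LHA, SentenceTransformer, and CATS below.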
Before testing any of these alignment methods, we investigated the implementation of their algorithms and checked for any room for adaptation to benefit our purpose (more details on our adaptations can be found in Appendix H).

## 5.2 Evaluation of Alignment Methods

We chose the subcorpus of DEPLAIN-WEB that has manual alignments and is open for sharing (second row in Table 1) for evaluating the methods, as it has a sufficient number of alignments representing different domains and different types of alignments (1:1 and n:m). The dataset comprises 147 aligned pairs of documents; these complex-simple document pairs were split into 6,138 and 6,402 sentences, respectively. The manual alignment of these sentences resulted in 2,741 alignments, comprising 1,750 1:1 alignments (out of which 887 are identical pairs), 804 1:m alignments, 77 n:1 alignments, and 110 n:m alignments (where both n and m are > 1).

For evaluation, we treat the alignment task as a binary classification problem (either aligned or not aligned) and report precision, recall, and F1-score. We do not consider partial alignments within the evaluation. We argue that for curating a finetuning dataset for automatic text simplification systems, keeping only accurate alignments is more important than not missing any; therefore, we value precision over recall. Hence, we also measure the Fβ score with β = 0.5, which weighs precision more than recall.

| name | P (1:1) | R (1:1) | F1 (1:1) | F0.5 (1:1) | P (n:m) | R (n:m) | F1 (n:m) | F0.5 (n:m) |
|---|---|---|---|---|---|---|---|---|
| LHA | .94 | .41 | .57 | .747 | - | - | - | - |
| Sent-LaBSE | **.961** | .444 | .608 | **.780** | - | - | - | - |
| Sent-RoBERTa | .960 | .444 | .607 | .779 | - | - | - | - |
| CATS-C3G | .247 | **.553** | .342 | .278 | - | - | - | - |
| VecAlign | .271 | .404 | .323 | .290 | .260 | .465 | .333 | .285 |
| BERTalign | .743 | .465 | .572 | .664 | .387 | .561 | .458 | .412 |
| MASSalign | .846 | .477 | **.610** | .733 | **.819** | .509 | **.628** | **.730** |

Table 3: Evaluation of the alignment methods on the manually aligned, openly licensed part of DEplain-web (1:1 and n:m alignments).

## 5.3 Results

Three of the studied alignment methods can only produce 1:1 alignments (LHA, SentenceTransformer, CATS), and the other three methods can additionally produce n:m alignments, where n or m equals 1, but not both (VecAlign, BertAlign, MASSAlign). Theoretically, our ideal aligner should be able to produce n:m alignments with high precision, as splitting and merging are frequent in TS corpora. However, in our experiments, we observed that producing n:m alignments is a difficult task. We found that SentenceTransformer using the multilingual model LaBSE (Feng et al., 2022) achieved very precise 1:1 results with a fair recall as well (see Table 3). On the other hand, MASSAlign performed best on the n:m results, also with acceptable 1:1 results (see Table 3). Hence, we conclude that MASSAlign is the most suitable aligner for our use case, as it i) produces n:m alignments and ii) has fairly high scores for 1:1 and n:m alignments. Therefore, we recommend MASSAlign to be used to automatically align the documents which the web crawler can scrape.
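Since we value precision over recall when selecting an aligner (Section 5.2), the Fβ score with β = 0.5 is reported alongside precision, recall, and F1. A minimal sketch of computing these scores when the alignment decisions are framed as binary labels over candidate sentence pairs (the label vectors below are toy placeholders, not real DEplain annotations):

```python
# Toy example: score an aligner's binary decisions over candidate sentence pairs.
# y_true / y_pred are illustrative placeholders, not real DEplain annotations.
from sklearn.metrics import precision_score, recall_score, f1_score, fbeta_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # gold: pair aligned (1) or not aligned (0)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # aligner's decisions

print("P    =", precision_score(y_true, y_pred))
print("R    =", recall_score(y_true, y_pred))
print("F1   =", f1_score(y_true, y_pred))
print("F0.5 =", fbeta_score(y_true, y_pred, beta=0.5))  # weighs precision higher than recall
```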
## 5.4 Corpus Statistics

Running MASSAlign on our unaligned corpus of DEPLAIN-WEB results in 1,594 sentence alignments. Following the statistics of the manually aligned part of DEPLAIN-WEB (1,846 aligned pairs out of 6,138 complex sentences), a perfect aligner should theoretically align at most around 30% of the complex sentences on average, which corresponds to 5,980 sentence pairs on the not-yet-aligned documents that we possess (with 19,934 complex sentences). However, as we set up our experiments with the aim of getting a precise aligner that values quality over quantity, these expected numbers were much reduced in reality (to approx. 8%).

## 6 Automatic Text Simplification

To exemplify the usage of DEPLAIN for training and evaluating TS models, we present results on finetuning *long-mBART* on our document-level corpus as well as finetuning *mBART* on our sentence-level corpus, using the code provided by Rios et al. (2021).

## 6.1 Data

We have split the document and sentence pairs of DEPLAIN-APA and DEPLAIN-WEB into training, development, and testing splits; the sizes of all splits are provided in Appendix I. We report evaluation metrics on the test sets of DEPLAIN-APA and DEPLAIN-WEB for both document- and sentence-level systems. More results on other test data sets can be found in Appendix J.

## 6.2 Evaluation of Text Simplification

For evaluation, we use the following automatic metrics provided in the evaluation framework EASSE (Alva-Manchego et al., 2019): SARI (Xu et al., 2015) for simplification, BERTScore Precision (BS-P) (Zhang et al., 2019) for quality and semantic similarity to the target reference, and BLEU (Papineni et al., 2002) for meaning preservation, following the recommendations of Alva-Manchego et al. (2021) regarding TS evaluation on English texts. As our corpus has just one reference and not multiple as English TS corpora, e.g., Newsela (Xu et al., 2015) or ASSET (Alva-Manchego et al., 2020a), SARI might not work as well as expected. Instead of the Flesch-Kincaid Grade Level (FKGL), which is built for English data only, we report the German version of the Flesch-Reading-Ease (for readability). (We use these metrics even though it has been shown, e.g., for SARI (Alva-Manchego et al., 2021), BLEU (Sulem et al., 2018a) and FKGL (Tanprasert and Kauchak, 2021), that they are not perfect for measuring simplification, as no more reliable metrics exist yet. A manual evaluation would be most reliable, but this is out of the scope of this work, as we are evaluating the validity of the dataset and not the TS system.)

As baseline, we report the results of a src2src baseline (identity baseline), in which the complex source sentences are simply copied and, hence, the complex sentences are used as original and as potential simplification data. We cannot use a reference baseline (tgt2tgt) as used in related works, because DEPLAIN has only one reference to evaluate against; hence, the scores would always result in the highest values, e.g., 100 for SARI.

## 6.3 Results

## 6.3.1 Document-Level Text Simplification System

The evaluation results of the document simplification systems are summarized in Table 4. In terms of SARI, all the fine-tuned models outperform the identity baseline (src2src) on both test sets (further results on another test dataset are added in Appendix J.1). However, against our hypothesis, the strong simplifications of the web data seem to be easier to simplify (SARI>43) than the mild simplifications of the APA data (SARI>35). For the BLEU score, the higher the score, the more of the content was copied (Chatterjee and Agarwal, 2021). So, if the BLEU score is lower for the system outputs than for the identity baseline, the simplification system has changed something and not only copied. Therefore, it is reasonable that the BLEU score is higher for the src2src baselines than for some system outputs (see Xu et al. 2016; Sulem et al. 2018c; Chatterjee and Agarwal 2021). A human evaluation is required in order to obtain a more reliable assessment of the quality of the results: although these metrics are the traditionally used metrics in this area, they were originally designed for sentence evaluation, not document evaluation.
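To make the metric setup concrete, the following is a minimal sketch of computing SARI and BLEU for a single system output. It assumes EASSE's corpus_sari interface and sacreBLEU's compatibility API; the German sentences are invented toy data, not DEplain outputs, and DEplain provides exactly one reference per source sentence.

```python
# Minimal sketch of the automatic evaluation: SARI (EASSE) and BLEU (sacreBLEU).
# The sentences are toy examples; in DEplain there is one reference per source sentence.
from easse.sari import corpus_sari   # assumed EASSE interface
import sacrebleu

orig_sents = ["Die Novelle wurde vom Parlament mit großer Mehrheit verabschiedet."]
sys_sents  = ["Das Parlament hat das neue Gesetz beschlossen."]
refs_sents = [["Das Parlament hat das Gesetz mit großer Mehrheit beschlossen."]]  # one reference set

sari = corpus_sari(orig_sents=orig_sents, sys_sents=sys_sents, refs_sents=refs_sents)
bleu = sacrebleu.corpus_bleu(sys_sents, refs_sents).score

print(f"SARI: {sari:.2f}  BLEU: {bleu:.2f}")
```

BERTScore Precision and the German FRE can be added analogously; as noted above, all of these scores should be interpreted with caution for German TS.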
| train data | n | SARI ↑ | BLEU ↑ | BS-P ↑ | FRE ↑ |
|---|---|---|---|---|---|
| DEplain-APA | 387 | **44.56** | **38.136** | **0.598** | **65.4** |
| DEplain-web | 481 | 35.02 | 12.913 | 0.475 | 59.55 |
| DEplain-APA+web | 868 | 42.862 | 36.449 | 0.589 | 65.4 |
| src2src-baseline | - | 17.637 | 34.247 | 0.583 | 58.85 |

(a) DEPLAIN-APA test (n=48)

| train data | n | SARI ↑ | BLEU ↑ | BS-P ↑ | FRE ↑ |
|---|---|---|---|---|---|
| DEplain-APA | 387 | 43.087 | 21.9 | 0.377 | **64.7** |
| DEplain-web | 481 | 49.584 | 23.282 | **0.462** | 63.5 |
| DEplain-APA+web | 868 | **49.745** | **23.37** | 0.445 | 57.95 |
| src2src-baseline | - | 12.848 | 23.132 | 0.432 | 59.4 |

(b) DEPLAIN-WEB test (n=147)

Table 4: Results on Document Simplification using finetuned long-mBART. n corresponds to the length of the training data.

## 6.3.2 Sentence-Level Text Simplification System

The evaluation results of the sentence-level systems are summarized in Table 5 (further results on other test data are added in Appendix J). These results can be seen as baselines for further experiments with the DEPLAIN corpus. Comparing the FRE of DEPLAIN-APA and DEPLAIN-APA+WEB on the two test sets, the DEPLAIN-APA test always achieves a higher FRE. Also, the model trained on APA+web data does not differ much from the one trained only on APA when tested on the APA test set; the additional web data does not affect the model much in this case. However, when tested on the web test set, adding the web data to the training data improved all the measured metrics. This supports that adding training data from different contexts leads to a better generalization of the model. The combination of DEPLAIN-APA+WEB achieves the highest scores in terms of all metrics. Although our data does not include simplifications for children, the SARI scores on the ZEST test set are better than those of the reported models (increase by approx. 5 points on SARI, see Appendix J.2, Table 16). This result might be due to issues with the automatic TS metrics. A manual evaluation is required to justify whether finetuning mBART on DEPLAIN-APA+WEB can really simplify texts for children. The target group of the texts a model is trained on should always be considered in the interpretation of model evaluation, as each target group requires different simplification operations (Gooding, 2022).

| train data | n | SARI ↑ | BLEU ↑ | BS-P ↑ | FRE ↑ |
|---|---|---|---|---|---|
| DEplain-APA | 10660 | 34.818 | 28.25 | 0.639 | **63.072** |
| DEplain-APA+web | 11941 | **34.904** | **28.506** | **0.64** | 62.669 |
| src2src-baseline | - | 15.249 | 26.893 | 0.627 | 59.23 |

(a) DEPLAIN-APA test (n=1231)

| train data | n | SARI ↑ | BLEU ↑ | BS-P ↑ | FRE ↑ |
|---|---|---|---|---|---|
| DEplain-APA | 10660 | 30.867 | 15.727 | 0.413 | 64.516 |
| DEplain-APA+web | 11941 | 34.828 | 17.88 | 0.436 | 65.249 |
| src2src-baseline | - | 11.931 | 20.85 | 0.423 | 60.825 |

(b) DEPLAIN-WEB test (n=1846)

Table 5: Results on Sentence Simplification using finetuned mBART. n corresponds to the length of the training data.
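As a usage illustration for models trained on DEplain, the following is a minimal sketch of sentence simplification inference with a fine-tuned seq2seq checkpoint via Hugging Face transformers. The checkpoint path is a hypothetical placeholder and the generation settings are illustrative; they are not the exact configuration of the code by Rios et al. (2021) used above.

```python
# Minimal inference sketch for a sentence simplification model fine-tuned on DEplain.
# "path/to/mbart-finetuned-deplain" is a hypothetical placeholder checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path/to/mbart-finetuned-deplain"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# Note: mBART tokenizers additionally expect language codes (e.g., de_DE);
# the exact handling depends on the checkpoint and is omitted in this sketch.

complex_sentences = [
    "Die Einhaltung der neuen Verordnung wird von den zuständigen Behörden überwacht.",
]

inputs = tokenizer(complex_sentences, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

As stressed above, outputs of such a model should be checked manually before being shown to vulnerable target groups.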
## 7 Conclusion and Future Work

In this paper, we have introduced a new German corpus for text simplification, called DEplain. The corpus contains data for document simplification as well as sentence simplification of news (DEPLAIN-APA) and web data (DEPLAIN-WEB). The major part of the sentence-wise alignments is manually aligned, and a part of it is also manually analyzed. The analysis shows that the subcorpus DEPLAIN-APA contains rather mild simplifications, whereas DEPLAIN-WEB contains rather strong simplifications. However, for both corpora, a large variety of simplification operations was identified. Furthermore, we evaluated automatic sentence alignment methods on our manually aligned data. In our experiments, MASSalign achieved the best results, but it identified only a few n:m alignments (where n > 1 or m > 1). One direction for future work is to further investigate n:m alignment algorithms for TS corpora and to include paragraphs in the automatic alignment process. We also showed first promising experiments on sentence and document simplification. However, these are just simple benchmarks and have only been evaluated with automatic metrics so far. In future work, the results should be verified by manual evaluation and could be improved by using more sophisticated approaches. Finally, we think that DEPLAIN can boost and improve research in German text simplification, to make more complex texts accessible to people with reading problems.

## Acknowledgments

We gratefully acknowledge the support of the Austria Presse Agentur (Austrian Press Agency, APA) for providing us with the news documents and Maya Kruse for her work on annotating the corpus. We further thank NVIDIA for the access to GPUs. This research is part of the PhD-program "Online Participation", supported by the North Rhine-Westphalian (German) funding scheme "Forschungskolleg".

## Limitations

Our work shows some limitations. A major restriction of our data is the different licenses of our proposed dataset, i.e., i) DEPLAIN-APA can be obtained for free for research purposes upon request, ii) the manually aligned part of DEPLAIN-WEB and the smaller part of the automatically aligned sentences are available under open licenses, e.g., CC-BY-4.0, iii) the other automatically aligned sentence pairs and their documents are not allowed to be shared. However, the web harvester and an automatic alignment method can be used to reproduce the document and sentence pairs. Furthermore, the web crawler does not perform well for all web pages and sometimes extracts one-token sentences. While such errors are not reflected in the manually aligned sentence pairs due to manual quality checks before alignment, no such check exists for the automatic alignments. As a consequence, a text simplification model may also learn wrong sentence structures and simplifications from this data. Therefore, more effort in automatic data cleaning is required. In addition, the web crawler currently mostly extracts HTML documents and only a few PDF files. The corpus could be increased if existing parallel PDF files were crawled and correctly extracted. Furthermore, the web crawler only harvests a given set of web pages and does not search the whole web for parallel German complex-simplified documents, as a general web crawler would.
Currently, a general web crawler wouldn't add much more parallel data than the proposed one, as this data is scarce at the moment and, if parallel data is available, there is no link between complex and simplified documents and the title of the pages are often that different that they cannot be aligned automatically. However, if in future more parallel texts are available we would like to extend our corpus with a general web crawler to also include more variance within the domains. Compared to TS corpora in English, e.g., WikiAuto or Newsela-Auto (Jiang et al., 2020), DE-PLAIN is smaller and is not (fully) balanced in terms of domains. However, we believe that the current size of the dataset is already large enough due to the high proportion of professionally simplified texts and the high-quality of manual alignments. Further, we are aiming at increasing the corpus following our dynamic approach, e.g., by extending the capabilities of the web harvester or the alignment algorithms. Furthermore, the automatic alignment methods currently align mostly 1:1 alignments but n:m alignments are important for text simplification and should be considered for training. Therefore, the automatic alignment methods should be improved in the direction of n:m alignments. Until then, we would recommend using the automatically aligned sentence pairs only for additional training data. For document simplification, a GPU with at least 24 GB of memory is required to reproduce our results with mBART and at least 16 GB for sentence simplification respectively. However, for other approaches, e.g., unsupervised learning or zeroshot approaches, the experiments with DEPLAIN-APA and DEPLAIN-WEB on the document- and sentence-level could be performed with less memory. For the evaluation of our text simplification models, we are just reporting automatic alignments, although they are mainly built for English TS and their quality is not evaluated on German yet. Our datasets has just one reference and not multiple as in ASSET or TurkCorpus, therefore SARI might work not as good as with several references. In addition, in some work (e.g., Alva-Manchego et al. (2021)) it was already shown that the automatic metrics do not perfectly correlate with human judgments, hence, the results of the automatic metrics should be interpreted with caution. It is recommended to manually evaluate the results, but this was out of the scope of this work which mainly focuses on proposing a new dataset and not new TS models. ## Ethics & Impact Statement Data Statement The data statement for DEPLAIN is available here: https://github.com/rstodden/DEPlain. ## Nlp Application Statement Intended use. In this work, we propose a new corpus for training and evaluating text simplification models. It is intended to use this corpus for training text simplification models or related works, e.g., text style transfer. The resulting text simplification system is intended to be used to simplify texts for a given target group (depending on the training data). However, the generated simplifications of the TS model might have some errors, therefore they shouldn't be shown to a potentially vulnerable target group before manually verifying their quality and possibly fixing them. The text simplification system could be provided to human translators who might improve and timely reduce their effort in manually simplifying a text. 
Furthermore, the dataset can be reused for related tasks to TS, this includes, but is not limited to, text leveling, evaluation of alignment methods, evaluation of automatic TS metrics, and analysis of intralingual translations. Misuse potential & Failure modes. One potential misuse of DEPLAIN is to reverse the input order of the texts into a deep learning model. The resulting system would be able to make the texts even more complex than simplifying them. Although a system that would be developed for producing complex texts can be used for beneficial use cases (e.g., generation of more challenging texts for language learners), however, it could be used to obscure pieces of information from some kind of audience on purpose. Furthermore, the TS system could generate content with low similarity to the complex sentence given as input and, hence, change the meaning of the original text. Due to this, it should always be stated that the simplification is generated automatically and might not reflect the original meaning of the source text. Biases. No biases are known yet. Collecting data from users. When a researcher requests access to the DEPLAIN-APA corpus, the name, the institution, and the email address of the researcher are saved by the authors of the paper. This is required to make the use of the dataset transparent to the data provider, i.e., the Austrian Press Agency. Environmental Report. For the manual alignment and annotation of the corpus, a server with the text simplification tool has run all time during the annotation duration. For the evaluation of all alignment methods, we required less than 1 GPU hour on an NVIDIA RTX A5000 with 24 GB. The experiments with document and sentence simplification, overall, took less than 18 GPU hours on a NVIDIA RTX A5000 with 24 GB. ## References Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia. 2020a. ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 4668–4679, Online. Association for Computational Linguistics. Fernando Alva-Manchego, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier automatic sentence simplification evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 49–54, Hong Kong, China. Association for Computational Linguistics. Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2020b. Data-driven sentence simplification: Survey and benchmark. *Computational Linguistics*, 46(1):135–187. Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2021. The (un)suitability of automatic evaluation metrics for text simplification. Computational Linguistics, 47(4):861–889. Toni Amstad. 1978. *Wie verständlich sind unsere* Zeitungen? Studenten-Schreib-Service. PhD Thesis. Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610. Ben Athiwaratkun, Andrew Wilson, and Anima Anandkumar. 2018. Probabilistic FastText for multi-sense word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1–11, Melbourne, Australia. 
Association for Computational Linguistics. Dennis Aumiller and Michael Gertz. 2022. Klexikon: A German dataset for joint summarization and simplification. In *Proceedings of the Thirteenth Language Resources and Evaluation Conference*, pages 2693–2701, Marseille, France. European Language Resources Association. Alessia Battisti, Dominik Pfütze, Andreas Säuberli, Marek Kostrzewa, and Sarah Ebling. 2020. A corpus for automatic readability assessment and text simplification of German. In *Proceedings of the Twelfth Language Resources and Evaluation Conference*, pages 3302–3311, Marseille, France. European Language Resources Association. Stefan Bott and Horacio Saggion. 2014. Text Simplification Resources for Spanish. Language Resources and Evaluation, 48(1):93–120. Dominique Brunato, Lorenzo De Mattei, Felice Dell'Orletta, Benedetta Iavarone, and Giulia Venturi. 2018. Is this sentence difficult? do you agree? In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2690–2699, Brussels, Belgium. Association for Computational Linguistics. Dominique Brunato, Felice Dell'Orletta, Giulia Venturi, and Simonetta Montemagni. 2015. Design and annotation of the first Italian corpus for text simplification. In *Proceedings of the 9th Linguistic Annotation Workshop*, pages 31–41, Denver, Colorado, USA. Association for Computational Linguistics. Klaus Buddeberg and Anke Grotlüschen. 2020. LEO 2018: Leben mit geringer Literalität. wbv. Rémi Cardon, Adrien Bibal, Rodrigo Wilkens, David Alfter, Magali Norré, Adeline Müller, Watrin Patrick, and Thomas François. 2022. Linguistic corpus annotation for automatic text simplification evaluation. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 1842–1866, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Niladri Chatterjee and Raksha Agarwal. 2021. Depsym: A lightweight syntactic text simplification approach using dependency trees. In Proceedings of the First Workshop on Current Trends in Text Simplification (CTTS 2021), pages 42–56. CEUR-WS. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Liam Cripwell, Joël Legrand, and Claire Gardent. 2023. Document-level planning for text simplification. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 993–1006, Dubrovnik, Croatia. Association for Computational Linguistics. Sarah Ebling, Alessia Battisti, Marek Kostrzewa, Dominik Pfütze, Annette Rios, Andreas Säuberli, and Nicolas Spring. 2022. Automatic text simplification for german. *Frontiers in Communication*, 7. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891, Dublin, Ireland. Association for Computational Linguistics. Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221. Itziar Gonzalez-Dios, María Jesús Aranzabe, and Arantza Díaz de Ilarraza. 2018. The corpus of Basque simplified texts (CBST). 
*LANGUAGE RESOURCES AND EVALUATION*, 52(1):217–247. Sian Gooding. 2022. On the ethical considerations of text simplification. In *Ninth Workshop on Speech* and Language Processing for Assistive Technologies (SLPAT-2022), pages 50–57, Dublin, Ireland. Association for Computational Linguistics. Silvia Hansen-Schirra, Jean Nitzke, and Silke Gutermuth. 2021. *An Intralingual Parallel Corpus of* Translations into German Easy Language (Geasy Corpus): What Sentence Alignments Can Tell Us About Translation Strategies in Intralingual Translation, pages 281–298. Springer Singapore, Singapore. Freya Hewett and Manfred Stede. 2021. Automatically evaluating the conceptual complexity of German texts. In *Proceedings of the 17th Conference* on Natural Language Processing (KONVENS 2021), pages 228–234, Düsseldorf, Germany. KONVENS 2021 Organizers. Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for sentence alignment in text simplification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7943–7960, Online. Association for Computational Linguistics. David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1537–1546, Sofia, Bulgaria. Association for Computational Linguistics. Joongwon Kim, Mounica Maddela, Reno Kriz, Wei Xu, and Chris Callison-Burch. 2021. BiSECT: Learning to split and rephrase sentences with bitexts. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 6193– 6209, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. David Klaper, Sarah Ebling, and Martin Volk. 2013. Building a German/simple German parallel corpus for automatic text simplification. In Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 11–19, Sofia, Bulgaria. Association for Computational Linguistics. Anaïs Koptient, Rémi Cardon, and Natalia Grabar. 2019. Simplification-induced transformations: typology and some characteristics. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 309–318, Florence, Italy. Association for Computational Linguistics. Lei Liu and Min Zhu. 2022. Bertalign: Improved word embedding-based sentence alignment for Chinese–English parallel corpora of literary texts. *Digital Scholarship in the Humanities*. Fqac089. Christiane Maaß. 2020. *Easy Language - Plain Language - Easy Language Plus. Balancing Comprehensibility and Acceptability*. Easy - Plain - Accessible. Frank & Timme, Berlin. Mounica Maddela, Fernando Alva-Manchego, and Wei Xu. 2021. Controllable text simplification with explicit paraphrasing. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pages 3536–3553, Online. Association for Computational Linguistics. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2020. Zero-shot crosslingual sentence simplification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5109–5126, Online. Association for Computational Linguistics. Robert Marzari. 2010. 
*Leichtes Englisch, schwieriges* Französisch, kompliziertes Russisch: Evaluation der Schwierigkeiten des Englischen, Deutschen, Französischen, Italienischen, Spanischen, Russischen und Polnischen als Fremdsprache. Schiler. Mary L McHugh. 2012. Interrater reliability: the kappa statistic. *Biochemia medica*, 22(3):276–282. Babak Naderi, Salar Mohtaj, Kaspar Ensikat, and Sebastian Möller. 2019. Subjective assessment of text complexity: A dataset for german language. Nikola I. Nikolov and Richard Hahnloser. 2019. Largescale hierarchical alignment for data-driven text rewriting. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 844–853, Varna, Bulgaria. INCOMA Ltd. Gustavo Paetzold, Fernando Alva-Manchego, and Lucia Specia. 2017. MASSAlign: Alignment and annotation of comparable documents. In Proceedings of the IJCNLP 2017, System Demonstrations, pages 1–4, Tapei, Taiwan. Association for Computational Linguistics. Gustavo Henrique Paetzold and Lucia Specia. 2016. Vicinity-driven paragraph and sentence alignment for comparable corpora. *CoRR*, abs/1612.04113. Alessio Palmero Aprosio, Sara Tonelli, Marco Turchi, Matteo Negri, and Mattia A. Di Gangi. 2019. Neural text simplification in low-resource conditions using weak supervision. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 37–44, Minneapolis, Minnesota. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Stefan Paun. 2021. Parallel text alignment and monolingual parallel corpus creation from philosophical texts for text simplification. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 40–46, Online. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512–4525, Online. Association for Computational Linguistics. Annette Rios, Nicolas Spring, Tannon Kew, Marek Kostrzewa, Andreas Säuberli, Mathias Müller, and Sarah Ebling. 2021. A new dataset and efficient baselines for document-level text simplification in German. In Proceedings of the Third Workshop on New Frontiers in Summarization, pages 152–161, Online and in Dominican Republic. Association for Computational Linguistics. Andreas Säuberli, Sarah Ebling, and Martin Volk. 2020. Benchmarking data-driven automatic text simplification for German. In *Proceedings of the 1st Workshop* on Tools and Resources to Empower People with REAding DIfficulties (READI), pages 41–48, Marseille, France. European Language Resources Association. Thorben Schomacker. 2023. Aligned narrative documents. https://github.com/tschomacker/ aligned-narrative-documents. Melanie Siegel, Dorothee Beermann, and Lars Hellan. 2019. Aspects of linguistic complexity: A german - norwegian approach to the creation of resources for easy-to-understand language. In *11th International Conference on Quality of Multimedia Experience QoMEX 2019, Berlin, Germany, June 5-7, 2019*, pages 1–3. IEEE. 
Nicolas Spring, Marek Kostrzewa, Annette Rios, and Sarah Ebling. 2022. Ensembling and score-based filtering in sentence alignment for automatic simplification of german texts. In Universal Access in HumanComputer Interaction. Novel Design Approaches and Technologies, pages 137–149, Cham. Springer International Publishing. Nicolas Spring, Annette Rios, and Sarah Ebling. 2021. Exploring German multi-level text simplification. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 1339–1349, Held Online. INCOMA Ltd. Sanja Štajner. 2021. Automatic text simplification for social good: Progress and challenges. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 2637–2652, Online. Association for Computational Linguistics. Sanja Štajner, Marc Franco-Salvador, Simone Paolo Ponzetto, Paolo Rosso, and Heiner Stuckenschmidt. 2017. Sentence alignment methods for improving text simplification systems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 97–102, Vancouver, Canada. Association for Computational Linguistics. Sanja Štajner, Marc Franco-Salvador, Paolo Rosso, and Simone Paolo Ponzetto. 2018. CATS: A tool for customized alignment of text simplification corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Regina Stodden. 2021. When the scale is unclear - analysis of the interpretation of rating scales in human evaluation of text simplification. In *Proceedings of* the First Workshop on Current Trends in Text Simplification (CTTS 2021), pages 84–95. CEUR-WS. Regina Stodden and Laura Kallmeyer. 2022. TSANNO: An annotation tool to build, annotate and evaluate text simplification corpora. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics: System Demonstrations, pages 145–155, Dublin, Ireland. Association for Computational Linguistics. Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. BLEU is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738–744, Brussels, Belgium. Association for Computational Linguistics. Elior Sulem, Omri Abend, and Ari Rappoport. 2018b. Semantic structural evaluation for text simplification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 685–696, New Orleans, Louisiana. Association for Computational Linguistics. Elior Sulem, Omri Abend, and Ari Rappoport. 2018c. Simple and effective text simplification using semantic and neural methods. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 162–173, Melbourne, Australia. Association for Computational Linguistics. Renliang Sun, Zhe Lin, and Xiaojun Wan. 2020. On the helpfulness of document context to sentence simplification. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 1411– 1423, Barcelona, Spain (Online). International Committee on Computational Linguistics. Teerapaun Tanprasert and David Kauchak. 2021. Flesch-kincaid is not a text simplification evaluation metric. 
In *Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)*, pages 1–14, Online. Association for Computational Linguistics.

Brian Thompson and Philipp Koehn. 2020. Exploiting sentence order in document alignment. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 5997–6007, Online. Association for Computational Linguistics.

Vanessa Toborek, Moritz Busch, Malte Boßert, Christian Bauckhage, and Pascal Welke. 2022. A new aligned simple German corpus.

Jan Trienes, Jörg Schlötterer, Hans-Ulrich Schildhaus, and Christin Seifert. 2022. Patient-friendly clinical notes: Towards a new text simplification dataset. In *Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)*, pages 19–27, Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.

Jan Trienes and Laura Vásquez-Rodríguez. 2023. Text simplification datasets. https://github.com/jantrienes/text-simplification-datasets.

Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. *Transactions of the Association for Computational Linguistics*, 3:283–297.

Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. *Transactions of the Association for Computational Linguistics*, 4:401–415.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. *arXiv preprint arXiv:1904.09675*.

## Appendix

## A Overview of Existing German TS Corpora

In Table 6, an overview of existing German text simplification corpora is shown. For Hewett and Stede (2021), we report the numbers from the updated version of March 2022.

## B Worse Examples of APA-LHA

In the APA-LHA corpus (Spring et al., 2021), we found original sentences repeated several times in the training data, aligned to multiple different simplified sentences (called "simplifications" in Table 7). This format can comprise either different simplifications of the same complex sentence or a split of one complex sentence into several simple sentences. If we interpret these simplifications as alternative simplifications, some original-simple pairs seem to be wrongly aligned. In Table 7, we show two alignment pairs in which the meaning is heavily changed. In the first example, the original and all simplifications are related to a career but at different stages, i.e., looking back at the career, starting the career, and quitting the career. In the second example, the terms unemployment and short-time work are mixed up, and the numbers are also totally different. In rows three and four of Table 7, we provide more examples regarding the unclear format: it is not clear whether pairs with identical complex sentences are alternative simplifications (references, see row 3) or 1:m alignments (see row 4).

## C Examples of Mild and Strong Simplifications

In Table 8, some examples of strong and mild simplifications from DEPLAIN are provided, including English translations.

## D Inter-Annotator Agreement

In Table 9, we show an overview of the inter-annotator agreement per domain.

## E Details on DEPLAIN-WEB

In this section, we describe further details on the web harvester and the process of creating the dataset.

## E.1 Overview of Web Pages

The web pages in Table 10 were crawled for generating DEPLAIN-WEB.
We selected these pages based on a web research regarding web pages in German plain language ("Einfache Sprache"). We further checked the references of translation offices, e.g., which web pages are simplified by them and if they contain parallel alignments. ## E.2 Document Alignment The documents are aligned with three strategies in the following order: i) automatic alignment by the reference to the simple document within the complex documents, ii) automatically matching the titles of the documents on the website, and iii) aligning the documents manually. All the books in the fiction domain were manually aligned on the document level as the complex data is provided on another web page (i.e. Projekt Gutenberg26) than the simplified data (i.e., Spaß am Lesen Verlag27, Passanten Verlag28, or NDR29). For the simple books, only a preview was available, therefore we only added the first section of the complex book. If the simplified book summarizes parts of more than the first chapter, the documents might be not comparable. We haven't checked that manually. In addition, to download and align the HTML files, the web crawler also extracts some metadata, the plain text of the documents, and the plain text including paragraph endings. To further align the documents on paragraph or sentence-level the crawler can be integrated into existing alignment tools, e.g., TS-anno (Stodden and Kallmeyer, 2022). ## E.3 Technical Details. We used Python 3 and the Python Package Beautiful Soup for the HTML data and pymupdf for the PDF data. The code of the web crawler is freely available under the CC-BY-4.0 license and can be accessed via https://github.com/ rstodden/DEPlain. Because the texts are on editable websites the content may change. Therefore we note the date when we crawled the data, the data can then 26https://www.projekt-gutenberg.org/ 27https://einfachebuecher.de/ 28https://www.passanten-verlag.de/ 29https://www.ndr.de/fernsehen/ barrierefreie_angebote/leichte_ sprache/Maerchen-in-Leichter-Sprache, maerchenleichtesprache100.html | Reference | Name | Target Simple | Domain | Availability | # Docs | # Sent. Complex | # Sent. Simple | # Aligned Pairs | Alignment | |------------------------------------------------------------------|----------------------|------------------------------------------|----------------------------------------------------------------------------------|------------------------------------------------------------------------|-----------------------------------------------------------------|-------------------|------------------|--------------------|-------------| | Siegel et al. (2019) | leichte-sprachecorpus | mixed | web | https://github.com/ hdaSprachtechnologie/ easy-to-understand_ language | 351 | | | | | | (2021) | Lexica-corpusklexikon | children between 6-12 | wikipedia https://github.com/ | | | | | | | | Hewett | and | Stede | 1090 | auto | | | | | | | fhewett/lexica-corpus | | | | | | | | | | | (2021) | Lexica-corpusminiklexikon | children younger than 6 | wikipedia https://github.com/ | | | | | | | | Hewett | and | Stede | 1090 | auto | | | | | | | fhewett/lexica-corpus | | | | | | | | | | | Rios et al. (2021)∗ | 20Minuten | general | news | https://github.com/ | 18305 | ? | | | | | ZurichNLP/20Minuten | | | | | | | | | | | (2022) | Klexikon | children between 6-12 | wikipedia https://github.com/ | | | | | | | | Aumiller and Gertz | 2898 | 701577 | 94214 | auto | | | | | | | dennlinger/klexikon | | | | | | | | | | | Ebling et al. 
(2022) | Wikipedia-Corpus | A2 | wikipedia - | 106126 6933192 | 1077992 | ? | | | | | Trienes et al. (2022) † | simple-patho | laypeople | medical | https://github.com/ | 851 | 23,554 | 28,155 | 2,280 (paragraphs) | manual | | jantrienes/simple-patho (not yet available) | | | | | | | | | | | Schomacker (2023) | MILS+EB+PV+KV mixed | fiction | https://github. com/tschomacker/ aligned-narrative-documents (not yet available) | | | | | | | | (a) Overview of German simplification corpora on document-level. | | | | | | | | | | | Reference | Name | Target Simple | Domain | Availability | # Docs # Sent. Complex # Sent. Simple # Aligned Pairs Alignment | | | | | | Klaper et al. (2013) | Klaper | Leichte Sprache | web | upon request | 256 | approx. 2000 | manual&auto | | | | Naderi et al. (2019) | TextComplexityDE19 Non-native speaker (written by non-native speaker) wikipedia https://github. | 23 | 250 | manual | | | | | | | com/babaknaderi/ TextComplexityDE | | | | | | | | | | | Battisti et al. (2020); Ebling et al. (2022) Web | A2 | web | - | 378 | 17121 | 21072 | CATS | | | | Mallinson et al. (2020) ZEST-data | children between 5-7 | science for children https://github.com/ | 20 | 1198 | manual | | | | | | Säuberli et al. (2020); | Jmallins/ZEST-data | | | | | | | | | | Ebling et al. (2022) ‡ | APA-benchmark | B1 | news | - | 3616 | CATS-WAVG | | | | | Kim et al. (2021) | BiSECT | web & politics | https://github.com/ | 186,237 | | | | | | | (2021) | GEASY | Leichte Sprache | mixed | - | 93 | 1596 | 4090 | memsource & | | | mounicam/BiSECT | | | | | | | | | | | Hansen-Schirra et al. | (commercial) | | | | | | | | | | Spring et al. (2021); Ebling et al. (2022) ‡ APA-LHA-or-a2 | A2 | news | https://zenodo.org/record/ 2426 | 60732 | 30432 | 9456 | LHA | | | | Spring et al. (2021); | 5148163 | | | | | | | | | | Ebling et al. (2022) ‡ | APA-LHA-or-b1 | B1 | news | https://zenodo.org/record/ 2426 | 60732 | 30328 | 10268 | LHA | | | 5148163 | | | | | | | | | | | Spring et al. (2021); Ebling et al. (2022) capito | B1 | news | - | 1055 | 183216 | 68529 | 54224 | LHA | | | Spring et al. (2021); Ebling et al. (2022) capito | A2 | news | - | 1546 | 183216 | 168950 | 136582 | LHA | | | Spring et al. (2021); Ebling et al. (2022) capito | A1 | news | - | 839 | 183216 | 24243 | 10952 | LHA | | | Toborek et al. (2022) | Simple German Corpus A1 | web | https://github. | 530 | 5889 | CATS | | | | | com/buschmo/ Simple-German-Corpus | | | | | | | | | | (b) Overview of German simplification corpora on sentence-level. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) ![15_image_2.png](15_image_2.png) ![15_image_3.png](15_image_3.png) ![15_image_4.png](15_image_4.png) ![15_image_6.png](15_image_6.png) Reference Name **Target Simple Domain Availability # Docs # Sent. Complex # Sent. Simple # Aligned Pairs Alignment** DEPLAIN-APA ‡ A2 news https://github.com/ rstodden/DEPlain 483 14,071 (aligned) 16,505 (aligned) 13122 manual DEPLAIN-WEB mixed web https://github.com/ rstodden/DEPlain 147 (+609) 2,287 (aligned) 4,009 (aligned) 1856 (+1594) manual&auto (c) Overview of DEPLAIN, the proposed German simplification corpus on document- and sentence-level. ![15_image_5.png](15_image_5.png) Table 6: Overview of German simplification corpora. All corpora contain German languages (no dialect specified) except 20Minuten (see ∗, Swiss German, de-CH), APA-benchmark, APA-LHA, DEPLAIN-APA (see ‡, Austrian German, DE-AT). All source texts from all corpora address a general audience, except simple-path (see †). 
![15_image_7.png](15_image_7.png) | complex-ids | Error Description | Original | Simplifications "Er begann seine Karriere mit 18 Jahren .", | |------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------| | A2#35;A2#235 | Original and all simplifications are related to "Karriere" (career) but with different meaning. | "Auf seine Karriere blickte er "Er beendet jetzt seine Karriere ." gerne zurück .", "Derzeit sind in Österreich auch noch 1,3 Millionen Menschen in Kurz-Arbeit .", "Die Kurz-Arbeit gibt es , damit nicht noch mehr Menschen ihre Arbeit verlieren .", "Dort gibt es jetzt 5.000 Arbeitslose weniger als vor einer Woche .", "Vor einer Woche waren noch 9.000 Menschen mehr arbeitslos .", "Ihr Geld bekommen sie aber nicht mehr von den Firmen , sondern vom Staat ." | | | The simplifications contain a mix of i) "unemployment" and "short time | | | | | A2#2621;A2#265; A2#6530;A2_dev#85; A2_dev#417 work", ii) different digits, and iii) misalignment across documents. | "Derzeit sind damit aber noch immer über 123.000 Personen mehr arbeitslos als vor der Coronakrise ." "Laut Polizei griff der Bub im Garten nach dem Hundefutter .", "Dabei kam es zu der Attacke ." | | | | B1#3392;B1#6763 | The simplifications can be "Laut Polizei dürfte das Kind interpreted as a split of the nach dem im Garten für den original sentence into two Vierbeiner abgelegten Futter sentences (one 1:m simplification). gegriffen haben , als es zu der Attacke kam ." "Menschen können Hunde und Katzen mit dem Corona-Virus anstecken .", "Hunde und Katzen können mit dem Corona-Virus angesteckt werden .' | | | | A2#246; A2#5966 | The simplifications can be interpreted as alternative simplifications (two 1:1 simplifications). "Menschen können Hunde und Katzen mit Coronavirus anstecken" (a) Original German version. | | | | complex-ids | Error Description | Original | Simplifications "He began his career at the age of 18 .", | | A2#35;A2#2352 | Original and all simplifications are related to "Karriere" (career) but with different meaning. | "He looks back on his career | "He is now ending his career ." | | with pleasure ." | "Currently, 1.3 million people in Austria are also still in short-time work .", "The short-time work exists so that more people do not lose their jobs .", | | | | "At present , however , this | "There are now 5,000 fewer unemployed there than a week ago .", | | | | still leaves over 123,000 more "A week ago , 9,000 more people were unemployed .", people unemployed than before the Corona crisis ." "But they no longer get their money from the companies , but from the state." | | | | | The simplifications contain a mix of i) "unemployment" and "short time | | | | | A2#2621;A2#265; A2#6530;A2_dev#85; A2_dev#417 work", ii) different digits, and iii) misalignment across documents. 
| "According to police , the boy reached for the dog food in the garden .", In the process, the attack occurred ." | | | | B1#3392;B1#6763 | The simplifications can be 'According to police , the interpreted as a split of the child may have reached for original sentence into two the food placed in the garden sentences (one 1:m simplification). for the quadruped , when it came to the attack ." "Humans can infect dogs and cats with the Corona virus . ", "People can infect dogs and "Dogs and cats can be infected with the Corona virus ." cats with coronavirus" | | | | A2#246;A2#5966 | The simplifications can be interpreted as alternative simplifications (two 1:1 simplifications). | (b) Translated English version. | | original simplification original (English) simplification (English) domain OL SL source ![17_image_4.png](17_image_4.png) ![17_image_1.png](17_image_1.png) ![17_image_2.png](17_image_2.png) mild Da entstand Helligkeit. Und es wurde hell. That's when brightness arose. And it became bright. bible C2 A1 Offene Bibel ![17_image_0.png](17_image_0.png) health C2 A2 Wort & Bild ![17_image_3.png](17_image_3.png) GmbH & Co. KG news B1 A2 Austria Press fiction C2 A2 Spaß am | domain | avg. | std. | interpretation | # sents | # docs | |------------------|--------|--------|------------------|-----------|----------| | bible | 0.7011 | 0.31 | moderate | 6903 | 3 | | fiction | 0.6131 | 0.39 | moderate | 23289 | 3 | | health | 0.5147 | 0.28 | weak | 13736 | 6 | | language learner | 0.9149 | 0.17 | almost perfect | 18493 | 65 | | news | 0.7497 | 0.28 | moderate | 25224 | 10 | | all | 0.8505 | 0.23 | strong | 87645 | 87 | be extracted using a web archieve, e.g., https: //archive.org/web/. It might be possible that the data is updated or that some sources are not available anymore. For each source, we aimed at downloading the complete relevant content of the websites. We removed parts such as navigation, advertisement, contact data, and other unnecessary stuff. ## F Deplain Alignment Statistics In this section, we show statistics of DEPLAIN regarding the alignment of the manual aligned documents. Table 11 summarize numbers of DEPLAIN-APA and DEPLAIN-WEB with respect to n : m alignments, where n and m are > 0, including rephrasing, splitting, merging, and fusion. Table 12 summarize numbers of DEPLAIN-APA and DEPLAIN-WEB with respect to n : m alignments, where n and m are ≤ 1, including copied sentences from the complex to the simplified document as well as deletions in the complex documents and additions in the simplified documents. These oddment pairs were automatically extracted after the documents were manually aligned. ## G Manual Evaluation Rating Aspects In this section, we summarize the aspects used for manual evaluation as well as the accompanied statements. The statements are translated to English, they were shown to the annotators in German. All of the aspects were rated on a 5-point Likert scale, either from -2 to +2 or 1 to 5. An overview of the aspects is shown in Table 13. ## H Description Of Adaptations Of Alignment Methods In this section, we are describing the alignment methods and the adaptations we made. LHA (Nikolov and Hahnloser, 2019) is an unsupervised method that finds 1:1 sentence alignments in monolingual parallel corpora where documents don't need to be aligned beforehand. It works with a hierarchical strategy by aligning documents on the first level and then aligning sentences within these documents. This method was recommended by Ebling et al. 
(2022) for aligning German parallel documents. subcorpus **website simple website complex simple complex domain description # doc.** EinfacheBücher https://einfachebuecher.de/ https://www.projekt-gutenberg. org/ PG SG/OG fiction Books in plain German 15 EinfacheBücherPassanten https://www.passanten-verlag.de/ https://www.projekt-gutenberg. org/ PG SG/OG fiction Books in plain German RS: 4 ApothekenUmschau https://www.apotheken-umschau.de/ ![18_image_1.png](18_image_1.png) PG SG health Health magazine in which diseases ![18_image_2.png](18_image_2.png) BZFE https://www.bzfe.de/ einfache-sprache/ †https://www.bzfe.de PG SG health Information of the German Federal Alumniportal https://www. services/sitemap/ † PG PG language learner Texts related to Germany and German traditions written for language Lebenshilfe https://www. inhalt/ ETR SG accessibility 49 Bibel https://offene-bibel.de/ †‡ https://offene-bibel.de/ ETR SG bible Bible texts in easy-to-read German 221 NDR-Märchen https://www.ndr.de/ EinfachTeilhaben https://www.einfach-teilhaben.de/ html https://www.einfach-teilhaben.de ETR SG accessibility 67 StadtHamburg https://www.hamburg.de/ hamburg-barrierefrei/ leichte-sprache/ https://www.hamburg.de ETR SG public authority Information of and regarding the StadtKöln https://www.stadt-koeln.de/ https://www.stadt-koeln.de ETR SG public authority Information of and regarding the ![18_image_0.png](18_image_0.png) ETR SG/OG fiction Fairytales in easy-to-read German 10 ``` Name # pairs 1:1 (rephrase) 1:n (split) n:1 (merge) n:m (fusion) DEPLAIN-APA 13122 9912 2360 382 468 DEPLAIN-WEB 1846 863 796 77 110 ``` Table 11: Statistics on n : m alignments on manual aligned documents, where n and m are > 0. Name $$\begin{array}{|l|l|}\text{}\#\text{pairs}&1\\ \hline\text{DEPLAIN-APA}&12353&1\\ \text{DEPLAIN-WEB}&5482&8\\ \end{array}$$ 0:1 (identical) (addition) 1:0 (deletion) DEPLAIN-APA 12353 2712 3964 5677 DEPLAIN-WEB 5482 887 1572 3050 Table 12: Statistics of additional n : m aligned pairs on manual aligned documents, where n and m are ≤ 1. Our adaptation of this method is comprised of: i) disabling the first level of aligning the documents as we already had the true document alignments, and ii) modifying the language-dependent tools used within the algorithm to fit the German language (e.g., the stopwords list, the tokenizer model, and the word embeddings model). Sentence Transformer (Reimers and Gurevych, 2020) is a simple straightforward method to find 1:1 sentence alignments by computing cosine similarity between embeddings vectors (produced by a sentence transformer model) of sentences on both sides of the monolingual parallel corpora, and then picking the most similar pairs and labeling them as aligned. This method is totally dependent on the used sentence transformer model, and the similarity threshold to set. Our adaptation of this method is comprised of: i) testing with new and different sentence transformer models that are either multi-lingual, i.e, LaBSE30 (Feng et al., 2022), or specially designed for German, RoBERTa31 (Conneau et al., 2020), 30https://huggingface.co/sentence-transformers/LaBSE 31https://huggingface.co/T-Systems-onsite/cross-en-de- | item | Statement | |--------|-------------| | Grammaticality The simplified sentence is fluent, and there are no grammatical errors. Grammaticality (original) The original sentence is fluent, there are no grammatical errors. Simplicity (simple) The simplified sentence is easy to understand. 
Simplicity (original) The original sentence is easy to understand. Coherence (simple) The simplified sentence is understandable without reading the whole paragraph. Coherence (original) The original sentence is understandable without reading the whole paragraph. Meaning The simplified sentence adequately expresses the meaning of the original sentence, perhaps omitting the least Preservation important information. Overall Simplicity The simplified sentence is easier to understand than the original sentence. Structural Simplicity The structure of the simplified sentence is easier to understand than the structure of the original sentence. Lexical Simplicity The words of the simplified sentence are easier to understand than the words of the original sentence. | | ![18_image_3.png](18_image_3.png) and ii) testing with different threshold values, i.e, 0.7, 0.75, 0.8, 0.9. We got the best precision score (*precision=0.96*) when we used LaBSE at a threshold value of 0.9. Vecalign (Thompson and Koehn, 2020) is a bilingual sentence alignment method that was designed to align sentences in documents of different languages, however, it was also tested in other works on monolingual parallel corpora (e.g., Spring et al. (2022)). It has two main advantages, it can produce n:m alignments, and it can work with more than 200 languages (as it uses the LASER32 (Artetxe and Schwenk, 2019) sentence representation model in the background; which is multilingual). We used this model only for sentence alignment and not document alignment. BertAlign (Liu and Zhu, 2022) is a attempt to allow sentence-transformer-based methods to produce n:m alignments. It was tested on ChineseEnglish parallel corpora and showed promising results. Our adaptation of this method was only by using a dedicated German sentence transformer model in the algorithm procedure. Following its outperforming results in the sentence transformers experiment, we used the LaBSE sentence transformer model in this experiment. MASSAlign (Paetzold et al., 2017) is a Python package which includes an easy-to-use alignment method on the paragraph- and sentence-level by Paetzold and Specia (2016). The method uses a vicinity-driven approach with a similarity matrix based on a TF-IDF model. It is capable of 1:1, 1:m, and n:1 alignments. Our adaptation to this model are: i) updating from Python 2 to Python 3 ii) making it more language-independent by flexibly adding a stop word list in the required language. We don't use the updated version of MASSAlign with Doc2Vec by Paun (2021) as we only align on the sentence level. In the first experiment, we found out that paragraph alignment is also required for this algorithm. CATS (Štajner et al., 2018) is an alignment method that can also align paragraphs and sentences. CATS align each original sentence with the closest simple sentence by calculating the similarity of all of them based on n-grams (option: C3G) or word vectors (option: CWASA and WAVG). In our experiments CATS only aligned pairs of type 1:1. Officially the code was published in Java33, for better integrity with the other alignment methods, we used the existing Python version of it34. We did experiments with all three options, for the word vectors we used the German embeddings of fasttext35 (Athiwaratkun et al., 2018). We just report the best result which was achieved with C3G. ## I Train, Dev, Test Split For Simplification Table 14 provides the size of train, development and test set of DEPLAIN. 
| document-level | sentence-level | | | | | | |------------------|------------------|---------|-----|------|---------|--------------------| | WEB | APA | APA+WEB | WEB | APA | APA+WEB | | | train | 481 | 387 | 868 | 1281 | 10660 | 11941 (10660+1281) | | dev | 122 | 48 | 170 | 313 | 1231 | 1544 (1231+313) | | test | 147 | 48 | - | 1846 | 1231 | - | Table 14: Overview of train/dev/test split. ## J Further Results For Simplification. We present results of our models trained on DE-PLAIN on existing test sets for German text simplification. In subsection J.1, results are shown regarding document simplification and, in subsection J.2, regarding sentence simplification. ## J.1 Results On Document Simplification. Table 15 shows results of our document-level TS experiments trained on different parts of DEplain using long-mBART with vocabulary reduced to 35k tokens. APA correspond to DEPLAIN-APA and web to DEPLAIN-WEB. For a better comparison, we also add the results of a baseline model (last part) and a comparable model reported in Rios et al. (2021) (first part, numbers are copied from them). train data **n SARI BLEU BS-P FRE** 20min 18305 **33.29 6.29** DEplain-APA 387 22.805 1.706 0.03 63.9 DEplain-web 481 27.113 1.81 0.007 63.5 DEplain-APA+web 868 24.265 1.804 0.029 64 src2src 1.953 2.051 0.029 54.45 Table 15: Results on Document Simplification Testing on 20min with long-mBART ## J.2 Results On Sentence Simplification In this section, we present results on existing test sets, i.e., *ZEST* (Mallinson et al., 2020) (see Table 16), *APA-LHA C2-A2* (Spring et al., 2021) (see Table 17), *APA-LHA C2-B1* (Spring et al., 2021) (see Table 18), and *TCDE19* (Naderi et al., 2019) (see Table 19). SARI BLEU BS-P FRE ZEST 39.09 56.68 - - U-NMT 35.22 52.02 - - U-SIMP 40.0 61.1 - - mBART-APA **45.81 56.802** 0.769 67.282 mBART-APA+web 44.913 54.718 0.778 66.588 src2src 26.812 67.116 0.856 61.5 Table 16: Results on the test set of *ZEST*. SARI BLEU BS-P FRE Sockeye **42.04 15.2** - - mBART-APA 27.987 5.294 0.232 57.865 mBART-APA+web 28.468 5.464 0.236 56.969 src2src 4.092 3.635 0.184 44.9 Table 17: Results on the test set of *APA-LHA C2-A2*. SARI BLEU BS-P FRE Sockeye **40.73 12.3** - - mBART-APA 29.086 6.495 0.272 57.299 mBART-APA+web 28.527 6.604 0.273 56.848 src2src 5.325 6.18 0.236 44.9 Table 18: Results on the test set of *APA-LHA C2-B1*. In each of the tables, the first part includes the results of other models trained on other data than DEPLAIN, the middle part includes the results of our models trained on DEPLAIN, and the last part is the result of the baseline model. The scores of the other models are extracted from the corresponding papers, we do not calculate them ourselves as the model checkpoints or model predictions are not available. Hence, scores of some metrics are missing if they were not reported, e.g., FRE or BERTScore. Furthermore, different implementations of the metrics might be used, therefore the scores should be interpreted with caution. SARI BLEU BS-P FRE ZEST **41.12 21.11** - - U-NMT 35.97 11.72 - - U-SIMP 37.4 15.03 - - mBART-APA 38.964 16.85 0.539 44.85 mBART-APA+web 36.937 16.321 0.542 43.65 src2src 14.999 27.348 0.546 28.1 ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? Not numbered, after the conclusion called "limitations" ✓ A2. Did you discuss any potential risks of your work? not numbered, Ethics & Impact Statement ✓ A3. Do the abstract and introduction summarize the paper's main claims? 
Abstract and 1 introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Creation: 3, 4, 5, Usage: 6. ✓ B1. Did you cite the creators of artifacts you used? 6: test sets and appendix: overview ✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not numbered, Ethics & Impact Statement ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? section 6: test sets for simplification. Not numbered: Ethics & Impact Statement ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Details on DEplain-web ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 3, 4, 5 and appendix: Details on DEplain-web ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Table 1, plus Tables in appendix. Also in 5. ## C ✓ **Did You Run Computational Experiments?** 5, 6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Ethics & Impact Statement, appendix The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 6, and Description of Adaptations of Alignment Methods ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 4, 5, 6. The citations of the alignment methods can be found in the appendix (with references to the implementations). ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** 4 ✗ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? If we would publish them, we would hurt our anonymity. We will provide the annotation schema upon acceptance. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? 
4 ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? 4 (part of annotation schema) ✗ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No ethic board required. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? 4
li-etal-2023-neural
A Neural Divide-and-Conquer Reasoning Framework for Image Retrieval from Linguistically Complex Text
https://aclanthology.org/2023.acl-long.909
Pretrained Vision-Language Models (VLMs) have achieved remarkable performance in image retrieval from text. However, their performance drops drastically when confronted with linguistically complex texts that they struggle to comprehend. Inspired by the Divide-and-Conquer algorithm and dual-process theory, in this paper, we regard linguistically complex texts as compound proposition texts composed of multiple simple proposition sentences and propose an end-to-end Neural Divide-and-Conquer Reasoning framework, dubbed NDCR. It contains three main components: 1) Divide: a proposition generator divides the compound proposition text into simple proposition sentences and produces their corresponding representations, 2) Conquer: a pretrained VLMs-based visual-linguistic interactor achieves the interaction between decomposed proposition sentences and images, 3) Combine: a neural-symbolic reasoner combines the above reasoning states to obtain the final solution via a neural logic reasoning approach. According to the dual-process theory, the visual-linguistic interactor and neural-symbolic reasoner could be regarded as analogical reasoning System 1 and logical reasoning System 2. We conduct extensive experiments on a challenging image retrieval from contextual descriptions data set. Experimental results and analyses indicate NDCR significantly improves performance in the complex image-text reasoning problem.
# A Neural Divide-and-Conquer Reasoning Framework for Image Retrieval from Linguistically Complex Text

Yunxin Li1, Baotian Hu1∗, Yuxin Ding1, Lin Ma2, Min Zhang1

1Harbin Institute of Technology, Shenzhen, China, 2Meituan, Beijing

{hubaotian, yxding, zhangmin2021}@hit.edu.cn, liyunxin987@163.com, forest.linma@gmail.com

∗ Corresponding author.

## Abstract

Pretrained Vision-Language Models (VLMs) have achieved remarkable performance in image retrieval from text. However, their performance drops drastically when confronted with linguistically complex texts that they struggle to comprehend. Inspired by the Divide-and-Conquer (Smith, 1985) algorithm and dual-process theory (Groves and Thompson, 1970), in this paper, we regard linguistically complex texts as compound proposition texts composed of multiple simple proposition sentences and propose an end-to-end Neural Divide-and-Conquer Reasoning framework, dubbed NDCR. It contains three main components: 1) *Divide*: a proposition generator divides the compound proposition text into simple proposition sentences and produces their corresponding representations, 2) *Conquer*: a pretrained VLMs-based visual-linguistic interactor achieves the interaction between decomposed proposition sentences and images, 3) *Combine*: a neural-symbolic reasoner combines the above reasoning states to obtain the final solution via a neural logic reasoning approach. According to the dual-process theory, the visual-linguistic interactor and neural-symbolic reasoner could be regarded as analogical reasoning System 1 and logical reasoning System 2. We conduct extensive experiments on a challenging image retrieval from contextual descriptions data set. Experimental results and analyses indicate NDCR significantly improves performance in the complex image-text reasoning problem. Code link: https://github.com/YunxinLi/NDCR.

## 1 Introduction

Image-text retrieval tasks have made remarkable progress owing to pretrained Vision-Language Models (VLMs) such as LXMERT (Tan and Bansal, 2019), UNITER (Chen et al., 2020), OSCAR (Li et al., 2020b; Zhang et al., 2021), ViLBERT (Lu et al., 2019), CLIP (Radford et al., 2021), and many others. These VLMs are usually trained on large-scale short text-image corpora with cross-modal semantic alignment methods. They are capable of essential perceptual computing and excel at retrieving images from sentences with few objects and simple linguistic structure, e.g., *"There is a duck swimming in the pond"*. However, when pretrained VLMs meet the case of retrieving the accurate image from similar candidates based on a linguistically complex text, as the example shown in Figure 1, previous works (Krojer et al., 2022; Talmor et al., 2021a; Thrush et al., 2022) show that they struggle to understand the elaborate description and perform complex cross-modal reasoning.

According to the dual-process theory of human thinking (Groves and Thompson, 1970; Evans, 2003; Pelaccia et al., 2011), human brains contain two thinking systems: System 1 performs analogical reasoning well, which is fast yet unconscious; System 2 is capable of abstract logical reasoning, which is slow yet conscious and well-suited for complex reasoning problems. The theory could also hold for image-text retrieval tasks, and the widely adopted models (e.g., VLMs) focus on analogical reasoning as System 1, based on the analysis of deep learning networks (Bengio, 2017, 2019; Bengio et al., 2021).
For the linguistically complex description that contains multiple conditions, they show inferior performance, and we need to introduce logical reasoning System 2 to cover and logically incorporate the scattered information in the description on top of System 1. Inspired by the above investigations and the classical Divide-and-Conquer (Smith, 1985) algorithm, we design an end-to-end Neural Divide-and-Conquer Reasoning framework named NDCR. As shown in Figure 1, our key idea is to regard the complex description as a compound proposition text and solve the challenging retrieval problem in three steps: divide, conquer, and combine. Specifically, **Divide:** NDCR first utilizes a proposition generator to divide the complex compound text and produce the global representations of simple proposition sentences while visually printing them. **Conquer:** we devise a visual-linguistic interactor to achieve the interaction between decomposed proposition sentences and images, which resembles System 1. It uses a Transformer (Vaswani et al., 2017)-based contextual interactor to achieve the inter-learning of different proposition-image pairs. Considering the incorrectness or information loss of simple proposition representations, we also present a modifier to incorporate the context reasoning information to improve their cross-modal reasoning states. **Combine:** we design a learnable neural-symbolic reasoner to logically integrate the reasoning information of simple propositions. It first employs a negation executor to obtain a simple proposition sentence's negational reasoning hidden state and corresponding confidence score. Then, we use the global reasoning information of the compound proposition text as the query signal to perform the conjunction operation across simple propositions and their negational information. Finally, as shown in Figure 1, we also combine the inferred results of the neural-symbolic reasoner (resembling System 2) and the visual-linguistic interactor (resembling System 1) to obtain the final solution. In this way, the whole framework integrates the capabilities of Systems 1 and 2 to obtain better performance.

We conduct extensive experiments on a large-scale image retrieval from contextual descriptions data set, IMAGECODE (Krojer et al., 2022). The experimental results indicate that NDCR achieves state-of-the-art performance, and the ablation and case studies verify the effectiveness of different modules. Our contributions are as follows:

- We propose a divide-and-conquer reasoning framework for image retrieval from linguistically complex text, where we first attempt to combine the perceptually analogical reasoning System 1 and the neural-symbolic logical reasoning System 2 to solve the complex multimodal reasoning problem.

- We design a proposition generator capable of producing the global representations of decomposed simple proposition sentences for linguistically complex texts and visually printing them as text.

- Experimental results indicate our approach remarkably improves the performance, and we obtain first place on the leaderboard (https://mcgill-nlp.github.io/imagecode). Ablation and case studies confirm the effectiveness of introducing and combining logical reasoning System 2 based on System 1.
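To make the three-step pipeline above more concrete, the following schematic sketch illustrates how divide, conquer, and combine could fit together at inference time: proposition representations are derived from the compound text, System 1 scores each proposition against every candidate image, and System 2 adjusts those scores with negational information before the final combination. Every module here is a simplified stand-in (random projections and a toy aggregation), not the released NDCR implementation; all names and sizes are placeholders we introduce for illustration.

```python
# Schematic sketch of the divide / conquer / combine flow; all modules are toy stand-ins.
import torch
import torch.nn as nn


class ToyNDCR(nn.Module):
    def __init__(self, dim=64, max_props=10):
        super().__init__()
        self.prop_queries = nn.Parameter(torch.randn(max_props, dim))                 # "Divide"
        self.interactor = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)  # System 1
        self.score_head = nn.Linear(dim, 1)
        self.negation = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))  # System 2
        self.combine = nn.Linear(2, 1)

    def forward(self, text_vec, image_vecs):
        # Divide: derive M proposition states from the compound-text vector
        # (stand-in: additive conditioning instead of the paper's BART-based generator).
        props = self.prop_queries + text_vec                      # (M, dim)
        # Conquer: fuse each proposition with each image and let the pairs interact.
        fused = props.unsqueeze(1) * image_vecs.unsqueeze(0)      # (M, L, dim)
        states = self.interactor(fused)                           # System-1 reasoning states
        s1_scores = self.score_head(states).squeeze(-1)           # (M, L) proposition-image scores
        # Combine: System 2 also considers negational states, then both systems are merged.
        neg_scores = self.score_head(self.negation(states)).squeeze(-1)
        s2_scores = (s1_scores - neg_scores).mean(dim=0)          # (L,) toy logical aggregation
        final = self.combine(torch.stack([s1_scores.mean(0), s2_scores], dim=-1)).squeeze(-1)
        return final.softmax(dim=-1)                              # distribution over L candidate images


if __name__ == "__main__":
    model = ToyNDCR()
    text_vec = torch.randn(64)        # compound-description embedding (placeholder)
    image_vecs = torch.randn(10, 64)  # 10 candidate-image embeddings (placeholder)
    print(model(text_vec, image_vecs))
```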
## 2 Related Works

Pretrained Vision-Language Models for Cross-Modal Matching. Owing to the success of the Transformer (Vaswani et al., 2017) architecture equipped with the pretrain-finetuning (Erhan et al., 2010) learning method, pretrained VLMs have achieved remarkable performance in cross-modal matching or reasoning tasks (Talmor et al., 2021b), especially image-text retrieval. Early pretrained VLMs utilize a BERT (Devlin et al., 2019)-like single-encoder architecture to encode and fuse the image-text information and then perform image-text reasoning, such as ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019), and Oscar (Li et al., 2020b). In addition, dual-encoder architectures such as CLIP (Radford et al., 2021) and ALBERT (Li et al., 2021) perform better than single-encoder architectures on image-text matching tasks and are widely used in industry because of their efficiency.

Divide-and-Conquer for Question Answering. The divide-and-conquer algorithm (Smith, 1985) aims to divide a complex problem into multiple simple problems and then combine the subproblem results to achieve the final solution. This idea has been used in complex question-answering tasks in the natural language processing area. Zhang et al. (2019) proposed to utilize the decomposition of complex questions for semantic parsing. Min et al. (2019) adopt the question decomposition and rescoring method to perform multi-hop reading comprehension, which makes the reasoning path interpretable and robust. Wolfson et al. (2022) utilized the QDMR structures of complex questions to conduct the decompose-synthesize text-to-SQL transformation. Previous pipeline approaches may lead to error cascades in the upper inference process due to the incompleteness or errors of the decomposed text. The image-text retrieval task has strict requirements on the correctness of text semantic understanding; thus, we propose an end-to-end divide-and-conquer method that alleviates the error cascade issue via the whole learning process.

Dual-Process Theory. The dual-process theory shows that human brains have two different thinking systems. System 1 performs analogical reasoning, and System 2 performs conscious logical reasoning. Combining this theory with practical tasks, some researchers designed various approaches. Mittal et al. (2017) believed that combining vector space models with external knowledge graphs could be regarded as thinking 'fast' in vector space along with thinking 'slow' and 'deeply' by reasoning over the knowledge graph. Anthony et al. (2017) also proposed to use a deep learning network with a tree search engine as System 1 and System 2, respectively, for sequential decision-making problems. Bengio (2017, 2019) advocated the design of a conscious network to achieve the leap from System 1 to System 2. Liu et al. (2022) designed a neural-symbolic system for natural language understanding tasks, which combines an explicit symbolic-calculation-based System 2 and a fast deep-learning-network-based System 1. For complex multi-modal reasoning problems, e.g., image retrieval from linguistically complex text, humans usually combine System 1 and System 2 to obtain the final solution. However, current methods that rely mainly on deep learning networks resemble System 1 and lack logical reasoning capability, and thus struggle with image-text reasoning over complex descriptions. In this light, we make the first attempt to combine System 1 and System 2 to tackle this issue by designing a neural divide-and-conquer reasoning framework. We introduce a neural-symbolic reasoner in System 2 to conduct the logical operations. The overall framework contains analogical and logical reasoning, as humans think, making appreciable gains.
## 3 Method 3.1 Overview Image retrieval from contextual descriptions (Krojer et al., 2022) aims to infer the correct image given a linguistically complex text Y = (y1*, ..., y*N ) and similar images I = (I1*, ..., I*L), where yi, N, Ii, and L represent the i th token, the total length of text, i th image, and the number of images, respectively. We propose a novel divideand-conquer reasoning framework to tackle such a task. It consists of three components, namely, Proposition Generator, Visual-Linguistic Interactor, and Neural-Symbolic Reasoner, which are coupled and trained in an end-to-end manner. Specifically, the proposition generator divides the complex description into multiple proposition sentences, allowing it to convert the complex matching problem to simple ones. Afterwards, the visual-linguistic interactor achieves the interaction between decomposed proposition sentences and images, resembling System 1, to perform the essential analogical reasoning. Subsequently, the neural-symbolic reasoner that relies on the reasoning state output by the visual-linguistic interactor resembles System 2 to perform logical reasoning. Finally, we also combine the output results of System 1 and System 2 to obtain the final solution. ## 3.2 Proposition Generator The proposition generator is a sequence-tosequence model based on the pretrained language model BART. As shown in Figure 2, it employs the encoder to obtain the text representation HY = (hcls, hy1 , ..., hyN ) where hyi represents the i th token hidden state. Subsequently, we design a two-layer semantic parsing module to gain the global representation of simple proposition sentences. Concretely, we set the maximum number of simple propositions to 10 and randomly initialize them. The initial vectors are fed to the semantic parsing module to interact with the compound text representation. Take the first layer as an example; the calculation process is following, $$\begin{array}{c}{{\mathbf{h}_{s}^{T}=\mathrm{Self-Attention}(\mathbf{h}^{I}),}}\\ {{\mathbf{h}_{c}^{T}=\mathrm{Cross-Attention}(\mathbf{h}_{s}^{T},\mathbf{H}_{Y}),}}\\ {{\mathbf{h}_{F}^{T}=\mathrm{FNN}(\mathbf{h}_{c}^{T}-\mathbf{h}_{s}^{T}),}}\end{array}\quad\quad(1)$$ where h Iis the randomly initial proposition representations. Attention and FNN calculation subnetworks are identical to the transformer (Vaswani et al., 2017) architecture. Different from the transformer, we let the output of Cross-Attention layer subtract the output of Self-Attention layer, aiming to achieve information differences across propositions. By doing the same two-layer calculation, we obtain ten global hidden states of simple propositions. Due to context containing different numbers of simple proposition, we use a MLP to predict the target number of simple proposition sentences. It only attends to the global hidden state hcls of compound proposition text. Suppose that the predicted number M of simple propositions is 3 (same as Figure 2), we adopt the first-three hidden states of the semantic parsing module as the global representation of the targeted simple proposition. As shown in Figure 2, for explaining what simple propositions represent, we also use the decoder of BART to generate the simple proposition sentence with only attending to their global representations. ## 3.3 System 1: Visual-Linguistic Interactor After obtaining the global representations of simple proposition sentences, we introduce the visuallinguistic interactor to mine the interaction of image-proposition pairs. 
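Before detailing the interactor, a minimal PyTorch-style sketch of one semantic parsing layer from Eq. 1 may make the computation concrete. The class name, head count, feed-forward width, and hidden size (set here to the BART-base dimension) are illustrative assumptions, and residual connections and layer normalization from the standard transformer block are omitted for brevity; this is a sketch of the described computation, not the released implementation.

```python
import torch
import torch.nn as nn

class SemanticParsingLayer(nn.Module):
    """One layer of the proposition generator's semantic parsing module (Eq. 1).

    Proposition slot vectors first attend to each other (self-attention), then
    attend to the compound-text encoding H_Y (cross-attention); the FFN is
    applied to the *difference* of the two outputs, which the text describes
    as encouraging information differences across propositions.
    """
    def __init__(self, d_model=768, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, h_props, h_text):
        # h_props: (batch, 10, d) randomly initialized proposition slots
        # h_text:  (batch, N+1, d) BART encoder states of the compound text
        h_s, _ = self.self_attn(h_props, h_props, h_props)   # h_s^T
        h_c, _ = self.cross_attn(h_s, h_text, h_text)        # h_c^T
        return self.ffn(h_c - h_s)                           # h_F^T

# Two stacked layers yield the ten global proposition states; a separate MLP
# over the [CLS] state of the compound text predicts how many of them to keep.
layer1, layer2 = SemanticParsingLayer(), SemanticParsingLayer()
slots = torch.randn(1, 10, 768)          # learnable in practice
text_states = torch.randn(1, 32, 768)    # from the BART encoder
prop_states = layer2(layer1(slots, text_states), text_states)
```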
Specifically, we use a pretrained visual encoder to obtain the image encoding representations $\mathbf{H}_I = (\mathbf{h}_{I_1}, \ldots, \mathbf{h}_{I_L})$ and fuse them with the simple proposition representations via a dot product (the "F" shown in Figure 2). The two-modal fusion process is $\mathbf{H}(p) = \lambda \cdot \mathrm{Norm}(\mathbf{P}) \cdot \mathrm{Norm}(\mathbf{H}_I)$, where $\lambda$ is a hyperparameter used to enlarge the scale of the fused vectors. We denote the fused sequence representations of the proposition-image pairs by $\mathbf{H}(p) = (\mathbf{H}(p_1), \ldots, \mathbf{H}(p_M))$, where $\mathbf{H}(p_1)$ is the sequential representation of the first proposition combined with the images. Then, we employ a two-layer transformer to perform contextual information interaction over the fused sequential representations $\mathbf{H}(p)$ and obtain the initial reasoning states of the simple propositions on the images. Considering the possible incorrectness or information loss of the simple proposition representations produced by the proposition generator, we introduce an MLP-based modifier that incorporates the reasoning state of the compound proposition text to enhance the initial reasoning states of the simple propositions. The whole process is given in Eq. 2,

$$\begin{array}{c}{{\mathbf{H}_{P}^{S_{1}}=\mathrm{Transformer}(\mathbf{H}(p)+P E),}}\\ {{\mathbf{H}_{C}^{s g}=\mathrm{Transformer}(\mathbf{H}_{C}+P E),}}\\ {{\mathbf{H}^{S_{1}}=\mathbf{W}^{M_{1}}\mathrm{ReLU}(\mathbf{W}^{M_{2}}[\mathbf{H}_{P}^{S_{1}},\mathbf{H}_{C}^{s g}]),}}\end{array}\tag{2}$$

where $\mathbf{H}_C$ denotes the fused information of the compound proposition text and the images, obtained by the cross-modal encoder (abbreviated as cross encoder in Figure 2). $\mathbf{W}^{M_1} \in \mathbb{R}^{2d \times d}$ and $\mathbf{W}^{M_2} \in \mathbb{R}^{2d \times 2d}$ are learnable parameters. Before feeding $\mathbf{H}(p)$ into the transformer, we add learnable position embeddings $PE$ to help it attend to the contextual information across images. After obtaining the final reasoning states $\mathbf{H}^{S_1} = (\mathbf{h}_1^{+}, \ldots, \mathbf{h}_M^{+})$ of the simple propositions in System 1, we adopt a linear prediction head to produce the confidence score of each proposition over the images, defined as $P^{S_1} = (p_1^{+}, \ldots, p_M^{+})$ with $p_M^{+} \in \mathbb{R}^{1 \times L}$.

![4_image_0.png](4_image_0.png)

## 3.4 System 2: Neural-Symbolic Reasoner

For complex reasoning problems, the logical reasoning process usually plays a more significant role for both intelligent machines and human reasoning (Bengio, 2019), which the visual-linguistic interactor is not capable of. Instead of combining the inference results of System 1 via rule-based methods such as mean pooling, and inspired by Shi et al. (2020) and Chen et al. (2021), we devise a learnable Neural-Symbolic Reasoner (NSR) to perform logical reasoning on top of System 1, as shown in Figure 2. As depicted in Figure 3, it contains a negation executor that obtains the negational reasoning states and a conjunction operation that acquires the result of logical reasoning by attending to the positive and negational reasoning information.

Negation Executor. The negation executor is a module that takes the reasoning state of a simple proposition as input and produces the corresponding reasoning state of its negation as output. Its aim is to obtain useful cross-modal reasoning states for the negation of a proposition. We regard $\mathbf{H}^{S_1}$ as the positive reasoning state and use a two-layer MLP with the ReLU activation function to obtain the negational reasoning state. The calculation is given in Eq. 3,

$${\bf NEG}({\bf H}^{S_{1}})=W_{2}^{n}{\bf ReLU}(W_{1}^{n}{\bf H}^{S_{1}}+b_{1}^{n})+b_{2}^{n},\tag{3}$$

where $W_2^n, W_1^n \in \mathbb{R}^{d \times d}$ and $b_1^n, b_2^n \in \mathbb{R}^{1 \times d}$ are learnable parameters.
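A minimal sketch of this two-layer negation executor is given below. The module name, the hidden size d = 512, and the assumption that each proposition's reasoning state is a per-image sequence of d-dimensional vectors are illustrative choices rather than details taken from the released code.

```python
import torch
import torch.nn as nn

class NegationExecutor(nn.Module):
    """Maps the positive reasoning state H^{S_1} of a proposition to the
    reasoning state of its negation (Eq. 3): a two-layer MLP with ReLU."""
    def __init__(self, d=512):
        super().__init__()
        self.w1 = nn.Linear(d, d)   # W_1^n, b_1^n
        self.w2 = nn.Linear(d, d)   # W_2^n, b_2^n

    def forward(self, h_pos):
        # h_pos: (batch, M, L, d) -- M propositions, L candidate images (assumed layout)
        return self.w2(torch.relu(self.w1(h_pos)))   # NEG(H^{S_1}), same shape

neg = NegationExecutor()
h_pos = torch.randn(2, 3, 10, 512)   # e.g., 3 propositions over 10 candidate images
h_neg = neg(h_pos)                   # scored by the same linear prediction head as System 1
```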
We define the output of the negation executor as $\mathbf{H}^{N} = (\mathbf{h}_1^{-}, \ldots, \mathbf{h}_M^{-})$, in contrast to $\mathbf{H}^{S_1}$. The negational proposition thus has a cross-modal reasoning state $\mathbf{H}^{N}$ different from that of the corresponding positive proposition $\mathbf{H}^{S_1}$. We use the same linear prediction head as in System 1 to produce the corresponding confidence scores over the images, denoted $P^{N} = (p_1^{-}, \ldots, p_M^{-})$. To make the negation executor effective, we later define a negational feedback loss to locally optimize it.

Conjunction Operation. First, we define a new joint representation that incorporates the reasoning hidden states and the corresponding confidence scores as the initial state of the conjunction operation. The process is presented in Eq. 4,

$$\begin{array}{c}{{\mathbf{P}_{i}^{+}=\mathrm{Softmax}(p_{i}^{+})\cdot\mathbf{H}_{I},\;i=1,...,M,}}\\ {{\mathbf{H}_{p_{i}^{+}}^{n s}=[\mathbf{P}_{i}^{+},\mathbf{h}_{i}^{+}],\;i=1,...,M,}}\end{array}\tag{4}$$

where $[\cdot,\cdot]$ denotes concatenation and $\mathbf{H}_I$ is the representation of the images. $\mathbf{H}_{p_i^{+}}^{ns}$ represents the positive joint representation of the $i$-th proposition. We use the same calculation as Eq. 4 to obtain the initialized negational representation $\mathbf{H}_{p_i^{-}}^{ns}$. Then, we utilize the reasoning state of the compound proposition text $\mathbf{H}_C^{sg}$ (Eq. 2) as the signal to drive the conjunction calculation via multi-head attention equipped with gate fusion, as shown in Figure 3. The whole calculation is presented in Eq. 5,

$$\begin{array}{l}{{{\bf H}^{+}=\mathrm{MultiHead}(W^{s}{\bf H}_{C}^{s g},{\bf H}_{p_{i}^{+}}^{n s}),}}\\ {{{\bf H}^{-}=\mathrm{MultiHead}(W^{s}{\bf H}_{C}^{s g},{\bf H}_{p_{i}^{-}}^{n s}),}}\\ {{g^{+}=W^{g}[{\bf H}^{+},W^{s}{\bf H}_{C}^{s g}]+b^{g},}}\\ {{g^{-}=W^{g}[{\bf H}^{-},W^{s}{\bf H}_{C}^{s g}]+b^{g},}}\\ {{{\bf H}^{f}=W^{S_{2}}(g^{+}{\bf H}^{+}+g^{-}{\bf H}^{-}),}}\end{array}\tag{5}$$

where $W^{s} \in \mathbb{R}^{2d\times 2d}$, $W^{g} \in \mathbb{R}^{1\times 4d}$, and $W^{S_{2}} \in \mathbb{R}^{2d\times d}$ are learnable parameters and $\mathbf{H}^{f} \in \mathbb{R}^{1\times L\times d}$. We utilize another linear prediction head to obtain the final confidence score of the neural-symbolic reasoner, defined as $P^{S_2} \in \mathbb{R}^{1\times L}$.

## 3.5 Combining System 1 And System 2

In addition, we combine the inferred confidence scores of System 1 and System 2 to obtain the final solution, exploiting the complementarity of the two systems. First, we acquire the whole representations of $\mathbf{H}^{S_1}$ and $\mathbf{H}^{f}$ as follows:

$$\begin{array}{c}{{{\bf H}_{W}^{f}=(W^{l}{\bf H}^{f}+b^{l})^{T}{\bf H}^{f},}}\\ {{{\bf H}_{W}^{S_{1}}=(W^{l}{\bf H}^{S_{1}}+b^{l})^{T}{\bf H}^{S_{1}},}}\end{array}\tag{6}$$

where $W^{l} \in \mathbb{R}^{d\times 1}$ and $b^{l}$ are learnable parameters. $\mathbf{H}_{W}^{S_1} = (\mathbf{h}_{w_1}^{+}, \ldots, \mathbf{h}_{w_M}^{+}) \in \mathbb{R}^{M\times d}$ and $\mathbf{H}_{W}^{f} \in \mathbb{R}^{1\times d}$ are used to obtain the final solution via Eq. 7,

$$\begin{array}{l}{{\mathbf{h}_{c}=\sum_{j=0}^{M}\big(W^{a}\mathbf{H}_{W}^{f}+W^{b}\mathbf{h}_{w_{j}}^{+}+b^{c}\big),}}\\ {{\hat{S}_{j}=V\big(W^{a}\mathbf{H}_{W}^{f}+W^{b}\mathbf{h}_{w_{j}}^{+}+b^{c}\big)+b^{v},}}\\ {{s i g=f\big(W^{f}[\mathbf{H}_{W}^{f},\mathbf{h}_{c}]+b^{f}\big),}}\\ {{P^{f}=s i g\cdot\Big(\sum_{j=0}^{M}\hat{S}_{j}\,p_{j}^{+}\Big)+(1-s i g)\cdot P^{S_{2}},}}\end{array}\tag{7}$$

where $W^{a}, W^{b} \in \mathbb{R}^{d\times d}$, $b^{c} \in \mathbb{R}^{d}$, $V \in \mathbb{R}^{d\times 1}$, $W^{f} \in \mathbb{R}^{2d\times d}$, and $b^{v}, b^{f} \in \mathbb{R}^{1}$ are learnable parameters, and $f(\cdot)$ denotes the sigmoid activation function. In this way, we obtain the final result by taking the image with the maximum confidence score in $P^{f} \in \mathbb{R}^{1\times L}$.

## 3.6 Training Strategies

To make the proposition generator perform proposition decomposition and generation effectively, we first train it alone on a large-scale corpus and then train the whole NDCR framework on the task-specific training data. The two training phases are as follows:

Phase 1.
We first pretrain the proposition generator on the released large-scale complex text simplification data set MinWikiSplit (Niklaus et al., 2019), which is composed of 203K pairs of aligned complex source and simplified target sentences. We adopt the cross entropy generation loss Lg for the decoder output. Similar to SimCSE (Gao et al., 2021), we employ the contrastive learning loss Lc to make the global representation of simple proposition sentence different. In addition, we use a crossentropy multi-label classification loss Lp to train the prediction head of numbers of propositions, where the label is the number of simple sentences in the pretraining corpus. The whole training loss: $${\mathcal{L}}_{p h r a s e1}={\mathcal{L}}_{g}+{\mathcal{L}}_{c}+{\mathcal{L}}_{p}.$$ Phase 2. While training NDCR, we employ the proposition sentence-image confidence score to calculate the classification loss. The loss will cover the output of System 1, System 2 and final solution, which is defined as follows: $${\mathcal{L}}_{m a t c h}=\sum_{i=0}^{M+2}\mathrm{cross-entropy}(p_{i},q),\quad\quad(9)$$ where pi ∈ R 1×L and q is the golden label. To make the negation executor effective, we devise a negational feedback loss Lneg to optimize it. We take the prediction result of modifier in System 1 as the positive distribution and make the belief distribution output by the negation executor on the image candidates be far away from positive distribution. The loss calculation method is shown in Eq. 10, $${\mathcal{L}}_{n e g}=\sum_{z=0}^{M}\operatorname*{max}(\theta-\operatorname{KL}(p_{z}^{-},p_{z}^{+}),0.0),\quad(10)$$ where KL indicates the K-L Divergence (Kullback and Leibler, 1951). θ is a super-parameter used to expand the positive and negational interval, which is set to 0.2. Hence, the whole optimization target is Lmatch + Lneg. ## 4 Experiments 4.1 Dataset We conduct extensive experiments on a challenging data set IMAGECODE (Krojer et al., 2022), which contains 94,020 images, and they are divided into 9,402 sets. The overall images are collected from four released data sets: MSR-VTT (Xu et al., 2016), Video-Storytelling (Li et al., 2020a), YouCook (Das et al., 2013), and Open Images V6 (Kuznetsova et al., 2020). It consists of 21,202 human-writing complex descriptions and manually labelling corresponding golden images, which are divided into 16,594, 2,302, and 2,306 for training, validating, and testing, respectively. The image sources in the overall data set include video frames and static images. ## 4.2 Baselines We compare NDCR with various types of pretrained VLMs and other designed models based on the specific condition of this task. Specifically, ViLBERT (Lu et al., 2019) is a cross encoder where language and vision interact in the transitional layer via cross attention calculation. CLIP (Radford et al., 2021) is a two-stream vision-language encoder with two independent visual and textual encoders. UNITER (Chen et al., 2020) is a singlestream encoder where visual representations and 16469 text tokens are concatenated and interact via the same transformer. OFA (Wang et al., 2022) is a unified cross-modal and unimodal encoder and has achieved impressive performance on multiple cross modal reasoning tasks. Krojer et al. (2022) also designed a contextual module to improve the interaction across different text-image fusion representations, achieving state-of-the-art performance. ## 4.3 Implementation Details The L, λ, and d equal 10, 1000, and 512, respectively. 
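As a concrete point of reference for the Phase 2 objective of Section 3.6, the sketch below shows one way the matching loss (Eq. 9) and the negational feedback loss (Eq. 10) could be combined. The tensor shapes, the detaching of the positive distribution, and the exact handling of the KL term are assumptions made for illustration, not the released training code.

```python
import torch
import torch.nn.functional as F

def phase2_loss(scores, neg_scores, gold, theta=0.2):
    """Phase-2 objective: matching loss (Eq. 9) plus negational feedback loss (Eq. 10).

    scores:     list of M+2 logit vectors over the L images -- one per proposition
                (System 1's modifier), plus System 2 and the final combined prediction.
    neg_scores: list of M logit vectors produced by the negation executor.
    gold:       index of the gold image (0-dim long tensor).
    """
    # Eq. 9: cross-entropy over every predicted distribution against the gold image.
    l_match = sum(F.cross_entropy(p.unsqueeze(0), gold.unsqueeze(0)) for p in scores)

    # Eq. 10: push each negational distribution at least `theta` (in KL) away from
    # the corresponding positive distribution; the positive side is treated as a
    # fixed reference here (an assumption).
    l_neg = 0.0
    for p_pos, p_neg in zip(scores[:len(neg_scores)], neg_scores):
        q_pos = F.softmax(p_pos, dim=-1).detach()
        q_neg = F.softmax(p_neg, dim=-1)
        kl = (q_neg * (q_neg.clamp_min(1e-8).log() - q_pos.clamp_min(1e-8).log())).sum()
        l_neg = l_neg + torch.clamp(theta - kl, min=0.0)
    return l_match + l_neg

# Example with M = 3 propositions and L = 10 candidate images.
L, M = 10, 3
scores = [torch.randn(L, requires_grad=True) for _ in range(M + 2)]
neg_scores = [torch.randn(L, requires_grad=True) for _ in range(M)]
loss = phase2_loss(scores, neg_scores, gold=torch.tensor(4))
```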
For the proposition generator, we adopt a two-layer semantic parsing module and the pretrained parameters of BART-base version. We set the maximum number of propositions to 10 and trained the proposition generator for 15 epochs on the MinWikiSplit data set. In addition, we set the depth of transformer block to 2 in the visuallinguistic interactor and utilized the finetuned visual encoder of CLIP (ViT-B/16) to encode images. For the cross encoder, we adopt the OFA-large architecture and first finetune it for two epochs before training the overall structure of NDCR. We froze the cross encoder, proposition generator, and visual encoder to prevent overfitting while training NDCR. While training all models, we set the batch size, initial learning rate, and dropout rate to 36, 6 × 1e−5, and 0.1, respectively. The maximum training epoch is set to 30, and we employ the Adam Optimizer (Kingma and Ba, 2014) with the initial learning rate declining linearly to train all models. We use the validation set to select the best-performing model. ## 4.4 Main Results Overall Performance. We present the performance of NDCR and comparative models on the test set in Table 1. '†' indicates that the pretrained VLMs are equipped with the contextual module and temporal embedding to enhance the contextual semantic interaction across similar images. This variant shows its effectiveness on the case of video frame according to the comparative performances such as CLIP vs. CLIP†. Table 1 reports that the proposed method achieves new state-of-the-art performance on the whole test set and significantly outperforms previous strong baseline (34.1 vs. 29.9, ↑ 4.2). NDCR improves performances both on video frames and static images, especially static images(↑ 4.3), which shows its generalization on different cases. We observe that all models perform poorly on the testing samples whose images are from the | Method ↓ Type → | All | Video | Static | |--------------------------------|-------|---------|----------| | CLIP (Radford et al., 2021) | 28.4 | 20.0 | 60.0 | | CLIP† (Krojer et al., 2022) | 29.9 | 22.0 | 59.8 | | UNITER (Chen et al., 2020) | 24.8 | 17.4 | 52.8 | | UNITER† (Krojer et al., 2022) | 25.7 | 19.1 | 50.5 | | ViLBERT (Lu et al., 2019) | 20.9 | 15.0 | 42.7 | | ViLBERT† (Krojer et al., 2022) | 24.5 | 18.0 | 49.3 | | NDCR (ours) | 34.1 | 26.1 | 64.3 | Method ↓ **Type** → All Video Static OFA (Wang et al., 2022) 29.0 22.1 54.8 OFA† 30.0 23.6 54.6 CLIP (Radford et al., 2021) 27.4 19.7 56.5 CLIP†(Krojer et al., 2022) 27.6 20.8 53.2 NDCR (ours) **32.8 25.7** 59.2 System 2 32.4 25.3 **59.3** System 2 w/o Negation 32.0 25.3 57.3 System 1 31.6 24.5 58.3 System 1 w/o Modifier 19.3 16.4 30.3 video clips, which may be attributed to the high similarity across video frames. Hence, there is a big room to improve the whole performance on the challenging multi-modal reasoning task. ## 4.5 Ablation Study Effectiveness of Modules. To study the effectiveness of different modules, we re-annotate the test sample with the help of eight related workers (original test labels are not released). The experimental results are presented in Table 2. The performances of reproduced baselines and NDCR have a slight decline, which is because the labelling process for most examples is difficult. 
There are specific quality differences across human-labelling results, yet it does not affect testing and comparing model perfor- | Method ↓ Nums_of_props → | 1 | 2 | 3 | 4 | 5 | |----------------------------|-----|-----|------|-----|-----| | Total Number | 61 | 863 | 1239 | 126 | 16 | | CLIP† | 27 | 264 | 302 | 41 | 3 | | OFA† | 28 | 299 | 340 | 34 | 5 | | System 1 | 30 | 297 | 364 | 36 | 4 | | System 2 | 31 | 304 | 373 | 36 | 3 | | NDCR | 31 | 304 | 380 | 37 | 3 | | ∆ | 3 | 5 | 40 | 2 | -2 | mances. For the fairness of model comparison, the random seeds of all ablation experiments are set to the same value 10. Firstly, NDCR achieves the best performance and significantly surpasses other models on two test sets. When we add System 2 based on System 1, the overall performance improves by about 1.0, suggesting the neural-symbolic reasoner's effectiveness. Comparing System 2 and System 2 w/o negation, we observe that the negation executor improves the performance of the neural-symbolic reasoner, mainly in the case of static images. In addition, comparing System 1 and System 1 w/o modifier, we observe that introducing the context reasoning information is a very useful way to enhance the reasoning state representation of decomposed simple proposition sentences. Compared to the best baseline OFA-large (470M), the total parameter size of NDCR is about 440M. NDCR has fewer parameters yet significantly outperforms it (as shown in Table 2). This suggests that the overall performance improvement of NDCR is not due to having larger parameters. System 1 vs. System 2. We count the experimental results on the test set according to the number of simple proposition sentences into which compound proposition texts are divided. The results are shown in Table 3. The statistical results show that NDCR excels at image retrieval from complex text with medium length, especially for those containing three simple proposition sentences. It verifies the proposed method's effectiveness in handling the complex image-text reasoning problem. Compared to System 1, System 2 performs better on test samples containing 2 or 3 simple proposition sentences, which suggests that the neural-symbolic reasoner can improve the conjunction operation of predic- ![7_image_0.png](7_image_0.png) ![7_image_1.png](7_image_1.png) tion results of decomposed propositions compared to rule-based methods such as mean pooling. ## 4.6 Case Study We present two cases in Figure 4 and 5. For the first case (Figure 4), the proposition generator divides the complex text into three proposition sentences, and System 1 inferred the confidence scores (P + 1,2,3 ) of them to ten images. Although these results of simple proposition sentences contain some errors due to having no explicit supervision signal to train, System 2 (neural-symbolic reasoner) could obtain the correct result with logical reasoning operation compared to the rule-based aggregation method in System 1. It indicates the robustness of System 2. In addition, we observe that the pretrained VLMs and System 1, which are capable of perceptual computing, often fail to cover all text semantics. It is easy for them to ignore pivotal text information (such as "there is no text" shown in Figure 5), which leads to inference errors. In conclusion, combining logical reasoning System 2 and powerful analogical reasoning System 1 (e.g., pretrained VLMs) has significant potential to take their advantages to address complex reasoning problems. 
## 5 Conclusion In this paper, inspired by the divide-and-conquer algorithm and dual-process theory, we introduced an end-to-end neural divide-and-conquer reasoning framework named NDCR to handle the challenging case of image retrievals from linguistically complex text. NDCR contains a proposition generator to divide the compound proposition text into multiple simple proposition sentences, then uses a visuallinguistic interactor to achieve the interaction of simple propositions and images. To improve the logical reasoning capability, we devise a neuralsymbolic reasoner to gain the logical inferring result based on the output of the visual-linguistic interactor. This way, NDCR performs the lowlevel analogically perceptual computing in System 1 (visual-linguistic interactor) and high-level logical reasoning in System 2 (neural-symbolic reasoner). Finally, we combine the output result in Systems 1 and 2 to obtain the final solution. ## Limitations The proposed method NDCR has some limitations as follows: 1) The produced representation of simple proposition sentences in the proposition generator lies in a different space distribution with the image encoding, which affects the performance of their fused representation. Although we introduce the reasoning information of compound proposition text to alleviate this issue, we hope to solve it by improving the text understanding capability of pretrained VLMs. In addition, adopting the pretrained textual encoder of VLMs to perform proposition decomposition is inadequate due to that they present an inferior understanding for the discourse structure of long texts. 2) The performance of samples with highly similar images from video frames is quite different from that of humans. We may improve it from the perspective of image difference modelling. 3) The experimental results indicate that our method is effective at logical inference on examples with medium-length descriptions, but there is still room for improvement for longer descriptions. ## Ethics Statement IMAGECODE (Krojer et al., 2022) is an open data set used for scientific research. For ablation studies in the test set, we hired masters and undergraduate students from the research group to re-annotate the label of the test set. We have informed the creators of the data set and only conducted scientific research. ## Acknowledge We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of Natural Science Foundation of China (No.62006061, 61872107) and the Stable Support Program for Higher Education Institutions of Shenzhen (No.GXWD2020123015542700320200824155011001). ## References Thomas Anthony, Zheng Tian, and David Barber. 2017. Thinking fast and slow with deep learning and tree search. *Advances in Neural Information Processing* Systems, 30. Yoshua Bengio. 2017. The consciousness prior. *arXiv* preprint arXiv:1709.08568. Yoshua Bengio. 2019. From system 1 deep learning to system 2 deep learning. In *NeurIPS 2019*. Yoshua Bengio, Yann Lecun, and Geoffrey Hinton. 2021. Deep learning for ai. Communications of the ACM, 64(7):58–65. Hanxiong Chen, Shaoyun Shi, Yunqi Li, and Yongfeng Zhang. 2021. Neural collaborative reasoning. In *Proceedings of the Web Conference 2021*, pages 1516– 1527. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In *ECCV*. Pradipto Das, Chenliang Xu, Richard F. Doell, and Jason J. Corso. 2013. 
A thousand frames in just a few words: Lingual description of videos through latent topics and sparse object stitching. In *2013 IEEE Conference on Computer Vision and Pattern Recognition*, pages 2634–2641. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? *Journal of Machine Learning* Research, 11(19):625–660. Jonathan St BT Evans. 2003. In two minds: dualprocess accounts of reasoning. Trends in cognitive sciences, 7(10):454–459. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Philip M Groves and Richard F Thompson. 1970. Habituation: a dual-process theory. *Psychological review*, 77(5):419. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980. Benno Krojer, Vaibhav Adlakha, Vibhav Vineet, Yash Goyal, Edoardo Ponti, and Siva Reddy. 2022. Image retrieval from contextual descriptions. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 3426–3440, Dublin, Ireland. Association for Computational Linguistics. Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. *The annals of mathematical statistics*, 22(1):79–86. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. 2020. The open images dataset v4. *International Journal of Computer Vision*, 128(7):1956– 1981. Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, and Steven Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. In *NeurIPS*. Junnan Li, Yongkang Wong, Qi Zhao, and Mohan S. Kankanhalli. 2020a. Video storytelling: Textual summaries for events. *IEEE Transactions on Multimedia*, 22(2):554–565. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557. Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020b. Oscar: Object-semantics aligned pre-training for vision-language tasks. Zhixuan Liu, Zihao Wang, Yuan Lin, and Hang Li. 2022. A neural-symbolic approach to natural language understanding. *arXiv preprint arXiv:2203.10557*. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019. 
Multi-hop reading comprehension through question decomposition and rescoring. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6097–6109, Florence, Italy. Association for Computational Linguistics. Sudip Mittal, Anupam Joshi, and Tim Finin. 2017. Thinking, fast and slow: Combining vector spaces and knowledge graphs. *arXiv preprint* arXiv:1708.03310. Christina Niklaus, André Freitas, and Siegfried Handschuh. 2019. MinWikiSplit: A sentence splitting corpus with minimal propositions. In Proceedings of the 12th International Conference on Natural Language Generation, pages 118–123, Tokyo, Japan. Association for Computational Linguistics. Thierry Pelaccia, Jacques Tardif, Emmanuel Triby, and Bernard Charlin. 2011. An analysis of clinical reasoning through a recent and comprehensive approach: the dual-process theory. *Medical education online*, 16(1):5890. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *Proceedings of the 38th International* Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR. Shaoyun Shi, Hanxiong Chen, Weizhi Ma, Jiaxin Mao, Min Zhang, and Yongfeng Zhang. 2020. Neural logic reasoning. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 1365–1374. Douglas R Smith. 1985. The design of divide and conquer algorithms. *Science of Computer Programming*, 5:37–58. Alon Talmor, Ori Yoran, Ronan Le Bras, Chandrasekhar Bhagavatula, Yoav Goldberg, Yejin Choi, and Jonathan Berant. 2021a. Commonsenseqa 2.0: Exposing the limits of ai through gamification. In NeurIPS Datasets and Benchmarks. Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, and Jonathan Berant. 2021b. Multimodal{qa}: complex question answering over text, tables and images. In *International Conference on* Learning Representations. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100–5111, Hong Kong, China. Association for Computational Linguistics. Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. 2022. Winoground: Probing vision and language models for visio-linguistic compositionality. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 5238– 5248. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 23318–23340. PMLR. Tomer Wolfson, Daniel Deutch, and Jonathan Berant. 2022. 
Weakly supervised text-to-SQL parsing through question decomposition. In *Findings of the* Association for Computational Linguistics: NAACL 2022, pages 2528–2542, Seattle, United States. Association for Computational Linguistics. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5288–5296. Haoyu Zhang, Jingjing Cai, Jianjun Xu, and Ji Wang. 2019. Complex question decomposition for semantic parsing. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4477–4486, Florence, Italy. Association for Computational Linguistics. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Making visual representations matter in vision-language models. ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? limitations ✓ A2. Did you discuss any potential risks of your work? section 4.5 and limitations ✓ A3. Do the abstract and introduction summarize the paper's main claims? Abstract and 1 introduction ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** Comparative Models In The Experiment Section. ✓ B1. Did you cite the creators of artifacts you used? experiment ✗ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? All dataset and comparative models are open and can be used in research purpose. ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? experiments ✗ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? we do not create any data set and conduct experiments on released public data set. ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? we present comparative models and datasets in the experiments section. ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. section 4.1 data sets ## C ✓ **Did You Run Computational Experiments?** Experiment ✗ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? all models are run on the same device. we report the used version of pretrained models in the paper. The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance. ✓ C2. 
Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? implementation details ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? result analysis: table 1, 2, and 3. ✗ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? our experiments do not use preprocessing tools and run all models on the python environment. ## D ✓ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?** Ethics Statement Or Ablation Study D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank. ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? ethics statement ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? ethics statement D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. we do not collect the new data and only re-annotate the label of test to conduct the ablation studies. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? masters from the research group; in ethics statement
gao-etal-2023-rarr
{RARR}: Researching and Revising What Language Models Say, Using Language Models
https://aclanthology.org/2023.acl-long.910
Language models (LMs) now excel at many tasks such as question answering, reasoning, and dialog. However, they sometimes generate unsupported or misleading content. A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence. To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model, and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible. When applied to the output of several state-of-the-art LMs on a diverse set of generation tasks, we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models. Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.
# Rarr: Researching And Revising What Language Models Say, Using Language Models Luyu Gao1⋄∗ Zhuyun Dai2∗ Panupong Pasupat2∗ **Anthony Chen**3⋄∗ Arun Tejasvi Chaganty2∗ Yicheng Fan2∗ Vincent Y. Zhao2 **Ni Lao**2 Hongrae Lee2 Da-Cheng Juan2 **Kelvin Guu**2∗ 1Carnegie Mellon University, 2Google Research, 3UC Irvine luyug@cs.cmu.edu anthony.chen@uci.edu {zhuyundai,ppasupat,arunchaganty,yichengfan,vzhao,nlao,hrlee,dacheng,kguu}@google.com ## Abstract Language models (LMs) now excel at many tasks such as question answering, reasoning, and dialog. However, they sometimes generate unsupported or misleading content. A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for *attribution* to external evidence. To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (*Retrofit Attribution using Research and* Revision), a system that 1) automatically finds attribution for the output of any text generation model, and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible. When applied to the output of several state-of-the-art LMs on a diverse set of generation tasks, we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models. Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.1 ## 1 Introduction Generative language models (LMs) and other text generation models are now the backbone of many AI systems. For example, large language models can perform multi-step reasoning (Nye et al., 2021; Wei et al., 2022), generate plans (Ahn et al., 2022), use tools and APIs (Shin et al., 2021; Thoppilan et al., 2022), and answer open-domain questions (Petroni et al., 2019; Roberts et al., 2020). Despite these incredible advances, state-of-theart LMs still frequently produce biased, misleading, ∗Lead contributors. Please see Contributions section for details. ⋄Work done during an internship at Google Research. 1We release open-source implementations of RARR, the evaluation pipeline, and the evaluation sets at https:// github.com/anthonywchen/RARR. ![0_image_0.png](0_image_0.png) Figure 1: The *Editing for Attribution* **task.** The input x is a text passage produced by a generation model. Our *Research & Revision* model outputs an attribution report A containing retrieved evidence snippets, along with a revision y whose content can be *attributed* to the evidence in A while *preserving* other properties of x such as style or structure. or unsupported content, colloquially called "hallucinations" (Maynez et al., 2020; Menick et al., 2022). To make LMs more trustworthy, we want to justify each generation by an *attribution report* (Rashkin et al., 2021; Bohnet et al., 2022) that contains supporting evidence from trusted sources (e.g., encyclopedia or articles) where appropriate. Most existing LMs, such as those based on sequence-to-sequence architectures, lack a builtin mechanism for attribution. Even *retrievalaugmented models* (Guu et al., 2020; Lewis et al., 2020), which retrieve relevant documents and then condition on them to generate text, still do not guarantee attribution. 
Prior work has shown that 16477 retrieval-augmented models generate text that either includes additional information outside the retrieved documents (Dziri et al., 2022), ignores the documents altogether (Krishna et al., 2021), or even contradicts the documents (Longpre et al., 2021). In fact, occasionally ignoring the retrievals can make the models more robust to bad retrievals (Khandelwal et al., 2020), illustrating that end-task performance and attribution are not always aligned. Instead of constraining LMs to generate attributed text, we propose a model-agnostic approach to improve the attribution of any existing LM: *Retrofit Attribution using Research and Revision* (RARR). The approach is inspired by works on fact-checking2 where simple research-and-revise workflows are effective at attributing or correcting unattributed claims made by humans (Thorne et al., 2018; Schuster et al., 2021; Thorne and Vlachos, 2021). As shown in Figure 1, after generating text with the LM, RARR does *research* to retrieve relevant evidence, and then *revises* the text to make it consistent with the evidence while preserving qualities like style or structure, enabling the revised text to be seamlessly used in place of the original. RARR can be viewed as a retrievalaugmented model where retrieval happens *after* generation rather than before. This allows RARR to stand on the shoulders of giant LMs without having to modify them to support attribution. In our effort to expand the scope of Research & Revision models to handle the output of arbitrary LMs, we make the following contributions. First, we formalize the *Editing for Attribution* **task** and propose new **metrics** that evaluate revision models not just on their ability to produce well-attributed revisions, but also on their ability to otherwise *preserve* original properties of the text. Second, we use these metrics to **benchmark** how existing revision models perform on various types of LM outputs such as knowledge-intensive statements, reasoning chains, and dialog responses. Finally, we find that existing revision models do not always generalize across many tasks (and were not originally intended to), and therefore propose a new research-and-revise **model** that leverages the power of few-shot prompting in large language models to robustly generalize across domains. ![1_image_0.png](1_image_0.png) ## 2 Task Formulation We propose the task of *Editing for Attribution* as follows. As Figure 1 shows, the input to the system is a text passage x produced by a generation model. The output is a revised text passage y along with an *attribution report* A, which contains evidence snippets e1*, . . . , e*M that support the content in y. Optionally, the attribution report can contain additional information such as the alignment between evidence snippets and relevant parts in y. We propose to measure the quality of the revised text y and attribution report A along two dimensions: (1) **attribution**: how much of the revised text y can be attributed to the evidence in A, and (2) **preservation**: how much the revised text y preserves aspects of the original text x. ## 2.1 Measuring Attribution Previously, Rashkin et al. (2021) proposed *Attributable to Identified Sources* (AIS), a human evaluation framework which considers a binary notion of attribution. Roughly speaking, a text passage y is attributable to a set A of evidence if a generic hearer would affirm the statement "According to A, y" under the context of y. 
A system either receives full credit (1.0) if all content in y can be attributed to A, and no credit (0.0) otherwise. We propose a more fine-grained, sentence-level extension of AIS. We ask annotators to give an AIS score for each sentence s of y, and then report the average AIS score across all sentences: $$\operatorname{Attr}_{\mathrm{AIS}}(y,A)=\operatorname*{avg}\operatorname{AIS}(s,A).\qquad(1)$$ Since the AIS score is binary, this effectively measures the percentage of sentences in y that are fully attributed to A. When judging each sentence, we also give annotators access to the surrounding sentences and other necessary context, such as the question that the text passage responded to. We also impose the maximum number of evidence snippets in the attribution report A to make it concise enough for both the annotator and downstream users. By manually inspecting 30 examples from our benchmarks, we found M = 5 snippets to be sufficient for full attribution. During model development, we define an automated metric, auto-AIS (Attrauto), that approximates human AIS judgments. We utilize the natural language inference (NLI) model from Honovich et al. (2022), which correlates well with AIS scores. For each sentence s of y, and for each evidence snippet e in A, let NLI(*e, s*) be the model probability of e entailing s. We then define $$\operatorname{Attr}_{\mathrm{auto}}(y,A)=\operatorname*{avg}\operatorname*{max}_{s\in y}\operatorname{NLI}(e,s).\quad\quad(2)$$ To improve accuracy, we decontextualize (Choi et al., 2021) each sentence based on the entire context of y before computing the scores. See Appendix B for implementation details. ## 2.2 Measuring Preservation To measure preservation, we first ask annotators to decide if the revision preserves the text's original intent (completely, somewhat, or not at all - see Appendix C for exact rubrics). Like AIS evaluation, we give annotators the necessary surrounding context. We define the binary metric Presintent(*x, y*) to be 1.0 if the revision completely preserves the original intent, and 0.0 otherwise. However, even if a revision preserves intent, it may still make superfluous modifications, such as reordering words, changing textual style, or including unnecessary additional information (Thorne and Vlachos, 2021). Different tasks have different requirements for what should be preserved. Here, we desire a simple metric that can be readily computed for many tasks and that generally penalizes unnecessary changes. We thus define a metric based on the character-level Levenshtein edit distance (Levenshtein, 1965) between x and y: $$\operatorname{Pres}_{\mathrm{Lev}}(x,y)=\operatorname*{max}{\bigg(}1-{\frac{\operatorname{Lev}(x,y)}{\operatorname{length}(x)}},0{\bigg)}\quad{\mathrm{(3)}}$$ This metric is 1.0 if x and y are the same, and 0.0 if y completely overwrites all parts of x. PresLev is generally sensitive to any kind of change, but certainly does not capture all notions of preservation (e.g., preserving rhyme schemes or puns). We want the revision to preserve the original intent while avoiding superfluous edits. To reflect this, we finally combine the two metrics as $$\text{Pres}_{\text{comb}}(x,y)=\text{Pres}_{\text{intent}}(x,y)\cdot\text{Pres}_{\text{Lev}}(x,y).\tag{4}$$ which is 0.0 if the revision changes the intent and equal to PresLev(*x, y*) otherwise. Since Presintent requires human annotation, we use PresLev as an automated metric for model development. 
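A minimal sketch of this preservation metric is shown below; it assumes the off-the-shelf python-Levenshtein package for the edit distance and is meant as an illustration rather than the exact evaluation code.

```python
import Levenshtein  # pip install python-Levenshtein (assumed dependency)

def pres_lev(x: str, y: str) -> float:
    """Character-level preservation score from Eq. 3: 1.0 when y == x,
    0.0 when the edit distance reaches (or exceeds) the length of x."""
    if len(x) == 0:
        return 0.0
    return max(1.0 - Levenshtein.distance(x, y) / len(x), 0.0)

pres_lev("The nasal cycle switches every 45 minutes.",
         "The nasal cycle switches every 2 hours.")   # roughly 0.8: a small, targeted edit
```

Small, targeted revisions therefore keep the score close to 1, while a wholesale rewrite drives it toward 0.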
## 2.3 Discussion Optimizing for attribution alone cannot ensure a good revision: for example, an adversarial editor could ensure 100% attribution by simply replacing the input x with the text of any arbitrary retrieved document, which is trivially attributable to itself. Ideally, we want to maximize both attribution and preservation, while navigating any tradoffs between the two. In our experiments, we report both metrics, as well as their harmonic mean (F1AP, analogous to how recall and precision are combined in F1). We emphasize that this evaluation scheme does not require any "gold" or "reference" edits (unlike many prior evaluations of text revision models), which are often only available for specialized domains. This enables us to broaden the scope to a much wider range of generation tasks. ## 3 Approach We now present *Retrofit Attribution using Research* and Revision (RARR), a simple method for solving the *Editing for Attribution* task. As illustrated in Figure 2, given an input passage x, the research stage first generates a set of queries {q1*, ..., q*N }, each investigating one aspect of x that potentially ``` (a) Query generation x → {q1, . . . , qN } You said: Your nose switches back and forth between nostrils. When you sleep, you switch about every 45 minutes. This is to prevent a buildup of mucus. It's called the nasal cycle. To verify it, a) I googled: Does your nose switch between nostrils? b) I googled: How often does your nostrils switch? c) I googled: Why does your nostril switch? d) I googled: What is nasal cycle? (b) Agreement model (y, q, e) → {0, 1} You said: Your nose switches . . . (same as above). . . nasal cycle. I checked: How often do your nostrils switch? I found this article: Although we don't usually notice it, during the nasal cycle one nostril becomes congested and thus contributes less to airflow, while the other becomes decongested. On average, the congestion pattern switches about every 2 hours, according to a small 2016 study published in the journal PLOS One. Your nose's switching time is about every 2 hours, not 45 minutes. This disagrees with what you said. (c) Edit model (y, q, e) → new y You said: Your nose switches . . . (same as above). . . nasal cycle. I checked: How often do your nostrils switch? I found this article: Although we . . . (same as above). . . PLOS One. This suggests 45 minutes switch time in your statement is wrong. My fix: Your nose switches back and forth between nostrils. When you sleep, you switch about every 2 hours. This is to prevent a buildup of mucus. It's called the nasal cycle. ![3_image_0.png](3_image_0.png) ``` Figure 3: **Examples of few-shot examples** used to ![3_image_1.png](3_image_1.png) prompt the PaLM model (blue = input; red = output). requires attribution. For each query qi, it retrieves web documents and selects the best evidence snippets {ei1, ei2*, . . .* }. The revision stage then revises the original text x using the retrieval results {(q1, e11)*, . . .* }, yielding a revised text y. Most components for RARR are implemented using few-shot prompting (Brown et al., 2020). We use PaLM (Chowdhery et al., 2022) as our language model. Figure 3 shows some few-shot examples we use, while Appendix D lists the full prompts. ## 3.1 Research Stage Query generation We perform comprehensive question generation (CQGen) which produces a sequence of questions covering *all aspects* of the passage x that need to be verified and attributed. 
A similar strategy has been employed to train textplanning models (Narayan et al., 2022). A prompt with six human demonstrations was sufficient for PaLM to adequately learn the task. To increase diversity and coverage, we sample from our CQGen model three times and take the union of the resulting queries. Evidence retrieval For each query from CQGen, we use Google Search to retrieve K = 5 web pages. We extract candidate evidence snippets from each web page by running a sliding window of four sentences across the page, breaking at document headings. The evidence snippets for each query are then ranked based on their relevance to the query. For this, we use an existing query-document relevance model trained following Ni et al. (2021), which computes a relevance score Srelevance(*q, e*) between a query q and an evidence snippet e. We then keep the top J = 1 evidence for each query. The final retrieval result is [(q1, e11)*, . . . ,*(q1, e1J ), . . . ,(qN , eN1), . . . ,(qN , eNJ )], where eij denotes the j th evidence for the i th query, and N denotes the total number of queries from CQGen (which can be different for each input x). ## 3.2 Revision Stage After retrieving evidence, certain parts of x may ![3_image_2.png](3_image_2.png) now be properly attributed, but other parts remain unattributed and should be revised. As illustrated in Figure 2, the revision stage initializes the output y = x. Then for each retrieved (*q, e*) = (qi, eij ), the *agreement model* checks if the evidence e disagrees with the current output y regarding the issue in query q. If a disagreement is detected, the edit model edits y to agree with e; otherwise, it does nothing. The process continues until all retrievals are processed. Agreement model The agreement model takes the partially edited passage y, a query q, and the evidence e as input. It then decides whether both y and e imply the same answer to the question in q. This form of question-guided agreement was previously explored by Honovich et al. (2021). We implement this by few-shot prompting PaLM using a chain-of-thought style prompt (Wei et al., 2022), where we ask the model to explicitly state the implied answers for both y and e before producing its judgment about their agreement. Edit model The edit model is run only if a disagreement is detected. The model takes y, q and e as input, and outputs a new version of y that aims to agree with e while otherwise minimally altering y. We again use few-shot prompting and chain-ofthought, where we ask the model to first identify a particular span in y that needs to be edited before generating the revised y. This helps reduce the editor's deviation from the current y. 3 ## 3.3 Attribution Report Finally, we select at most M = 5 evidence snippets to form an attribution report A. Note that during evidence retrieval and revision, we may have encountered and used more than M snippets. Our goal is to find a subset of snippets that maximizes *coverage* over the potentially attributable points in the passage, as represented by the queries q1*, . . . , q*N . We use the relevance model from Section 3.1 as a proxy for measuring how much an evidence e covers the point raised by a query q. Then, we exhaustively search for A ⊆ {e11*, . . . 
, e*NJ } of size at most M that maximizes $$\text{Cover}(A,q_{1:N}):=\sum_{i=1}^{N}\max_{e\in A}S_{\text{relevance}}(q_{i},e).\tag{5}$$ Fact-checking Our research builds upon works to identify whether a claim is supported or refuted by the given evidence (Thorne et al., 2018; Wang, 2017; Karadzhov et al., 2017; Augenstein et al., 2019; Wadden et al., 2020). In real-world scenarios such as the one which RARR operates in, relevant evidence may not be provided, necessitating retrieval (Fan et al., 2020; Piktus et al., 2021). Post-hoc editing for factuality Recent work has gone beyond checking the validity of a claim to correcting a piece of text to be factually consistent with a set of evidence via post-hoc editing (Shah et al., 2020; Thorne and Vlachos, 2021; Schuster et al., 2021; Balachandran et al., 2022; Cao et al., 2020; Iso et al., 2020). FRUIT (Logan IV et al., 2022) and PEER (Schick et al., 2022) both implement an editor that is fine-tuned on Wikipedia edit history with the goal of updating outdated information and collaborative writing respectively. Evidence-based Factual Error Correction (EFEC; Thorne and Vlachos, 2021) also implements a full research-and-revise workflow trained on Wikipedia passages (Thorne et al., 2018). A key differentiator of RARR is its ability to edit the output of any generation model without being restricted by the domain, task, or the need for training data. Measuring attribution A key part of improving attribution is being able to quantify it. Apart from human evaluation (Rashkin et al., 2021), several automated evaluation methods have been proposed. Our work uses an entailment-based metric, which measures whether the referenced evidence entails PaLM outputs on NQ (factoid statements) ![4_image_0.png](4_image_0.png) Millie Inbetween is a British comedy television series. It premiered on ![4_image_1.png](4_image_1.png) the output text (Bohnet et al., 2022; Kryscinski et al., 2020; Goyal and Durrett, 2021). A common alternative is to evaluate whether the output text contains the same factual information as the evidence; e.g., by checking if both yield the same answer to the same question (Wang et al., 2020). We use this notion of attribution in RARR's agreement model rather than for evaluation. Retrieval-augmented models Models with a retrieval component have seen successes in question answering (Chen et al., 2017; Lee et al., 2019; Nakano et al., 2021), machine translation (Zhang et al., 2018), code generation (Hayati et al., 2018), language modeling (Khandelwal et al., 2020), and other knowledge-intensive tasks (Lewis et al., 2020). Their retrievals are not necessarily attributions (Dziri et al., 2022; Longpre et al., 2021) and typically are not used to revise an existing output. An exception is LaMDA (Thoppilan et al., 2022), a language model for dialog that performs revision by training on human annotations. ## 5 Experiments 5.1 Evaluation Setups RARR aspires to be a general-purpose method for improving the attribution of any text generation model in any text domain. We thus construct evaluation benchmarks by taking the task input from three diverse datasets, and prompting different generation models to produce *long-form outputs* which may contain "hallucinations," as demonstrated in Figure 4. These long-form outputs serve as input text passages to RARR. We generate 150 development and 150 test passages for each combination of generation model and source dataset. 
**Factoid statements** We prompt PaLM 540B and GPT-3 text-davinci-002 to generate long-form answers to questions from the Natural Questions dev set (NQ; Kwiatkowski et al., 2019). The resulting passages are mostly coherent but often contain factual errors. This setup examines the ability to attribute a diverse range of factoid knowledge.

**Reasoning chains** Language models can generate reasoning chains to answer complex questions (Wei et al., 2022). We use PaLM and GPT-3 to generate reasoning chains for the StrategyQA train set (SQA; Geva et al., 2021). This setup tests whether the revision model can provide better attribution for intermediate steps of reasoning, while preserving the overall reasoning process.

**Knowledge-intensive dialogs** We consider the conversational QA task from the QReCC dev set (Anantha et al., 2021). Given the previous dialog turns, which are rounds of questions and answers (Q1, A1, Q2, A2, . . . , Qk), we use LaMDA and GPT-3 to answer the final question Qk conditioned on the dialog history. The answer tends to be context-dependent, featuring pronouns and implicit references. All dialog turns are given alongside the answer as inputs to the revision model.

## 5.2 Models

We compare RARR to several systems that have a research-and-revise workflow.

**EFEC** We consider EFEC (Thorne and Vlachos, 2021) as a representative fine-tuned editor. EFEC fine-tunes a T5-based model to revise text conditioned on multiple evidence snippets using both semi-supervised and fully-supervised approaches. We compare against their fully-supervised approach, which performed best in their experiments. EFEC uses a neural retrieval model (Karpukhin et al., 2020) to retrieve from Wikipedia; however, not all passages in our experiments are supported by Wikipedia articles. To more fairly compare the editing capabilities of EFEC, we instead use the evidence retrieved by our research stage (CQGen and web search). Note that the EFEC editor conditions on multiple pieces of evidence at once, while our editor iteratively conditions on one at a time.

**LaMDA** LaMDA (Thoppilan et al., 2022) generates responses in three steps: 1) generate a "base response"; 2) generate search queries from the base response; 3) generate a "revised response" conditioned on the base response and retrieved evidence. To apply LaMDA on a given text x, we simply set the base response in step 1 to x, and then run steps 2 and 3 (we call these latter two stages "LaMDA Research"). LaMDA was trained as a dialog system, and always expects a dialog context where the user speaks first.
So, for non-dialog tasks, we insert an artificial user utterance as dialog history: "Tell me something interesting." For the attribution report, we take all evidence documents retrieved by LaMDA during its research process.

**RARR** Our model uses few-shot prompting on PaLM 540B for query generation, the agreement model, and the edit model. We use the same prompts for all tasks except when the context comes from a dialog, where we slightly modify the prompts to use the dialog context (e.g., CQGen now maps dialog context + x to queries). The query-evidence relevance model Srelevance is a pretrained T5-large model (Raffel et al., 2020) fine-tuned following Ni et al. (2021) on MS MARCO (Nguyen et al., 2016). See Appendix D for the few-shot prompting strategies and more modeling details.

## 5.3 Results

For the main experiments, we report results on passages generated by PaLM and LaMDA. Results on GPT-3 passages show similar trends (Appendix A). Table 1 and Figure 5 show attribution and preservation results for each model and dataset. We also report F1AP, the harmonic mean of the two metrics, which is shown as level curves in Figure 5.

| Model | auto-AIS (Attr.) | AIS (Attr.) | intent (Pres.) | Lev (Pres.) | comb (Pres.) | F1AP |
|---|---|---|---|---|---|---|
| PaLM outputs on NQ | | | | | | |
| EFEC | 45.6 → 64.3 | 35.4 → 48.3 | 16.0 | 39.1 | 10.4 | 17.1 |
| LaMDA | 39.5 → 49.9 | 18.3 → 30.4 | 26.0 | 39.6 | 21.1 | 24.9 |
| RARR | 45.6 → 54.9 | 35.4 → 43.4 | 90.0 | 89.6 | 83.1 | 57.0 |
| PaLM outputs on SQA | | | | | | |
| EFEC | 37.8 → 58.6 | 24.5 → 51.7 | 6.0 | 31.0 | 3.8 | 7.1 |
| LaMDA | 32.7 → 43.2 | 15.8 → 27.0 | 40.0 | 46.4 | 33.7 | 30.0 |
| RARR | 37.6 → 45.1 | 24.5 → 31.5 | 92.6 | 89.9 | 84.6 | 45.9 |
| LaMDA outputs on QReCC | | | | | | |
| EFEC | 19.1 → 47.4 | 13.2 → 48.7 | 39.7 | 39.4 | 23.7 | 31.9 |
| LaMDA | 16.4 → 36.2 | 16.0 → 27.1 | 21.3 | 24.8 | 12.0 | 16.6 |
| RARR | 18.8 → 29.4 | 13.2 → 28.3 | 95.6 | 80.2 | 78.1 | 41.5 |

Table 1: Attribution and preservation results for each model and dataset.

RARR significantly improves attribution while preserving most of the original text. In terms of F1AP, RARR is the only method that performs robustly across all three datasets, and significantly outperforms prior methods on NQ and SQA. We found that RARR is the only method that preserves the original intent of x over 90% of the time; EFEC and LaMDA only manage to preserve the original intent 6–40% of the time. We also see that editing is crucial to improve attribution: if we only retrieve evidence to support the original response x without editing, attribution ranges from the low 10s to mid 30s. After editing, RARR can increase attribution by up to 13% absolute, while changing only 10–20% of the text.

As noted in Section 2, one can sacrifice preservation for higher attribution. EFEC is able to obtain strong F1AP on QReCC by making larger changes to the text in exchange for a higher attribution score. However, it occupies a very different point from RARR on the attribution-preservation tradeoff curve, as visualized in Figure 5.

## 6 Analysis

## 6.1 Qualitative Analysis

**Human oracle** To understand the remaining headroom in our task, we ask: what is the minimal amount of editing needed to make a text passage fully attributed? The answer would depend on the quality of the LM that generated the text as well as the task difficulty. As an approximation, we manually edited 30 examples in our NQ benchmark until we judged them to be 100% attributable. We achieved a preservation score of 88%, which (when combined with 100% attribution) translates to 93.6 F1AP, indicating significant headroom.

(Figure 6 excerpt: "... report in **2015**." Attribution: 100%, Preservation: 100%. Evidence: "The 7th Central Pay Commission (Chair: Justice A. K. Mathur) **submitted its report on November 19, 2015**. The Commission had been **appointed in February 2014**, to look at remuneration for central government employees. . . .")

**Analyzing the baselines** As exemplified in Figure 6, EFEC frequently attempts to summarize the entire passage into one sentence, or drops later sentences. This is likely due to EFEC's training data, which was limited to single sentences. This behavior generally increases the attribution score, because it is usually easier to make one sentence fully attributable than many sentences.
However, in datasets where the claim contains multiple sentences (NQ and SQA), such a behavior yields low preservation scores, and also results in outputs that are less informative. We expect that EFEC could perform much better if its training data were augmented to include multiple sentences.

LaMDA Research achieves similar attribution scores to RARR. But as mentioned in Section 5.2, the intent and linguistic style of the output tend to deviate from the input, resulting in lower preservation scores (Figure 6). We emphasize that this is not a purely apples-to-apples comparison since LaMDA was not optimized for preservation. Overall, these experiments are mainly meant to illustrate that prior models were simply not designed for the task of Editing for Attribution, rather than to mark RARR as the best method.

**Analyzing RARR** For the research stage, the question generation model had comprehensive coverage: a manual inspection of 40 examples shows > 80% with questions that fully cover all aspects of the input text. The retriever was strongest at researching content involving distinct entities (e.g., a movie, a major event, or a person). In contrast, we found significant headroom for better attribution of statements involving generic objects and more abstract claims (e.g., "Video games require electricity."; since this is obvious to most humans, retrieved articles from the web tend to address related but different topics). We suspect that a significant amount of attribution headroom on our benchmarks would benefit from a better research stage.

Figure 7: Example revisions (y: current passage, e: retrieved evidence, y′: revised passage).

(a) **Correctly revising an entity**
y: If She Knew What She Wants was written by **Henry Roth**.
e: [en.wikipedia.org] "If She Knew What She Wants" is a song written by American singer-songwriter **Jules Shear** and introduced on . . .
y′: If She Knew What She Wants was written by **Jules Shear**.

(b) **Correctly revising a number**
y: God Save the Queen became the British national anthem in **1745**. . . .
e: [www.britannica.com] The oldest national anthem is Great Britain's "God Save the Queen," which was described as a national anthem in **1825**, . . .
y′: God Save the Queen became the British national anthem in **1825**. . . . (The year 1745 was when the song was first performed.)

(c) **Performing a necessary larger revision**
y: "It's My Party" is a song written and composed by American singer-songwriter and producer **Walter Gold**. The song was published in **1962** and first recorded by American singer-songwriter, **producer and arranger** Lesley Gore.
e: [secondhandsongs.com] Song: It's My Party written by **John Gluck, Wally Gold, Seymour Gottlieb, Herbert Weiner** . . .
y′: "It's My Party" is a song written and composed by **John Gluck, Wally Gold, Seymour Gottlieb and Herb Weiner**. The song was published in **1963** and first recorded by American **singer** Lesley Gore.

(d) **Confused by misleading evidence**
y: In the Heat of the Night is a police procedural crime drama TV series produced by MGM Television. It aired on NBC from September 8, 1988 to May 3, 1992. . . .
e: [metv.com] Watch In the Heat of the Night on **MeTV**.
y′: In the Heat of the Night is a police procedural crime drama TV series produced by MGM Television. It aired on **MeTV** from September 8, 1988 to May 3, 1992. . . . (The show did actually air on NBC in the 1980s, but the model fails to realize that MeTV is a re-run much later in time.)

(e) **Improved attribution, but reasoning doesn't adapt to new info** (Q: Does Homer Simpson need two hands worth of fingers to count to 5?)
y: Homer Simpson has 5 fingers and 2 hands. Therefore, he does not need two hands worth of fingers to count to 5. He only need one hand to count to 5.
e: [refinery29.com] Other iconic animated characters, such as Homer Simpson and his brood, . . . only have **four fingers**, too.
y′: Homer Simpson has 4 fingers and 2 hands. Therefore, he does not need two hands worth of fingers to count to 5. He only need one hand to count to 5. (Having only 4 fingers per hand, he does need two hands to count to 5.)
For the revision stage, RARR was able to revise many unattributed claims, especially those involving entities and numbers (Figures 7a and 7b). It can also perform larger revisions when necessary (Figure 7c). Moreover, RARR abstains from editing when the claim is already well-attributed: on NQ, among the inputs with near-perfect attribution (pre-edit AttrAIS > 0.9), RARR does not make an edit in 90% of the cases. However, the system also has several shortcomings. Some erroneous edits arise from misleading irrelevant evidence (Figure 7d). We also observed an interesting challenge when revising reasoning chains, where the model successfully revised an incorrect claim, but did not revise subsequent reasoning steps that depend on the earlier claim (Figure 7e). In this case, further editing to improve logical coherence could help.

## 6.2 Ablations

**Ablating query generation** RARR uses generated questions as search queries for evidence retrieval. We consider two natural alternatives: using the entire input passage as a single search query, or using each sentence as a search query. For the former, we retrieve J = 3 evidence snippets to make the amount of evidence a closer match to the other methods. The results are in Table 2.

Using the entire input passage as the query gives poor results, as the retrieved evidence tends to not focus on potentially unattributed parts in the passage. Using sentences as queries gives results closer to the full CQGen, but a closer analysis reveals two caveats. First, sentences-as-queries are more effective when such sentences "mimic" content on the Web, and are less effective otherwise. In Table 3, we test this by excluding all of Wikipedia from web search results (since many PaLM outputs for NQ have a Wikipedia style). The attribution performance of sentences-as-queries drops significantly, while CQGen is more robust. Second, sentences-as-queries tend to retrieve passages that may encourage confirmation bias. Consider the example "Georgia is called the Peach State, but California actually produces the most peaches." Retrieval using sentences-as-queries found an article echoing that California produces the most peaches, while CQGen generated the more impartial query "Which state produces the most peaches?" and found a newer article saying that South Carolina replaced California as the top peach producer. In this case, RARR using CQGen needs to sacrifice more preservation score to edit the text, leading to a lower F1AP score. This underscores that attribution alone cannot measure "correctness" since not all evidence is up-to-date or reliable.

**Ablating agreement model** We try removing the agreement model, which effectively forces the model to revise the passage based on every piece of retrieved evidence. The results are shown in Table 2. As expected, more revision leads to lower preservation scores and spurious changes to the text passage, as demonstrated in Figure 8.
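To make concrete what the agreement gate contributes, the sketch below spells out the revision loop from Section 3.2; setting the gate flag to False corresponds to the "no agreement model" ablation. This is an illustrative sketch only: `disagrees` and `edit` stand in for the few-shot prompted PaLM calls, and the edit-size filter follows the threshold described in Appendix D.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def revise(x, retrievals, disagrees, edit, use_agreement_gate=True):
    """Iterative revision loop: `retrievals` is the list of (query, evidence)
    pairs from the research stage; `disagrees(y, q, e)` and `edit(y, q, e)`
    are placeholders for the prompted agreement and edit models."""
    y = x
    for q, e in retrievals:
        if use_agreement_gate and not disagrees(y, q, e):
            continue  # evidence does not contradict y on this query; keep y
        y_new = edit(y, q, e)
        # Reject overly large edits (more than 50 characters of edit
        # distance, or more than half of the text length).
        if levenshtein(y, y_new) <= min(50, len(y) // 2):
            y = y_new
    return y
```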
| Model | Attrauto | PresLev | F1AP | Attrauto | PresLev | F1AP | Attrauto | PresLev | F1AP |
|---|---|---|---|---|---|---|---|---|---|
| | PaLM outputs on NQ | | | PaLM outputs on SQA | | | LaMDA outputs on QReCC | | |
| Full RARR | 45.6 → 54.9 | 89.6 | 68.1 | 37.6 → 45.1 | 89.9 | 60.0 | 18.8 → 29.4 | 80.2 | 43.1 |
| no agreement model | 45.6 → 50.6 | 82.6 | 62.8 | 37.8 → 46.9 | 83.4 | 60.0 | 18.8 → 28.8 | 72.0 | 41.2 |
| query = input | 45.4 → 47.2 | 98.4 | 63.8 | 39.4 → 30.3 | 98.8 | 46.4 | 19.7 → 20.6 | 96.3 | 34.0 |
| query = sentence | 49.1 → 52.1 | 97.0 | 67.8 | 43.7 → 44.3 | 98.8 | 61.2 | 19.0 → 19.6 | 97.0 | 32.6 |

Table 2: Ablation results (automatic metrics: Attrauto, PresLev, and their harmonic mean F1AP).

| Model | NQ F1AP (orig) | NQ F1AP (no wiki) | SQA F1AP (orig) | SQA F1AP (no wiki) |
|---|---|---|---|---|
| Full RARR | 68.1 | 64.3 | 60.0 | 57.6 |
| query = sentence | 67.8 | 60.3 | 61.2 | 56.7 |

Table 3: **The impact of excluding Wikipedia from the retrieval corpus.** CQGen (full RARR) is more robust to Wikipedia's absence, while using sentences-as-queries suffers a bigger drop in performance.

x: . . . The starfish was introduced to the Great-Barrier-Reef by ocean currents.
e: [invasivespeciesinfo.gov] Ballast water is one of the major pathways for the introduction of nonindigenous marine species. . .
y: The Crown-of-thorns starfish is native to the Great Barrier Reef. . . The starfish was introduced to the Great-Barrier-Reef by ballast water.

Figure 8: **Disabling the agreement model leads to over-edits.** Here, the evidence e does not explicitly disagree with x, but without an agreement model to detect this, the edit model makes an unsupported change.

**Impact on downstream task performance** We have measured preservation using the metric defined in Section 2.2. However, another measure of preservation is whether the revised text can still be used to perform the task that it was originally generated for. Following EFEC, we quantitatively evaluate this on the short-answer tasks NQ and SQA, and we summarize the result in Figure 9.

For NQ, each original text x is a long-form response to a factoid question. To determine whether the revised text y still serves this purpose, we feed the factoid question and y back into PaLM and prompt it to extract a *short* answer from y. We find that RARR not only preserves the short answer accuracy but actually improves it by roughly 5%.

For SQA, each original text is a reasoning chain that helps to answer a yes/no question. We feed the SQA question and y back into PaLM and prompt it to output a yes/no answer, and evaluate answer accuracy. Here, we find that increasing attribution comes at a slight cost in downstream task performance: answer accuracy drops modestly for all revision models (up to 2.6%). We suspect that this may be due to noisy retrievals, which sometimes provide misleading evidence (exemplified in Figure 7d). Furthermore, even though revisions can address factoid errors in the passage (e.g., "Homer Simpson has 5 fingers" from Figure 7e), RARR currently does not try to modify subsequent reasoning steps which may no longer be logically entailed (e.g., "He only needs one hand to count to 5").
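For reference, the combined metric is the harmonic mean of the post-edit attribution score and the preservation score; the minimal check below reproduces the Full RARR entry for NQ in Table 2 from its two component scores.

```python
def f1_ap(attribution: float, preservation: float) -> float:
    """Harmonic mean of an attribution score and a preservation score."""
    return 2 * attribution * preservation / (attribution + preservation)

# Full RARR on PaLM/NQ in Table 2: post-edit Attr_auto = 54.9, Pres_Lev = 89.6
print(round(f1_ap(54.9, 89.6), 1))  # 68.1
```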
## 7 Conclusion Language models have developed increasingly good "procedural" knowledge of what should be discussed and how it should be presented, but often struggle to memorize "factoid" knowledge and produce unsubstantiated claims. We proposed RARR, a framework for revising such claims to make them attributable to the researched evidence. From experiments on text passages generated by different models on various domains, we showed that RARR can revise the passages to improve attribution while preserving other desirable properties such as writing style or structure. Furthermore, RARR sits on top of existing generation models without needing to re-design or re-train LMs. Major headroom still remains, as discussed in Section 6 and the Limitations section. We hope our analysis of RARR would help with developing new approaches for integrating attribution to LMs. ## 8 Limitations Limitations of our task definition Depending on the application, attribution and preservation may not deserve equal weight. For instance, if there are multiple acceptable options for the output, such as in a dialog system, we might trade-off preservation for attribution, similar to how LaMDA behaves in our experiments. Our evaluation metrics also do not measure all aspects of attribution. For instance, some sentences are self-evident and do not require attribution (e.g., "I agree.") but would be penalized in our evaluation. It is also necessary to note that linguistic assertions have varying scope: for example, there is a difference between *"Frozen is a scary movie"* and "I got scared watching Frozen" - while expressing a similar sentiment, the former makes a more general statement that many would disagree with, while the latter is scoped to the speaker's own experience. In some applications, one could even argue that the latter case does not require attribution, since the speaker is their own source-of-truth. In addition to varying scope, utterances can also make assertions with varying levels of directness. For example, according to standard linguistics, "John ate some of the cookies" yields the implicature that John did not eat all of the cookies, even though it is not logically entailed. This raises the question of which implicatures or implied assertions should be detected and attributed, which should be explored in future work. For more nuances, we refer to Rashkin et al. (2021). For preservation, we wish to explore other properties that should be preserved, such as discourse or logical coherence. Additionally, if the input text passage is completely misguided or flawed, it can be difficult to revise the text without significant changes, which would be heavily penalized by the current metrics. Limitations of our model While we aspire to improve attribution for arbitrary text, it is clear that RARR is not yet fully general. For example, the current implementation of RARR would not be well-prepared to edit poetry (where preserving rhyme matters) or long documents, primarily because we do not provide examples of such inputs in our few-shot LLM prompts. However, we do believe that future developers may be able to quickly adapt RARR to such tasks by simply changing the prompts. Second, RARR tends to preserve rather than delete claims that it cannot attribute. Some of these claims genuinely do not require attribution, but others are hallucination and should be removed. Judging whether a claim requires attribution can be subjective and challenging. 
Finally, our model is computationally costly, since it is based on prompting a large language model. One potential solution is to leverage recent synthetic data generation recipes to train a smaller model (Lee et al., 2021; Schick et al., 2022). ## 9 Ethical Considerations Partial attribution When RARR is not 100% successful in making text consistent with retrieved evidence, the revised text will be partially attributed. One could identify unattributed parts using either the automated attribution score (AttrAIS) or the relevance scores used to generate the attribution report (Section 3.3). Such information should be presented to avoid misleading readers into thinking that the entire revision is attributed. Evidence trustworthiness RARR seeks to improve attribution for the output of any generative model. However, even if RARR can attribute content to a particular source, the user must still consider whether the source itself is trustworthy. Even for sources that are traditionally considered "authoritative" (such as an encyclopedia), there may still be factual inaccuracies or biases. This work does not address the question of whether a source is trustworthy, or the related topic of misinformation. While we do not provide a means for judging trustworthiness, the design of RARR does allow for the research stage to restrict its search over a userspecified corpus, based on what the user deems trustworthy. Conflicting evidence There is also the possibility that some content may be simultaneously supported by certain sources, while contradicted by others. This can easily occur for content involving subjective or imprecise claims. The current implementation and evaluation for RARR does not explicitly address this issue - we adopted a "permissive" definition of attribution, where we consider content to be attributed if there exists any source that supports it. For some applications, a more restrictive definition that requires both existence of supporting sources and absence of contradicting sources would be needed. ## Acknowledgments We wish to thank Raphael Hoffmann, Slav Petrov, Dipanjan Das, Michael Collins, Iftekhar Naim, Kristina Toutanova, William Cohen, Sundeep Tirumalareddy, Samer Hassan, Quoc Le and Heng-Tze Cheng for their research mentorship, feedback and support. We are grateful to Hao Zhou and Petr Pilar for helping us experiment with LaMDA and motivating our dialog experiments. We also wish to thank Tal Schuster for pointing us to relevant work in the fact checking literature, and helping us reproduce it. We thank Vitaly Nikolaev, David Reitter and Roee Aharoni for helping us use AIS and auto-AIS. We also wish to thank Jianmo Ni and Honglei Zhuang for developing the query-evidence relevance model we use, Daniel Andor for developing the sentence decontextualization model we use, and Ran Tian for the initial prototype of CQGen. Finally, we thank Kathy Meier-Hellstern, Philip Parham and Diane Korngiebel for their thoughtful feedback on ethical considerations. ## Contributions Luyu Gao: Designed RARR's few-shot prompting strategies and implemented the first PaLM-based prototype. Analyzed results, and advised on the design of human and automatic evaluation. Zhuyun Dai: Proposed the evaluation setup of editing long-form generations from PaLM/LaMDA on various QA datasets. Hosted and mentored Luyu Gao (student researcher) in prototyping RARR. Implemented the final models, designed overall experiments, and obtained main results and ablations (together with Ice Pasupat). 
Contributed many parts of the writing. Ice Pasupat: Implemented the final models, designed overall experiments, and obtained main results and ablations (together with Zhuyun Dai). Automated experimental infrastructure, conducted error analyses, and oversaw many parts of the paper writing. Anthony Chen: Developed the automatic evaluation for attribution and preservation and worked with Arun Chaganty to design human evaluation. Developed the open-source implementation (GPT3 RARR), made improvements to prompts, and helped with writing. Arun Chaganty: Led and implemented all human evaluation. Proposed the two-dimensional attribution + preservation metric (together with Kelvin Guu). Advised on model design and contributed many parts of the writing. Yicheng Fan: Worked with Kelvin Guu to develop the first prototype of RARR. Proposed multiple retrieval strategies and implemented the EFEC baseline. Vincent Zhao: Co-hosted and mentored Luyu Gao (student researcher) in prototyping RARR. Enabled bulk inference for PaLM. Proposed the downstream task evaluation. Ni Lao: Research mentorship, advising and contributed many parts of the writing. Hongrae Lee: Research mentorship and advising. Helped integrate RARR with Google Search and evaluate LaMDA. Da-Cheng Juan: Research mentorship and early design discussions. Kelvin Guu: Proposed the original research-andrevise concept, implemented the first prototype, initiated the project and involved all collaborators. Implemented baselines (together with Yicheng Fan). Research mentorship, oversaw project coordination and paper writing. ## References Michael Ahn et al. 2022. Do as I can, not as I say: Grounding language in robotic affordances. *ArXiv*, abs/2204.01691. Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen G. Pulman, and Srinivas Chappidi. 2021. Open-domain question answering goes conversational via question rewriting. In NAACL. Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A real-world multi-domain dataset for evidencebased fact checking of claims. In *EMNLP*. Vidhisha Balachandran, Hannaneh Hajishirzi, William Cohen, and Yulia Tsvetkov. 2022. Correcting diverse factual errors in abstractive summarization via postediting and language model infilling. In *EMNLP*. Bernd Bohnet, Vinh Quang Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, Tom Kwiatkowski, Ji Ma, Jianmo Ni, Tal Schuster, William W. Cohen, Michael Collins, Dipanjan Das, Donald Metzler, Slav Petrov, and Kellie Webster. 2022. Attributed question answering: Evaluation and modeling for attributed large language models. ArXiv. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *EMNLP*. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *NeurIPS*. Mengyao Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit Cheung. 2020. 
Factual error correction for abstractive summarization models. In *EMNLP*. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In ACL. Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, and Michael Collins. 2021. Decontextualization: Making sentences stand-alone. *TACL*, 9:447–461. Aakanksha Chowdhery et al. 2022. PaLM: Scaling language modeling with pathways. *ArXiv*, abs/2204.02311. Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, and Siva Reddy. 2022. On the origin of hallucinations in conversational models: Is it the datasets or the models? In *NAACL*. Alexander R. Fabbri, Wojciech Kryscinski, Bryan McCann, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. TACL, 9:391–409. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In ACL. Angela Fan, Aleksandra Piktus, Fabio Petroni, Guillaume Wenzek, Marzieh Saeidi, Andreas Vlachos, Antoine Bordes, and Sebastian Riedel. 2020. Generating fact checking briefs. In *EMNLP*. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did Aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. *TACL*, 9:346–361. Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In *NAACL*. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrievalaugmented language model pre-training. In *ICML*. Shirley Anugrah Hayati, Raphaël Olivier, Pravalika Avvaru, Pengcheng Yin, Anthony Tomasic, and Graham Neubig. 2018. Retrieval-based neural code generation. In *EMNLP*. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In *ICLR*. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Y. Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In *Workshop on Documentgrounded Dialogue and Conversational Question Answering*. Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. Q 2: Evaluating factual consistency in knowledgegrounded dialogues via question generation and question answering. In *EMNLP*. Hayate Iso, Chao Qiao, and Hang Li. 2020. Fact-based text editing. In ACL. Georgi Karadzhov, Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño, and Ivan Koychev. 2017. Fully automated fact checking using external sources. In RANLP. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘ Lewis, Ledell Yu Wu, Sergey Edunov, Danqi Chen, and Wen tau Yih. 2020. Dense passage retrieval for open-domain question answering. In *EMNLP*. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In *ICLR*. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTaiL: A textual entailment dataset from science question answering. In *AAAI*. Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answering. In *NAACL*. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In EMNLP. Tom Kwiatkowski et al. 2019. 
Natural Questions: A benchmark for question answering research. *TACL*, 7:453–466. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In ACL. Kenton Lee, Kelvin Guu, Luheng He, Timothy Dozat, and Hyung Won Chung. 2021. Neural data augmentation via example extrapolation. *ArXiv*, abs/2102.01335. Vladimir I. Levenshtein. 1965. Binary codes capable of correcting deletions, insertions, and reversals. *Soviet* physics. Doklady, 10:707–710. Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledgeintensive NLP tasks. In *NeurIPS*. Robert L Logan IV, Alexandre Passos, Sameer Singh, and Ming-Wei Chang. 2022. FRUIT: Faithfully reflecting updated information in text. In *NAACL*. Shayne Longpre, Kartik Kumar Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In *EMNLP*. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan T. McDonald. 2020. On faithfulness and factuality in abstractive summarization. In ACL. Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy CampbellGillingham, Geoffrey Irving, and Nathan McAleese. 2022. Teaching language models to support answers with verified quotes. *ArXiv*, abs/2203.11147. Reiichiro Nakano, Jacob Hilton, S. Arun Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2021. WebGPT: Browser-assisted question-answering with human feedback. *ArXiv*, abs/2112.09332. Shashi Narayan, Joshua Maynez, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Fantine Huot, Dipanjan Das, and Mirella Lapata. 2022. Conditional generation with a question-answering blueprint. ArXiv, abs/2207.00397. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In *CoCo@NIPS*. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, and Yinfei Yang. 2021. Large dual encoders are generalizable retrievers. *ArXiv*, abs/2112.07899. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. *ArXiv*, abs/2112.00114. Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In *EMNLP*. Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Ouguz, Edouard Grave, Wen tau Yih, and Sebastian Riedel. 2021. The web is your oyster - knowledge-intensive nlp against a very large web corpus. *ArXiv*, abs/2112.09924. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *JMLR*, 21. 
Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. 2021. Measuring attribution in natural language generation models. *ArXiv*, abs/2112.12870. Adam Roberts, Colin Raffel, and Noam M. Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In *EMNLP*. Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, and Sebastian Riedel. 2022. PEER: A collaborative language model. *ArXiv*, abs/2208.11663. Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get your vitamin C! robust fact verification with contrastive evidence. In *NAACL*. Darsh J. Shah, Tal Schuster, and Regina Barzilay. 2020. Automatic fact-guided sentence modification. In AAAI. Richard Shin, Christopher H Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, and Benjamin Van Durme. 2021. Constrained language models yield few-shot semantic parsers. In *EMNLP*. Romal Thoppilan et al. 2022. LaMDA: Language models for dialog applications. *ArXiv*, abs/2201.08239. James Thorne and Andreas Vlachos. 2021. Evidencebased factual error correction. In ACL. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In *NAACL*. David Wadden, Kyle Lo, Lucy Lu Wang, Shanchuan Lin, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In *EMNLP*. Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In ACL. William Yang Wang. 2017. "Liar, liar pants on fire": A new benchmark dataset for fake news detection. In ACL. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *ArXiv*, abs/2201.11903. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL. Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Graham Neubig, and Satoshi Nakamura. 2018. Guiding neural machine translation with retrieved translation pieces. In *NAACL*. Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In *NAACL*. ## A Additional Experiments And Analysis datasets. We will release this open-source version of RARR that uses GPT-3 as the backbone. Results on GPT-3 passages Table 5 shows automated evaluation results on passages generated by GPT-3. The results follow the same trend as the results on PaLM and LaMDA passages. Challenging domains We report results on tasks where attribution was particularly hard, and significant future work is needed. We considered news article summaries produced by summarization models from SummEval (Fabbri et al., 2021) (e.g., "John Doe was left homeless when the storms hit Staten Island, New York . . . "). Results are shown in Table 6. First, we note that the before-edit auto-AIS scores for all models are low. These news article summaries are often about less widely known people and events, which is challenging for retrievers, leading to low attribution. For example, our query generator may ask *"where* does John Doe live" but get results for a different John Doe. 
EFEC and LaMDA also face this issue, but instead trade preservation for attribution and rewrite the text to a different topic. This result suggests that using web search with standard question generation methods may fail to capture important context from the input, and is not sufficient for the attribution task. We also considered long-form explanations generated by PaLM for the ELI5 dataset (Fan et al., 2019) (Table 6). ELI5 was collected from online forums, so many answers tend to have subjective opinions instead of specific entities and facts (e.g., "How do our brains interpret scary music? To me, scary music often sounds a little bit like a person . . . "), and are thus difficult to attribute. Sometimes the whole output is based on a false premise and needs to be completely rewritten, in which case RARR cannot satisfactorily edit due to our revision threshold (Section 3.2). Finally, we considered technical explanations to questions from the MMLU dataset (Hendrycks et al., 2021) which covers diverse subjects from social science, humanities, STEM, and others.4 An example input looks like "Every time you remove an edge from a complete graph, you divide it into two connected components. So, a complete graph with 13 vertices must have 12 connected components." Results are shown in Table 7. RARR im-4MMLU has questions from 57 subjects; we took 10 random question from each topic and generated answer explanations by prompting PALM 540B. Model variance The main experiments in Section 5 are based on a single run. We ran automated evaluation on 3 random runs of RARR, using PaLM outputs on NQ as input passages. The standard deviations of Attrauto, PresLev, and F1AP are 1.2, 0.5, and 1.0 respectively. Impact of the retriever choice We tried using Microsoft Bing in place of Google Search, with near identical results (< 1% difference). Impact of model scale Many components in RARR work by few-shot prompting PaLM, a large 540B parameter LM. To assess the benefit of LM scaling, we replaced PaLM 540B with a smaller 62B parameter PaLM. As shown in Table 4, we found that 540B outperforms 62B by a large margin, suggesting that RARR could potentially further improve with even more scaling. We also experimented with keeping the editor stage at 540B while shrinking the query generation stage to 64B - this yielded a relatively small performance drop, suggesting that model scaling is more important for the editor. Impact of model type Few-shot prompting has proven to be effective for many recent large language models. We try replacing the query generation model, agreement model, and edit model with GPT-3 text-davinci-003. The few-shot prompts were slightly tuned to fit the GPT-3 model. 
Table 4 shows the results, which are slightly better than RARR implemented with PaLM 540B on all three | PaLM outputs on NQ | PaLM outputs on SQA | LaMDA outputs on QReCC | | | | | | | | |-----------------------|-----------------------|--------------------------|------|-------------|---------|------|-------------|---------|------| | Model | Attrauto | PresLev | F1AP | Attrauto | PresLev | F1AP | Attrauto | PresLev | F1AP | | Full RARR | 45.6 → 54.9 | 89.6 | 68.1 | 37.6 → 45.1 | 89.9 | 60.0 | 18.8 → 29.4 | 80.2 | 43.1 | | qgen 62B, editor 540B | 45.9 → 54.6 | 87.8 | 67.4 | 37.0 → 40.5 | 90.0 | 55.9 | 15.8 → 28.4 | 76.1 | 41.4 | | qgen 62B, editor 62B | 45.9 → 49.9 | 91.0 | 64.4 | 37.0 → 38.3 | 93.0 | 54.2 | 15.8 → 21.9 | 71.6 | 33.5 | | GPT-3 | 44.3 → 55.0 | 90.6 | 68.5 | 38.6 → 46.6 | 89.3 | 61.2 | 18.3 → 28.6 | 89.8 | 43.4 | Table 4: **Additional ablation results.** We report the automatic metrics: Attrauto, PresLev, and harmonic mean between the two (F1AP). We show auto-AIS scores both before and after editing (before → edit), with respect to the attribution report A produced by the model. | GPT-3 outputs on NQ | GPT-3 outputs on SQA | GPT-3 outputs on QReCC | | | | | | | | |-----------------------|------------------------|--------------------------|------|-------------|---------|------|-------------|---------|------| | Model | Attrauto | PresLev | F1AP | Attrauto | PresLev | F1AP | Attrauto | PresLev | F1AP | | EFEC | 48.3 → 66.8 | 41.5 | 51.2 | 32.6 → 50.6 | 29.4 | 37.2 | 26.4 → 53.1 | 39.0 | 44.9 | | LaMDA | 36.2 → 61.1 | 45.9 | 52.4 | 22.3 → 27.3 | 43.3 | 33.5 | 19.0 → 33.9 | 28.3 | 30.8 | | PaLM RARR | 48.3 → 57.2 | 89.6 | 69.8 | 32.6 → 36.3 | 91.6 | 52.0 | 26.4 → 31.1 | 87.7 | 45.9 | | GPT-3 RARR | 48.0 → 59.3 | 91.8 | 72.0 | 34.7 → 37.0 | 91.8 | 52.8 | 23.2 → 25.3 | 89.7 | 39.5 | Table 5: **Results on passages from GPT-3.** We report the automatic metrics: Attrauto, PresLev, and harmonic mean between the two (F1AP). We show auto-AIS scores both before and after editing (before → edit), with respect to the attribution report A produced by the model. The results show similar trends as the results on passages from PaLM and LaMDA in Table 1. Table 6: **Results on ELI5 and SummEval.** | Model | Attrauto | PresLev | F1AP | |----------|-------------|-----------|--------| | SummEval | | | | | EFEC | 17.9 → 34.6 | 20.9 | 26.0 | | LaMDA | 10.3 → 28.8 | 28.1 | 28.4 | | RARR | 18.3 → 16.9 | 92.9 | 28.6 | | ELI5 | | | | | EFEC | 18.2 → 41.2 | 17.2 | 24.2 | | LaMDA | 19.9 → 40.1 | 31.2 | 35.1 | | RARR | 18.5 → 18.9 | 97.2 | 31.7 | Table 7: **RARR results on MMLU.** | RARR | | | | |-----------------|-------------|---------|------| | MMLU Category | Attrauto | PresLev | F1AP | | Humanities | 26.6 → 29.6 | 6.6 | 45.0 | | Social Sciences | 35.5 → 40.7 | 7.6 | 56.5 | | STEM | 37.8 → 41.5 | 7.2 | 57.4 | | Other | 36.9 → 41.7 | 7.1 | 57.6 | proves attribution of the explanations on all four categories of MMLU, although the increases are relatively small. We also found that RARR's performance is low on examples with mathematical reasoning, as these are beyond the capability of the edit model with our current prompt. ## B Details On Automated Evaluation Sentence splitting When computing the attribution score, we use spaCy en_core_web_sm v3.0.0a1 to segment the text passage into sentences. (More recent models gave similar results.) While each sentence may contain multiple claims that could be attributed independently, there is currently no linguistic consensus on what constitutes a claim. 
Instead of depending on a particular definition of claims, we use sentences as claims for simplicity and reproducibility. The same segmentation is also used for human evaluation. Decontextualization We decontextualize each sentence in the text passage before computing the attribution score. We use the model from Choi et al. (2021), which is a T5 model fine-tuned to map the input "[HEAD] [SEP] *context and passage* [start] *sentence* [end]" to the output "[OPCODE] *decontextualized sentence*", where the OPCODE can be "done" (success), "un" (unnecessary), or "imp" (impossible). We feed the passage's context (questions for NQ and SQA; dialog context for QRECC) along with the passage itself to the input. We use beam search with beam size 8 and discard any result whose number of tokens differ by more than 4. NLI model We obtained a newer version of the end-to-end NLI model from the authors of Honovich et al. (2022), which was trained on MNLI, SNLI, FEVER, PAWS, SciTail and VitaminC (Williams et al., 2018; Bowman et al., 2015; Thorne et al., 2018; Zhang et al., 2019; Khot et al., 2018; Schuster et al., 2021). The model is a T5 ![15_image_0.png](15_image_0.png) model fine-tuned to map the input "premise: *evidence* hypothesis: *claim sentence*" to either "1" (entailed) or "0" (not entailed). As suggested by the authors, we use the probability of producing "1" as the entailment score. Comparing human and automated evaluation We conducted correlation studies between human and automatic metrics and found strong Pearson correlation (attribution = 0.74; preservation = 0.62). We visualize the correlation between human and automated attribution scores on NQ and SQA in Figure 10. We found that the AIS scores from human correlate well with auto-AIS scores, with some bias for non-attributed sentences to be judged as attributed by auto-AIS. ## C Details On Human Evaluation To end-goal of RARR is to improve the *attribution* of generation models through post-editing while *preserving* the original intent. Attribution and preservation are both subjective properties that may change with even small edits. In the main paper, we present two automatic metrics to conveniently gauge these properties, but rely on a human evaluation as the gold standard. In this section, we describe how we conducted the human evaluation and what instructions and examples annotators were provided. Rater recruitment and training We engaged with a vendor supplier of full-time crowd workers to recruit human annotators for our task. Annotators were asked to review the instructions below and were provided direct feedback on their responses during the pilot annotation runs. We had 3 annotators rate each example in the pilot phase to measure inter-annotator agreement, and had a single rater annotate each example afterwards. ## C.1 Instructions: Overview In this task you will evaluate the quality of text generated by a system (**the "passage"**) based on how well it represents information from multiple pieces of **"evidence"**. We will be using two categories to evaluate the quality of the passage: **Attribution** and **Intent** Similarity. You will evaluate these categories in succession. In some tasks, you will only evaluate Attribution. The task interface will guide you through the flow; you can also see the overall task flow in the diagram below. Note: The passage may appear very fluent and well-formed, but still contain slight inaccuracies that are not easy to discern at first glance. Pay close attention to the text. 
Read it carefully as you would when proofreading. ## C.2 Instructions: Attribution In this step, you will evaluate how much of the passage is attributable to one or more pieces of evidence (Figure 11). In the interface, the passage of text and the context in which it was generated is shown on the left, and each piece of evidence is shown on the right. You will use all three (context, passage, evidence) to answer the following question for each sentence in the passage: *Is all of the information provided by* this sentence fully supported by at least one piece of evidence? Determining the information provided by the sentence. Three points are key when determining information provided by the sentence: 1. The context and the other sentences of the passage are often critical in understanding the information provided by the sentence. 2. The context should only be used to understand the information provided by the sentence. 3. The evidence should be completely ignored for this step. Consider the following example: Context: who plays doug williams in days of our lives ![16_image_0.png](16_image_0.png) Figure 11: Screenshot of interface to annotate attribution at the sentence level. annotators were asked to mark sentences as being fully attributable or not fully attributable by clicking each sentence, and rating each piece of evidence as being useful or not in helping determine attribution of the passage. Annotators were also presented with the context of the generation. ![16_image_1.png](16_image_1.png) Somewhat similar Passage A is about the same topic as Passage B, but differs substantially level of detail or style of presentation. They may differ in factual details. Very similar Passage A is about the same topic as Passage B, with a similar level of detail and style of presentation. They may differ in factual details. ![16_image_2.png](16_image_2.png) Figure 12: Screenshot of the preservation interface. Annotators are asked to read compare two passages and rate how similar the intent conveyed by the two passages is. Passage: In the American daytime drama Days of Our Lives, Doug Williams and Julie Williams are portrayed by Bill Hayes and Susan Seaforth Hayes. In the above example, the meaning of the passage is clear even without seeing the query. But consider another example: Context: who plays doug williams in days of our lives Passage: he is played by Bill Hayes Passage (interpreted): Doug Williams is played by Bill Hayes in days of our lives In this case the pronoun "he" depends on the context, but it is clear that the intended meaning of the passage can be reasonably interpreted as "Doug Williams is played by Bill Hayes in days of our lives". This interpretation is the "information provided by the passage". Pronouns such as he/she/it/they etc. are one case where context is needed to figure out the intended meaning of the system response. Here's another example (given with paraphrases of the information highlighted below): Context: when is the last time the us lost basketball at the olympics Passage: The last time they lost was in 2004, when Argentina defeated the US 89–79. Most recently, they won gold in 2016. Passage (interpreted): The last time the United States lost basketball at the Olympics was in 2004. 
The context should only be used to determine the information provided by the passage; at times, the passage may be about a slightly different topic than the context, for example: Context: the south west wind blows across nigeria between Passage: The Harmattan is a dry and dusty northeasterly trade wind that blows across West Africa from December to March. It is very dusty because it blows across the Sahara. Here, the passage talks about a *northeasterly* wind, while the context asks about a *south-west* wind, but the passage can be fully understood. In general, use your best judgment to determine the information provided by the passage. If the passage is hard to understand and you are unsure what the intended meaning of the passage is, *mark* the sentences as not attributed and enter a comment with an explanation. As one example, take the following: Context: how many NBA championships did Michael Jordan win? Passage: it is the best team in the NBA Determining if the information accurately represents the evidence. Two points are key when determining whether the information accurately represents the evidence: When interpreting a piece of evidence, use only the title and text of that specific evidence. Completely ignore the context, passage and all other evidence. Check all the information in a sentence. If only some information is supported by the evidence, mark the sentence as not fully attributable. Consider the following example: Context: when did reba mcentire record ![17_image_0.png](17_image_0.png) back to god ## Passage: Back To God Was Released By Mcentire In 2017. Evidence: "Back to God" is a song performed by American singer, Reba McEntire. It was released as the second single from her 2017 album, Sing it Now: Songs of Faith & Hope, on January 20, 2017. In the above example, it is reasonable to conclude that the evidence supports all the information in the passage, and we can mark the passage as being fully attributable. But consider another example: Context: who won the womens 2017 ncaa basketball tournament Passage: South Carolina Gamecocks won the 2017 NCAA Women's Division I Basketball Tournament. Evidence: The South Carolina Gamecocks defeated the Mississippi State Bulldogs, 67–55, to claim their first-ever national championship. In this case, while the evidence also mentions the "South Carolina Gamecocks", it isn't clear that the national championship being mentioned is indeed the 2017 NCAA Women's Division I Basketball Tournament. The passage should be marked as not attributable. Finally, when the passage contains multiple sentences, evaluate whether each sentence can be fully attributed to one or more pieces of evidence—it is possible for one sentence to be attributed while another is not. For example: Context: who won the womens 2017 ncaa basketball tournament Passage: South Carolina Gamecocks won the 2017 NCAA Women's Division I Basketball Tournament. The final score is 67-55. The championship game was held in Dallas, Texas. Evidence 1: The South Carolina Gamecocks defeated the Mississippi State Bulldogs, 67–55, to claim their first-ever national championship. Evidence 2: The 2017 NCAA Women's Division I Basketball Tournament was played from Friday, March 17 to Sunday, April 2, 2017, with the Final Four played at the American Airlines Center in Dallas, Texas on March 31 and April 2. 
The first two sentences cannot be attributed to either evidence for the same reason as the previous example, but the last sentence is fully supported by Evidence 2 and should be marked as attributed. In general, you should use your best judgment in determining whether all of the information provided by the passage is "an accurate representation of information in at least one evidence". See Table 8 for additional examples. We give the following final notes of guidance: - **Marking evidence as useful.** When reviewing each piece of evidence, mark it as useful if it helps you judge the attributability of any sentence, and mark it not useful if not. In the above example Evidence 1 is not useful because it didn't contain enough context to actually help you assess if the passage was attributable, but Evidence 2 was useful. - **Contradicting evidence.** Mark a sentence as being attributed if any piece of evidence supports it: if two pieces of evidence contradict each other, but one of them supports the passage, mark the sentence as fully attributable. - **More on the concept of "accurate representation".** We take as inspiration the journalist's conception of "accurate representation". For example, take this excerpt on Accuracy in the NPR Ethics Handbook: "When quoting or paraphrasing anyone . . . consider whether the source would agree with the interpretation..." In other words, if you had written the source document, consider whether you would view the system response as an accurate representation of information in that source document. ## C.3 Instructions: Intent Similarity In this step, you will evaluate how much similar the passage is to another passage (Figure 12). In the interface, the passage A and passage B are both text generated by a system—given the same context in which it was generated. You will use all three (context, passage A, passage B) to answer the following question: How similar is the intent expressed by Passage A and Passage B? Please ignore any differences in details. Two points are key when determining whether the two passages convey the same intent: 1. Judge the similarity solely based on the similarity in the type and quantity of information provided by each passage. 2. Ignore any differences in factual details between the two passages. Consider the following examples: Context: who pays medical bills in great britain where does the money come from to pay these bills Passage A: Britain's National Health Service (NHS) is paid for through general taxation and national insurance. In 2017/18, the NHS budget was £176.5bn. Passage B: Britain's National Health Service (NHS) is paid for through general taxation. In 2017/18, the NHS budget was £118bn. Rating: Very similar. Passage A is about the same topic as Passage B, with a similar level of detail and style of presentation. They may differ in factual details. The above example should be rated "very similar" because both passages include information about (1) how the NHS is paid for, and (2) what its budget | Context + Passage | Evidences | Notes | | | | |----------------------------------------------------------|------------------|---------|----------|----|----------| | Context: who played morticia in the addams family tv show The Addams Family is an American animated sitcom TV series. It was first aired on NBC on September 24, 1973. Carolyn Jones played the role of Morticia. | 1/ The Addams Family (1973 TV series): The Addams Family is an American animated sitcom adaptation of the Charles Addams comic. 
The series was produced in 1973 and was rebroadcast the following season. 2/ The Addams Family (TV Series 1964–1966): When The Addams Family went off the air in 1966, network executives in charge of children's programming for NBC brought them back in 1973 for their own Saturday Morning cartoon show featuring the voices of Carolyn Jones from the original series. | While the evidence supports the show being aired in 1973, it doesn't specify the exact date (September 24, 1973). Similarly, while the evidence mentions Carolyn Jones as being a voice actor, it doesn't say she played the role of Mortica. | | | | | Context: | when will the la | | | | | | sagrada familia be finished The La Sagrada Familia is a large Roman Catholic church in Barcelona. It is designed by Antoni Gaudi. It started construction in 1882, and the construction is still going on. The estimated date to finish is 2026. | 1/ Sagrada Família - Wikipedia: The Basílica i Temple Expiatori de la Sagrada Família is a church in the Eixample district of Barcelona, Catalonia, Spain, and is currently the largest unfinished Roman Catholic church. 2/ Find Out Sagrada Familia's Expected Finish Date: Visiting the breathtaking Sagrada Familia today also means witnessing the slow progress towards the completion of the project. Sagrada Familia is now expected to be completed in 2026, the centenary of Gaudi's death. It's a reasonable inference that La Sagrada Familia is the same as Sagrada Familia, even though the names differ slightly. | While | Evidence | 2 | mentions | | Gaudi, it isn't clear this is a reference to Antoni Gaudi and further doesn't say that he designed the church. | | | | | | | Table 8: Additional examples for annotating attribution. | | | | | | in 2017/18 was, though they differ in their actual answers to these questions. Context: who is the owner of reading football club Passage A: Reading's owner is Yongge Dai. Yongge Dai is also the president of Chinese company Dai Yongge Real Estate. Yongge's son, Dai Xiu Li, is Reading's vice-president. Passage B: Reading's owner is Dai Yongge. Yongge's brother and sister pair behind the Reading FC takeover—Dai Yongge and Dai Xiu Li—has made their fortune through a massive property empire. Mr Dai, has been the chairman of Renhe Commercial since 1999, which is an organisation owned by his sister behind a vast network of underground shopping centres in China. Rating: Somewhat similar. Passage A is about the same topic as Passage B, but differs substantially in level of detail or style of presentation. They may differ in factual details. The above example should be rated "somewhat similar" because both passages are still about the same topic—Reading's owner— but differ substantially in the information they discuss: Passage A includes information about (1a) who Reading's owner is, (2a) which company they are the president of and (3a) who their vice-president is. In contrast, while Passage B shares information about (1a), it also includes information about (2b) how the Reading owner made their fortune, (3b) their company position and how long they held it for and (4b) what the company also owns. Context: what is the numbers of total elected member of indian parliment in present time Passage A: The total number of elected members of the Lok Sabha is 543. Passage B: The total number of elected members of the Rajya Sabha is 238. Rating: Not at all similar. Passage A is about a significantly different topic than Passage B. 
Even though the passages look very similar, the above example should be rated "not at all similar" because the two passages are about significantly different topics: "the Lok Sabha" vs "the Rajya Sabha". ## D Details On The Model Few-shot prompting with LLMs We implement many sub-tasks within RARR using *fewshot prompting* of LLMs (also known as in-context learning (Brown et al., 2020)) as follows: 1. For each sub-task, we manually author a small number of training examples: (inputj, outputj) for j = 1*, . . . , J*, where J ranges between 5 and 10 and where both the input and output are strings. 2. We form the following prompt: input1 ⋄ output1⊕input2 ⋄output2⊕*. . .*⊕inputJ ⋄ outputJ ⊕ new_input, where ⋄ denotes a newline character and ⊕ denotes a double newline character. 3. To perform inference on a new input, we condition the LLM on the prompt and sample continuations of the prompt up until the next double newline character. All of our prompts are included in Figures 13, 14, and 15. The contextual version used for QReCC are in Figures 16, 17, and 18. Model statistics We implemented most parts of RARR with the PALM model which has 540B parameters. We prompted PALM without any training or finetuning. We used a TPU v2-128 to run inference with PALM. We manually wrote our prompts by eye-balling quality on a dozen of examples from a separate validation set. We tune our hyperparameters on the validation set as well. We used sampling temperature 0.7 for all generation tasks. For each input text, we sample 3 question generations, and for each question we retrieve 5 results. For agreement gate and editing, we only sample 1 generation. We reject an editing if the edit distance is more than 50 characters or more than half of the original text length. ## E Details On The Dataset As explained in Section 5.1, we generated 150 development and 150 test passages for each of the 6 combinations of dataset and model: (NQ, PaLM), (SQA, PaLM), (QReCC, LaMDA), (NQ, GPT-3), (SQA, GPT-3), (QReCC, GPT-3). Figures 19, 20, 21, and 22 are the few-shot prompts used to generate the passages. Following the corresponding datasets, all generated passages are in English. The authors have manually looked through most of the data and found no personal identifiers. ![21_image_0.png](21_image_0.png) 2 9 13 18 24 28 33 36 _____ 1 [web] I will check some things you said. ![22_image_1.png](22_image_1.png) 2 3 (1) You said: Your nose switches back and forth between nostrils. When you sleep, you switch about every 45 minutes. This ![22_image_0.png](22_image_0.png) 5 I found this article: Although we don't usually notice it, during the nasal cycle one nostril becomes congested and thus contributes less to airflow, while the other becomes decongested. On average, the congestion pattern switches about every 7 This disagrees with what you said. 8 9 (2) You said: The Little House books were written by Laura Ingalls Wilder. The books were published by HarperCollins. 10 I checked: Who published the Little House books? 11 I found this article: These are the books that started it all - the stories that captured the hearts and imaginations of children and young adults worldwide. Written by Laura Ingalls Wilder and published by HarperCollins, these beloved books remain a favorite to this day. 12 The Little House books were published by HarperCollins. 13 This agrees with what you said. 14 15 (3) You said: The Stanford Prison Experiment was conducted in the basement of Jordan Hall, Stanford's psychology building. 
16 I checked: Where was Stanford Prison Experiment conducted? 17 I found this article: Carried out August 15-21, 1971 in the basement of Jordan Hall, the Stanford Prison Experiment set out to examine the psychological effects of authority and powerlessness in a prison environment. 18 The Stanford Prison Experiment was conducted in Jordan Hall. 19 This agrees with what you said. 20 21 (4) You said: Social work is a profession that is based in the philosophical tradition of humanism. It is an intellectual discipline that has its roots in the 1800s. 22 I checked: When did social work have its roots? 23 I found this article: The Emergence and Growth of the Social work Profession<br><br> Social work's roots were planted in the 1880s, when charity organization societies (COS) were created to organize municipal voluntary relief associations and settlement houses were established. 24 Social work has its roots in the 1880s, not 1800s. 25 This disagrees with what you said. 26 27 (5) You said: The Havel-Hakimi algorithm is an algorithm for converting the adjacency matrix of a graph into its adjacency list. It is named after Vaclav Havel and Samih Hakimi. 28 I checked: What is the Havel-Hakimi algorithm? 29 I found this article: The Havel-Hakimi algorithm constructs a special solution if a simple graph for the given degree sequence exists, or proves that one cannot find a positive answer. This construction is based on a recursive algorithm. The algorithm was published by Havel (1955), and later by Hakimi (1962). 30 Havel-Hakimi algorithm is for constructing a special solution if a simple graph for the given degree sequence exists, or proving that one cannot find a positive answer, not converting the adjacency matrix of a graph into its adjacency list. 31 This disagrees with what you said. 32 33 (6) You said: "Time of My Life" is a song by American singer-songwriter Bill Medley from the soundtrack of the 1987 film Dirty Dancing. The song was produced by Michael Lloyd. 34 I checked: Who was the producer of "(I've Had) The Time of My Life"? 35 I found this article: On September 8, 2010, the original demo of this song, along with a remix by producer Michael Lloyd, was released as digital files in an effort to raise money for the Patrick Swayze Pancreas Cancer Resarch Foundation at Stanford University. 36 "Time of My Life" was produced by Michael Lloyd. 37 This agrees with what you said. 38 39 (7) You said: Kelvin Hopins was suspended from the Labor Party because he had allegedly sexually harassed and behaved inappropriately towards a Labour Party activist, Ava Etemadzadeh. 40 I checked: Why was Kelvin Hopins suspeneded from the Labor Party? 41 I found this article: A former Labour MP has left the party before an inquiry into sexual harassment allegations against him was able to be concluded, the party has confirmed. Kelvin Hopkins was accused in 2017 of inappropriate physical contact and was suspended by the Labour party pending an investigation.This agrees with what you said. 42 Kelvin Hopins was suspended because he had allegedly sexually harassed and behaved inappropriately towards a Labour Party activist, Ava Etemadzadeh. 43 This agrees with what you said. 44 45 (8) You said: In the battles of Lexington and Concord, the British side was led by General Thomas Smith. 46 I checked: Who led the British side in the battle of Lexington and Concord? 47 I found this article: Interesting Facts about the Battles of Lexington and Concord. The British were led by Lieutenant Colonel Francis Smith. 
There were 700 British regulars. 48 The British side was led by Lieutenant Colonel Francis Smith, not General Thomas Hall. 49 This disagrees with what you said. 50 51 (9) You said: **{text}** 52 I checked: **{query}** 53 I found this article: **{evidence}** 54 _____ ![22_image_2.png](22_image_2.png) ![23_image_0.png](23_image_0.png) 2 8 14 20 26 ![23_image_1.png](23_image_1.png) 28 I checked: What is the Havel-Hakimi algorithm? 32 ![23_image_2.png](23_image_2.png) 38 39 (7) You said: Phoenix Market City Pune is located on 21 acres of prime property in Pune. It is spread across four levels with approximately 1.4 million square feet of built-up space. The mall is owned and operated by Phoenix Mills Limited. 40 I checked: What is the area of Phoenix Market City in Pune? 41 I found this article: Phoenix Market City was opened in January 2013 and has the distinction of being the largest mall in the city of Pune, with the area of 3.4 million square feet. It is located in the Viman Nagar area of Pune. 42 This suggests the 1.4 million square feet of built-up space in your statment is wrong. 43 My fix: Phoenix Market City Pune is located on 21 acres of prime property in Pune. It is spread across four levels with approximately 3.4 million square feet of built-up space. The mall is owned and operated by Phoenix Mills Limited. 44 45 (8) You said: **{text}** 46 I checked: **{query}** 47 I found this article: **{evidence}** 48 This suggests _____ Figure 16: Contextual version of the query generation prompt. The prompt works well for dialog contexts from QReCC even though the few-shot examples are not formatted as such. ![24_image_0.png](24_image_0.png) 2 9 15 20 26 32 41 ![24_image_1.png](24_image_1.png) 45 _____ Figure 17: Contextual version of the agreement model prompt. 1 [web] I will check some things you said. ![25_image_0.png](25_image_0.png) ![25_image_1.png](25_image_1.png) ![25_image_2.png](25_image_2.png) ![25_image_3.png](25_image_3.png) 2 3 (1) Context: Your nose switches back and forth between nostrils. It's called the nasal cycle. This is to prevent a buildup of mucus. 4 You said: When you sleep, you switch about every 45 minutes. 5 I checked: How often do your nostrils switch? 6 I found this article: Although we don't usually notice it, during the nasal cycle one nostril becomes congested and thus contributes less to airflow, while the other becomes decongested. On average, the congestion pattern switches about every 2 hours, according to a small 2016 study published in the journal PLOS One. 7 Your nose's switching time is about every 2 hours, not 45 minutes. 8 This disagrees with what you said. 9 10 (2) Context: The Little House books is a series of American children's novels. 11 You said: The books were published by HarperCollins. 12 I checked: Who published the Little House books? 13 I found this article: These are the books that started it all - the stories that captured the hearts and imaginations of children and young adults worldwide. Written by Laura Ingalls Wilder and published by HarperCollins, these beloved books remain a favorite to this day. 14 The Little House books were published by HarperCollins. 15 This agrees with what you said. 16 17 (3) Context: The Stanford Prison Experiment is a psychological study to observe the behaviors of conflict and violence that happen between inmates and prisoners in real prisons. 18 You said: It was conducted in the basement of Jordan Hall, Stanford's psychology building. 19 I checked: Where was Stanford Prison Experiment conducted? 
20 I found this article: Carried out August 15-21, 1971 in the basement of Jordan Hall, the Stanford Prison Experiment set out to examine the psychological effects of authority and powerlessness in a prison environment. 21 The Stanford Prison Experiment was conducted in Jordan Hall. 22 This agrees with what you said. 23 24 (4) Context: Social work is a profession that is based in the philosophical tradition of humanism. 25 You said: It is an intellectual discipline that has its roots in the 1800s. 26 I checked: When did social work have its roots? 27 I found this article: The Emergence and Growth of the Social work Profession<br><br> Social work's roots were planted in the 1880s, when charity organization societies (COS) were created to organize municipal voluntary relief associations and settlement houses were established. 28 Social work has its roots in the 1880s, not 1800s. 29 This disagrees with what you said. 30 31 (5) Context: The Havel-Hakimi algorithm is named after Vaclav Havel and Samih Hakimi. 32 You said: It is an algorithm for converting the adjacency matrix of a graph into its adjacency list. 33 I checked: What is the Havel-Hakimi algorithm? 34 I found this article: The Havel-Hakimi algorithm constructs a special solution if a simple graph for the given degree sequence exists, or proves that one cannot find a positive answer. This construction is based on a recursive algorithm. The algorithm was published by Havel (1955), and later by Hakimi (1962). 35 Havel-Hakimi algorithm is for constructing a special solution if a simple graph for the given degree sequence exists, or proving that one cannot find a positive answer, not converting the adjacency matrix of a graph into its adjacency list. 36 This disagrees with what you said. 37 38 (6) Context: "Time of My Life" is a song by American singer-songwriter Bill Medley from the soundtrack of the 1987 film Dirty Dancing. 39 You said: The song was produced by Michael Lloyd in the same year. 40 I checked: Who was the producer of "(I've Had) The Time of My Life"? 41 I found this article: On September 8, 2010, the original demo of this song, along with a remix by producer Michael Lloyd, was released as digital files in an effort to raise money for the Patrick Swayze Pancreas Cancer Resarch Foundation at Stanford University. 42 The song "Time of My Life" was produced by Michael Lloyd. 43 This agrees with what you said. 44 45 (7) Context: Super Mario Sunshine was released on GameCube in 2002. In the game, Mario uses a tool strapped to his back called FLUDD. 46 You said: FLUDD stands for Functional Language in a Unified Design Discipline. It can be used to spray water at objects or enemies. This allows Mario to change his movements, kill enemies, or clean up hazards on the floor. 47 I checked: What does FLUDD stands for in Super Mario Sunshine? 48 I found this article: The Flash Liquidizer Ultra Dousing Device, abbreviated and better known as FLUDD or F.L.U.D.D., is a multipurpose water pack from Super Mario Sunshine invented by Professor Elvin Gadd, indicated by the Gadd Science, Incorporated logo at the base of its nozzle exclusively during the cutscene at Pinna Park. 49 In Super Mario Sunshine, FLUDD stands for the Flash Liquidizer Ultra Dousing Device, not Functional Language in a Unified Design Discipline. 50 This disagrees with what you said. 51 52 (8) Context: **{context}** 53 You said: **{text}** 54 I checked: **{query}** 55 I found this article: **{evidence}** 56 _____ Figure 18: Contextual version of the revision model prompt. 
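The prompts above (and those in the remaining figures) are assembled with the few-shot recipe described in Appendix D: each input/output pair is joined by a single newline, examples are separated by double newlines, and the new input is appended at the end so the LLM completes its output. The following is a minimal sketch of that assembly and of truncating a sampled continuation at the next double newline; the toy example pairs below are placeholders, not the actual prompt contents shown in the figures.

```python
# Minimal sketch of the Appendix D prompt assembly: "input \n output" blocks
# joined by blank lines, with the new input appended so the LLM completes it.
def build_prompt(examples, new_input):
    blocks = [f"{inp}\n{out}" for inp, out in examples]
    return "\n\n".join(blocks + [new_input])


def parse_continuation(generated_text):
    """Keep a sampled continuation only up to the next double newline."""
    return generated_text.split("\n\n", 1)[0].strip()


# Toy placeholders, not the real few-shot examples from the figures.
examples = [
    ("Question: What color is the sky?", "Answer: Blue."),
    ("Question: What is 2 + 2?", "Answer: 4."),
]
prompt = build_prompt(examples, "Question: What does the Moon orbit?")
```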
![26_image_0.png](26_image_0.png) ![26_image_3.png](26_image_3.png) ![26_image_4.png](26_image_4.png) ![26_image_1.png](26_image_1.png) ![26_image_2.png](26_image_2.png) 1 [web] I will fix some things you said. 2 3 (1) Context: Your nose switches back and forth between nostrils. It's called the nasal cycle. This is to prevent a buildup of mucus. 4 You said: When you sleep, you switch about every 45 minutes. 5 I checked: How often do your nostrils switch? 6 I found this article: Although we don't usually notice it, during the nasal cycle one nostril becomes congested and thus contributes less to airflow, while the other becomes decongested. On average, the congestion pattern switches about every 2 hours, according to a small 2016 study published in the journal PLOS One. 7 This suggests 45 minutes switch time in your statement is wrong. 8 My fix: When you sleep, you switch about every 2 hours. 9 10 (2) Context: The Little House books is a series of American children's novels. 11 You said: The books were published by Amberjack Publishing. 12 I checked: Who published the Little House books? 13 I found this article: These are the books that started it all - the stories that captured the hearts and imaginations of children and young adults worldwide. Written by Laura Ingalls Wilder and published by HarperCollins, these beloved books remain a favorite to this day. 14 This suggests Amberjack Publishing in your statement is wrong. 15 My fix: The books were published by HarperCollins. 16 17 (3) Context: The Stanford Prison Experiment is a psychological study to observe the behaviors of conflict and violence that happen between inmates and prisoners in real prisons. 18 You said: It was conducted in the basement of Encina Hall, Stanford's psychology building. 19 I checked: where was Stanford Prison Experiment conducted. 20 I found this article: Carried out August 15-21, 1971 in the basement of Jordan Hall, the Stanford Prison Experiment set out to examine the psychological effects of authority and powerlessness in a prison environment. 21 This suggests Encina Hall in your statement is wrong. 22 My fix: It was conducted in the basement of Jordan Hall, Stanford's psychology building. 23 24 (4) Context: The Havel-Hakimi algorithm is named after Vaclav Havel and Samih Hakimi. 25 You said: It is an algorithm for converting the adjacency matrix of a graph into its adjacency list. 26 I checked: What is the Havel-Hakimi algorithm? 27 I found this article: The Havel-Hakimi algorithm constructs a special solution if a simple graph for the given degree sequence exists, or proves that one cannot find a positive answer. This construction is based on a recursive algorithm. The algorithm was published by Havel (1955), and later by Hakimi (1962). 28 This suggests the Havel-Hakimi algorithm's functionality in your statement is wrong. 29 My fix: It constructs a special solution if a simple graph for the given degree sequence exists, or proves that one cannot find a positive answer. 30 31 (5) Context: "Time of My Life" is a song by American singer-songwriter Bill Medley from the soundtrack of the 1987 film Dirty Dancing. 32 You said: The song was produced by Phil Ramone in the same year. 33 I checked: Who was the producer of "(I've Had) The Time of My Life"? 34 I found this article: On September 8, 2010, the original demo of this song, along with a remix by producer Michael Lloyd, was released as digital files in an effort to raise money for the Patrick Swayze Pancreas Cancer Resarch Foundation at Stanford University. 
35 This suggests "Time of My Life" producer name in your statement is wrong. 36 My fix: The song was produced by Michael Lloyd in the same year. 37 38 (6) Context: Phoenix Market City Pune is located on 21 acres of prime property in Pune. The mall is owned and operated by Phoenix Mills Limited. 39 You said: It is spread across four levels with approximately 1.4 million square feet of built-up space. 40 I checked: What is the area of Phoenix Market City in Pune? 41 I found this article: Phoenix Market City was opened in January 2013 and has the distinction of being the largest mall in the city of Pune, with the area of 3.4 million square feet. It is located in the Viman Nagar area of Pune. 42 This suggests the 1.4 million square feet of built-up space in your statment is wrong. 43 My fix: It is spread across four levels with approximately 3.4 million square feet of built-up space. 44 45 (7) Context: **{context}** 46 You said: **{text}** 47 I checked: **{query}** 48 I found this article: **{evidence}** 49 This suggests _____ Figure 19: The PaLM prompt for generating long-form answers to questions from NQ and SQA. ![27_image_5.png](27_image_5.png) 1 [web] I will think step by step and answer your question. 2 ![27_image_0.png](27_image_0.png) 5 Answer: Yes 6 ![27_image_1.png](27_image_1.png) 10 14 18 22 26 ![27_image_2.png](27_image_2.png) 30 ![27_image_3.png](27_image_3.png) 34 ![27_image_4.png](27_image_4.png) 37 Answer: 5 38 39 Question: **{question}** 40 Explanation: _____ 1 I will think step by step and answer your question. 2 3 1. Question: Is growing seedless cucumber good for a gardener with entomophobia? 4 2. Explanation: Entomophobia is a fear of insects. Plants need insects to pollinate them. Seedless fruits such as seedless cucumbers do not require pollination so seedless fruits do not require insects. This is good for people with entomophobia. 5 3. Answer: Yes. 6 ![28_image_0.png](28_image_0.png) 9 3. Answer: Winston Churchil and Lord Linlithgow. 10 ![28_image_1.png](28_image_1.png) 13 3. Answer: March 23, 2018. 14 ![28_image_2.png](28_image_2.png) 18 22 ![28_image_3.png](28_image_3.png) 26 ![28_image_4.png](28_image_4.png) 30 ![28_image_5.png](28_image_5.png) 34 ![28_image_6.png](28_image_6.png) 38 ![28_image_7.png](28_image_7.png) 42 44 2. Explanation: _____ Figure 20: The GPT-3 prompt for generating long-form answers to questions from NQ and SQA. ![28_image_8.png](28_image_8.png) 1 Hi, I will think step by step and answer your question. 2 Is growing seedless cucumber good for a gardener with entomophobia? 3 Yes. Entomophobia is a fear of insects. Plants need insects to pollinate them. Seedless cucumber fruit does not require ![29_image_0.png](29_image_0.png) Figure 21: The LaMDA prompt for generating answers to questions from QReCC. Each line is a conversation turn. The dialog context from QReCC contains rounds of questions and answers (Q1, A1, Q2, A2*, . . . , Q*k). 1 I will think step by step and answer your question. 2 ![29_image_1.png](29_image_1.png) 5 8 11 18 19 {Q1} 20 {A1} 21 ... 22 {Qk} 23 _____ ## Acl 2023 Responsible Nlp Checklist A For Every Submission: ✓ A1. Did you describe the limitations of your work? 8 ✓ A2. Did you discuss any potential risks of your work? 8,9 ✓ A3. Do the abstract and introduction summarize the paper's main claims? 1 ✗ A4. Have you used AI writing assistants when working on this paper? Left blank. ## B ✓ **Did You Use Or Create Scientific Artifacts?** 2,3,5 ✓ B1. Did you cite the creators of artifacts you used? 2,3,5 ✓ B2. 
Did you discuss the license or terms for use and / or distribution of any artifacts? E ✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 5 ✓ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? E ✓ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 5,E ✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 5 ## C ✓ **Did You Run Computational Experiments?** 5,6 ✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 5,D The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. ✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? 5,D ✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 5 ✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? 5,B,D D ✓ **Did you use human annotators (e.g., crowdworkers) or research with human participants?** 5,C ✓ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? C ✓ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? C ✓ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? C D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank. ✓ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? C
li-etal-2023-cwseg
CWSeg: An Efficient and General Approach to Chinese Word Segmentation
https://aclanthology.org/2023.acl-industry.1
In this work, we report our efforts in advancing Chinese Word Segmentation for the purpose of rapid deployment in different applications. Pre-trained language model (PLM) based segmentation methods have achieved state-of-the-art (SOTA) performance, whereas this paradigm also poses challenges for deployment, including the balance between performance and cost, segmentation ambiguity due to domain diversity and vague word boundaries, and multi-grained segmentation. In this context, we propose a simple yet effective approach, namely CWSeg, to augment PLM-based schemes by developing cohort training and versatile decoding strategies. Extensive experiments on benchmark datasets demonstrate the efficiency and generalization of our approach. The corresponding segmentation system is also implemented for practical usage, and a demo is recorded.
# CWSeg: An Efficient and General Approach to Chinese Word Segmentation

Dedong Li+ 1, Rui Zhao 1, Fei Tan∗ 1

1 SenseTime Research

ddlecnu@gmail.com {zhaorui, tanfei}@sensetime.com

+Work was done at SenseTime Research. ∗Corresponding author.

## Abstract

In this work, we report our efforts in advancing Chinese Word Segmentation for the purpose of rapid deployment in different applications. Pre-trained language model (PLM) based segmentation methods have achieved state-of-the-art (SOTA) performance, whereas this paradigm also poses challenges for deployment, including the balance between performance and cost, segmentation ambiguity due to domain diversity and vague word boundaries, and multi-grained segmentation. In this context, we propose a simple yet effective approach, namely CWSeg, to augment PLM-based schemes by developing cohort training and versatile decoding strategies. Extensive experiments on benchmark datasets demonstrate the efficiency and generalization of our approach. The corresponding segmentation system is also implemented for practical usage, and a demo is recorded.

## 1 Introduction

Chinese word segmentation (CWS) is a preliminary but essential procedure for Chinese language processing tasks, and it has been applied in various scenarios (Yang et al., 2018; Zhang et al., 2019; Cui et al., 2020; Han et al., 2020; Zhang et al., 2020; Tan et al., 2020; Lu et al., 2023). Especially for fast, complete recall and accurate semantic understanding in search and recommendation scenarios (Bao et al., 2022), CWS is still indispensable. In addition, experiments on Chinese LLaMA and Alpaca show that a model whose vocabulary is expanded through word segmentation greatly improves the token throughput when processing Chinese text compared with the original model (Cui et al., 2023).

Recent deep learning methods have achieved remarkable results on publicly available datasets in this regard (Qiu et al., 2019). Also, the pre-trained language model (PLM) (Liu et al., 2019) further emerges as the paramount foundation of text representation for CWS, as seen in other tasks (Tian et al., 2020b; Huang et al., 2020a; Maimaiti et al., 2021). Current PLM-based approaches, however, pose three hurdles to production deployment that we need to cross: (1) one dilemma is the trade-off between model performance and inference speed; (2) lexical diversity and the domain gap also jeopardize the fast deployment of a generic model to customized scenarios (Maimaiti et al., 2021); (3) PLM-based schemes with single granularity are less likely to meet multi-granularity demands of practical relevance.

To tackle these issues, we propose an efficient and general approach to augmenting PLM-based Chinese Word Segmentation methods, namely CWSeg. It can extrapolate to different sequence labeling scenarios. Recent studies showed that small models also have the potential to be comparable to large models (Ba and Caruana, 2014; Zhang et al., 2018). We thus introduce a new cohort training strategy to co-train a cohort of multi-scale model artifacts to meet performance and real-time demands. Specifically, we employ the Wasserstein distance (WD) (Rüschendorf, 1985) to orchestrate the distributions of model cohorts and enable more robust learning. In addition, we propose to construct a tailored domain-specific lexicon Trie (Liu et al., 2002) and build a versatile decoding scheme to augment the optimal segmentation path search on the fly for diverse practical scenarios.
It can flexibly adjust the segmentation granularity and benefit customized domains. In summary, our primary goal is to build a versatile framework for strengthening different models simultaneously and then rapidly deploying them into multiple practical CWS scenarios, which is fundamentally different from existing research works. Essentially, the output models of this framework can be regarded as complements to, not replacements for, existing SOTA methods. Experimental results on multiple benchmark datasets demonstrate the effectiveness of our approach. Ablation studies confirm the necessity of the cohort training strategy and the lexicon-Trie-aided versatile decoding solution. Cross-domain application experiments demonstrate the generalization capacity of our holistic approach.

## 2 Related Work

Early work in Chinese word segmentation builds upon statistical assumptions (Li and Sun, 2009; Sun et al., 2012a) by modeling rules into the learning process. Recently, PLMs have been introduced (Tian et al., 2020b,a; Maimaiti et al., 2021) and have made significant advances in this regard. Our work, however, aims to alleviate the challenges these methods face in industrial applications, as mentioned in Section 1.

Recent works (Huang et al., 2020b, 2021) distill knowledge from a well-trained teacher model into a student model to balance model scale and performance. However, this requires multiple fine-tuning rounds, and the models cannot learn from each other collaboratively. In this work, we introduce a cohort-training-based learning strategy to address these two problems for CWS. Different from the pioneering mutual learning (Zhang et al., 2018) in computer vision, we propose the Wasserstein distance to better enable the learning, as studied in Sec. 4.3. It is also a more carbon-footprint-friendly solution compared to recent research threads.

To mitigate the effects of Chinese lexical diversity, Qiu et al. (2019) proposed a concise unified model to extract criterion-aware representations for multi-criteria corpora, which requires training from scratch on the entire corpus for new criteria or domains. Gong et al. (2017, 2020) proposed multi-grained word segmentation by training with large-scale pseudo labels, which is relatively lagging for rapid deployment to new domains. Our work approaches this issue with a lightweight versatile decoding scheme that sidesteps heavy training loads.

## 3 Methodology

As shown in Fig. 1 (a), we formulate CWS as a classical sequence labeling problem, as with existing compelling schemes. Concretely, given a text sequence of n characters X = {x1, . . . , xn}, CWS is to tag the involved characters sequentially with the BIO encoding by maximizing their joint probability p(y1, . . . , yn | X), where yi ∈ T = {B, I, O}, short for beginning, inside, and outside, respectively.

## 3.1 Cohort Training

The cohort training strategy enables multiple student models to teach and learn from each other. The objective function contains a supervised loss Lc and a mimicry loss Lm. As exemplified by two models in Fig. 2, the overall loss function is:

$$\mathcal{L}=\mathcal{L}_{c_{1}}+\mathcal{L}_{c_{2}}+\lambda\cdot\mathcal{L}_{m}\tag{1}$$

where λ ∈ [0, 1] is a hyper-parameter. Lc1 and Lc2 guide the model learning under the supervision of real segmentation tags, while Lm encourages different models to learn from each other collaboratively.
Specifically, Lc1 and Lc2 refer to the cross entropy (CE) loss. Without loss of generality,

$$\mathcal{L}_{c_{1}}=-\sum_{i=1}^{N}\sum_{t=1}^{|\mathcal{T}|}\mathbb{I}(y_{i},t)\log\big(p_{1}^{t}(x_{i})\big),\qquad p_{1}^{t}(x_{i})=\frac{\exp(z_{1}^{t})}{\sum_{t=1}^{|\mathcal{T}|}\exp(z_{1}^{t})}$$

where $\mathbb{I}(\cdot)$ is an indicator function, $p_{1}^{t}(x_{i})$ is the prediction probability, and $z_{1}^{t}$ is the output logit of the model F1. For Lm, the Kullback-Leibler (KL) divergence is a naive metric to quantify the distance between two distributions:

$$KL(\mathbf{p_{2}}\,\|\,\mathbf{p_{1}})=\sum_{i=1}^{N}\sum_{t=1}^{|\mathcal{T}|}p_{2}^{t}(x_{i})\log\frac{p_{2}^{t}(x_{i})}{p_{1}^{t}(x_{i})}$$

However, the KL divergence is asymmetric and possibly infinite when the two distributions are disjoint or there are points such that p1(xi) = 0 and p2(xi) > 0, which is fragile in training (Arjovsky et al., 2017). The symmetric Jensen-Shannon (JS) divergence suffers from the same problem (see Appendix A.1 for more details). Given the above concerns, we introduce the Wasserstein-1 distance (a.k.a. earth mover's distance):

$$W(\mathbf{p_{2}},\mathbf{p_{1}})=\inf_{\gamma\in\Pi(\mathbf{p_{2}},\mathbf{p_{1}})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\gamma}[||\mathbf{x}-\mathbf{y}||]\tag{2}$$

where Π(p2, p1) is the set of all joint distributions γ(x, y) whose marginals are p2 and p1, respectively. As shown in Appendix A.1, the Wasserstein distance can provide a meaningful and smooth representation of the in-between distance for two distributions lying in lower-dimensional manifolds without overlaps. Eq. (2), however, is highly intractable. We thus resort to the Kantorovich-Rubinstein duality:

$$W(\mathbf{p_{2}},\mathbf{p_{1}})=\sup_{\|f\|\leq1}\mathbb{E}_{\mathbf{x}\sim\mathbf{p_{2}}}[f(\mathbf{x})]-\mathbb{E}_{\mathbf{y}\sim\mathbf{p_{1}}}[f(\mathbf{y})]\tag{3}$$

where the supremum is over all 1-Lipschitz functions f : R^K → R (f is 1-Lipschitz ⇔ |f(x) − f(x′)| ≤ |x − x′| for all x and x′), which map each K-dimensional feature vector in the semantic space to a real number. In practice, f is implemented as a two-layer feed-forward neural network with parameters Θf clipped to [−c, c], where c > 0. Therefore, the mimicry loss Lm can be derived as the dual form of the Wasserstein distance:

$${\mathcal{L}}_{m}=\max_{\Theta_{f}}\sum_{(\mathbf{x},\mathbf{y})}[f(\mathbf{x})-f(\mathbf{y})]\tag{4}$$

**Extension to Larger Cohort** The cohort training strategy can be easily extended to larger cohorts. For example, given K models (K ≥ 2), the overall loss function L can be formulated as:

$$\mathcal{L}=\sum_{i=1}^{K}\mathcal{L}_{c_{i}}+\frac{2\cdot\lambda}{K(K-1)}\sum_{i=1}^{K}\sum_{j=i+1}^{K}W(\mathbf{p}_{j},\mathbf{p}_{i})\tag{5}$$

Obviously, Eq. (1) is a special case of Eq. (5) when K = 2.

## 3.2 Versatile Decoding

However, PLM-based segmentation with single granularity barely meets diverse real-world applications. As illustrated in Fig. 3 (a), the model tends to decode the input text as "中国 (China) / 科学技术 (Science and Technology) / 大学 (University)", whereas only the input as a whole, "中国科学技术大学 (University of Science and Technology of China, USTC)", refers to a meaningful entity. Additionally, for large-scale content recommendation, rapidly acquiring as much relevant content as possible is an essential step towards quality candidates on which more sophisticated methods can function. Thus, reasonably splitting the whole entity "中国科学技术大学 (USTC)" into smaller relevant semantic units "中国 (China) / 科学 (Science) / 技术 (Technology) / 大学 (University)" is crucial in this regard.
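Before turning to the decoding stage, the following is a minimal PyTorch-style sketch, not the authors' released implementation, of how the two-member cohort objective in Eqs. (1)-(4) can be assembled; the tensor shapes, critic width, optimizer, and alternating update schedule are illustrative assumptions.

```python
# A sketch of the cohort objective: supervised CE for both cohort members
# (Eq. 1) plus a Wasserstein-1 mimicry term estimated with a weight-clipped
# two-layer critic f (Eqs. 3-4). Shapes assumed: logits are
# (batch, seq_len, |T|), labels are (batch, seq_len).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Critic(nn.Module):
    """Two-layer feed-forward critic f: R^|T| -> R used in the dual form."""
    def __init__(self, num_tags, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_tags, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, p):
        return self.net(p).squeeze(-1)   # one scalar per tag distribution


def critic_step(critic, critic_opt, p1, p2, clip=0.5):
    """Maximize E[f(p2)] - E[f(p1)] over the critic, then clip its weights."""
    critic_opt.zero_grad()
    loss = -(critic(p2.detach()).mean() - critic(p1.detach()).mean())
    loss.backward()
    critic_opt.step()
    with torch.no_grad():
        for w in critic.parameters():
            w.clamp_(-clip, clip)        # keep f (approximately) 1-Lipschitz


def cohort_loss(logits1, logits2, labels, critic, lam=0.5):
    """Eq. (1): CE for both members plus lambda times the estimated W(p2, p1)."""
    num_tags = logits1.size(-1)
    ce1 = F.cross_entropy(logits1.reshape(-1, num_tags), labels.reshape(-1))
    ce2 = F.cross_entropy(logits2.reshape(-1, num_tags), labels.reshape(-1))
    p1, p2 = logits1.softmax(-1), logits2.softmax(-1)
    mimicry = critic(p2).mean() - critic(p1).mean()   # dual estimate of W(p2, p1)
    return ce1 + ce2 + lam * mimicry
```

In a full training loop, critic_step would typically run once per batch before back-propagating cohort_loss through both segmentation models, mirroring the maximization over Θf in Eq. (4) followed by the minimization of Eq. (1).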
To meet these needs, we focus on adapting generic models trained on annotated corpora to specific domains and on supporting diverse granularity. This includes the construction of a lexicon Trie (Liu et al., 2002) and versatile decoding.

**Lexicon Trie:** The lexicon Trie is designed to store the vocabulary in a compressed Trie structure and to search for each word efficiently. As illustrated in Fig. 3 (b), the solid node denotes the root node, and each circle denotes a Trie node, which contains a value holding a Chinese token and a label indicating whether the path from the root so far forms a complete word. Here, a red circle indicates that the label equals True. Thus, given a collected vocabulary set, we can initialize a lexicon Trie. In the matching stage, given an input text such as "中国科学技术大学", we apply the matching algorithm to search for all complete words in the input text that can be matched on the lexicon Trie. The matched word list is shown in Fig. 3 (b).

**Diverse Modes:** The granularity criterion *criteria* is roughly determined by the *RouteScore*, which is the number of chunks in the segmented path regularized by semantic completeness. In total, we have the following four modes:

- **Normal Mode:** High-probability segmentation that conforms to the statistics of the data.
- **Fine Mode:** *RouteScore* larger than normal, collecting more semantic units.
- **Coarse Mode:** *RouteScore* smaller than normal, perceiving more complete semantics.
- **Index Mode:** A segmentation result that combines the above three modes.

The whole process is formulated in Fig. 3 and Algorithm 1 (refer to Appendix A.1 for more function details). In addition to the prediction from the fine-tuned model F, we create a lexicon Trie D from the pre-processed vocabulary set V to capture all candidate phrases C without training. We merge the predictions P into the candidate set C to construct the CWSGraph G, where each node represents a token. The Viterbi algorithm is adopted for decoding according to the granularity criteria. In this way, we can flexibly tailor model-based segmentation results to multiple domain-specific scenarios while meeting multi-granularity requirements.

## Algorithm 1 Versatile Decoding

Input: Text sequence X, fine-tuned model F, lexicon Trie D, granularity mode m.
Output: Text sequence label Y.

```
1:  P = F(X); C = Matching(X, D) | P
2:  G = CWSGraph(C)
3:  borders = ExtractBorders(P)
4:  if m = "normal" then
5:      Y = P
6:  else if m = "fine" then
7:      cands = CutBorders(G, borders)
8:      Y = Viterbi(G, cands, criteria_m)
9:  else if m = "coarse" then
10:     cands = LinkBorders(G, borders)
11:     Y = Viterbi(G, cands, criteria_m)
12: else if m = "index" then
13:     for m' in ["normal", "fine", "coarse"] do
14:         Y |= VersatileDecoding(X, F, D, m')
15:     end for
16: end if
17: return Y
```

## 4 Experiments

## 4.1 Setup

**Dataset** We experiment with six widely used datasets (AS, CityU, CTB6, MSR, PKU, and Weibo) from the SIGHAN 2005 Bakeoff, the Chinese Treebank, and NLPCC2016 (SIGHAN2005Bakeoff; Emerson, 2005; Xue et al., 2005; Qiu et al., 2016). The basic statistics and train/dev/test settings are detailed in Table 1.
| Corpus | Vocab. Size | Word Len. 50% | Word Len. 75% | Train | Dev. | Test |
|--------|-------------|---------------|---------------|--------|-------|-------|
| AS | 144.5k | 3 | 3 | 698.9k | 10.0k | 14.4k |
| CityU | 70.7k | 2 | 3 | 47.7k | 5.3k | 1.5k |
| CTB6 | 47.5k | 2 | 3 | 23.4k | 2.0k | 2.7k |
| MSR | 90.1k | 3 | 5 | 78.2k | 8.7k | 3.9k |
| PKU | 58.1k | 2 | 3 | 17.1k | 1.9k | 1.9k |
| Weibo | 56.1k | 2 | 3 | 20.1k | 2.0k | 8.5k |

Table 1: The statistics of the datasets. "Word Len." gives the 50th and 75th percentile word length; "Train", "Dev.", and "Test" give the dataset split sizes.

**Baselines** We select baselines from both traditional methods and well-executed or SOTA methods: Jieba (jieba), a fast CWS tool based on HMMs; HanLP (pyhanlp), a CRF-based method; THU (THULAC), a perceptron-based method; and PKU (PKUSeg), a CRF-based CWS tool that uses a new training method, namely the adaptive online gradient descent method based on feature frequency (Sun et al., 2012b). Since the major architecture of recent competing methods is a CRF on top of Transformers (e.g., BERT and its variants), and since, as mentioned earlier, our flexible framework CWSeg is a complement to, not a replacement for, existing compelling methods, we experiment with our method on BERT-CRF (refer to Appendix A.1 for more details), which can be easily applied to other variants. WMSeg (Tian et al., 2020b), another recent SOTA method based on this architecture that utilizes memory networks to incorporate wordhood information, is also used for comparison. Note that the PLMs implemented in BERT-CRF and WMSeg are the BERT base model. Since CWSeg adopts the cohort training strategy, we set the base versions of BERT and NEZHA as cohorts.

**Experiment Settings** The PLMs used in this work are readily available and are widely recognized SOTA backbones in the Chinese community: 'BERT' refers to bert-base-chinese (Devlin et al., 2019; bert-base chinese), 'RoBERTa' to chinese_roberta_wwm (Liu et al., 2019; chinese-roberta wwm), and 'NEZHA' to NEZHA-Base-WWM (Junqiu Wei, 2019; NEZHA-Base-WWM). They are based on Chinese characters (similar to subwords in English). We choose the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 2e-5, tuned amongst {1e-4, 5e-5, 2e-5, 1e-5}. We use the early stopping mechanism (Yao et al., 2007) in model training. The batch size was tuned amongst {32, 64, 128}. The hyper-parameter λ was set to 0.5 and tuned over [0.01, 1], and the clipping threshold c was set to 0.5 and tuned over [0.1, 0.5]. All experiments were run on an Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz and NVIDIA V100-32g GPUs. Note that all time-cost comparison experiments are tested on the same CPU device, while deep methods run faster on CUDA devices.
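The training configuration above can be summarized in a short sketch. The following is a minimal, assumption-laden illustration (PyTorch-style, not the authors' code): the model, data loaders, and evaluation function are placeholders, and the epoch budget and early-stopping patience are not stated in the paper, so the values used here are purely illustrative.

```python
# Illustrative training-loop skeleton for one cohort member with the reported
# optimizer and early stopping; `model`, the loaders, and `evaluate` are
# placeholders, and max_epochs/patience are NOT values from the paper.
import torch


def train_with_early_stopping(model, train_loader, dev_loader, evaluate,
                              lr=2e-5, max_epochs=50, patience=3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # lr tuned in {1e-4, 5e-5, 2e-5, 1e-5}
    best_f1, bad_epochs, best_state = 0.0, 0, None
    for _ in range(max_epochs):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            loss = model(**batch)          # assumed to return the training loss
            loss.backward()
            optimizer.step()
        f1 = evaluate(model, dev_loader)   # dev F1 drives early stopping (Yao et al., 2007)
        if f1 > best_f1:
            best_f1, bad_epochs = f1, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model, best_f1
```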
| Method | Adapt w/o Retraining | Multi-granularity | F1 | Latency (s/k) |
|----------|----------------------|-------------------|-------|------|
| Jieba | ✓ | ✓ | 80.67 | 0.17 |
| HanLP | ✓ | ✓ | 82.34 | 0.33 |
| THU | ✓ | - | 88.09 | 0.57 |
| PKU | - | - | 91.29 | 0.63 |
| BERT-CRF | - | - | 96.59 | 12.7 |
| WMSeg | - | - | 97.06 | 14.5 |
| CWSeg | ✓ | ✓ | 97.65 | 12.9 |

Table 2: Overall model comparison. 's/k' refers to seconds spent per thousand requests on the same CPU device.

| Example | Mode | Output |
|---------|------|--------|
| 新型冠状病毒 (COVID-19) | Normal | 新型/ 冠状/ 病毒 (New Type/ Crown/ Virus) |
| | Fine | 新型/ 冠状/ 病毒 |
| | Coarse | 新型冠状病毒 (COVID-19) |
| | Index | 新型/ 冠状/ 病毒/ 新型冠状病毒/ 冠状病毒 (Coronavirus) |
| 上海中心大厦 (Shanghai Tower) | Normal | 上海/ 中心/ 大厦 (Shanghai/ Center/ Building) |
| | Fine | 上海/ 中心/ 大厦 |
| | Coarse | 上海中心大厦 (Shanghai Tower) |
| | Index | 上海/ 中心/ 大厦/ 上海中心/ 中心大厦/ 上海中心大厦 |
| 欧洲联盟 (European Union) | Normal | 欧洲/ 联盟 (Europe/ Union) |
| | Fine | 欧洲/ 联盟 |
| | Coarse | 欧洲联盟 (European Union) |
| | Index | 欧洲/ 联盟/ 欧洲联盟 |
| 中国科学技术大学 (USTC) | Normal | 中国/ 科学技术/ 大学 (China/ Sci. & Tech./ University) |
| | Fine | 中国/ 科学/ 技术/ 大学 (China/ Sci./ Tech./ University) |
| | Coarse | 中国科学技术大学 (USTC) |
| | Index | 中国/ 科学技术/ 大学/ 科学/ 技术/ 中国科学技术大学 |

Table 3: Multi-grained segmentation outputs of CWSeg under the four decoding modes.

## 4.2 Main Results

**Overall Performance** Table 2 reports the overall performance. For the sake of fairness, we utilize a unified model and average the F1 scores over the six individual test sets (Luo et al., 2019). BERT-CRF stands out compared to traditional methods due to the powerful representation capacity of the pre-trained language model. Following the PLM paradigm, Tian et al. (2020b,a) further fuse wordhood information into the network and achieve better performance than BERT-CRF. For simplicity, we set the BERT-CRF architecture as the cohort in our implementation to verify the gain of our framework. As shown in Table 2, our approach further advances BERT-CRF with cohort training and versatile decoding without reshaping the model architecture, and it also defeats the most recent SOTA method, WMSeg (Tian et al., 2020b).

**Multi-grained Segmentation** We evaluate CWSeg on four different segmentation modes. As shown in Table 3, compared to the model without versatile decoding, CWSeg can better capture whole-word entities. This also illustrates the granularity gap between annotated corpora and application scenarios. With versatile decoding, CWSeg can generate both fine-grained and coarse-grained labels. Multi-granularity results also provide more knowledge and indexing, which is crucial for multiple scenarios such as retrieval, content recommendation, and advertisement.

## 4.3 Ablation Study

We investigate the impact of versatile decoding, cohort training, and different losses on CWSeg.

**Effect of Versatile Decoding** Table 4 details the performance gain of our approach in domain adaptation. It enables models to be readily applied to new domains without training. Take MSR for instance: our approach lifts the model performance by a large margin of 7%.
This is reasonable, as MSR has significantly different distributions compared to the others, as shown in Table 1, and thus requires the domain-adaptive decoding strategy.

| Train | Test | Method | F1 |
|---------------|-------|---------------|----------------|
| All w/o AS | AS | CWSeg | **96.91** |
| | | w/o Versatile | 96.88 (-0.03) |
| All w/o CityU | CityU | CWSeg | **92.48** |
| | | w/o Versatile | 91.41 (-1.07) |
| All w/o CTB6 | CTB6 | CWSeg | **89.21** |
| | | w/o Versatile | 89.17 (-0.04) |
| All w/o MSR | MSR | CWSeg | **92.26** |
| | | w/o Versatile | 85.26 (-7.00) |
| All w/o PKU | PKU | CWSeg | **92.58** |
| | | w/o Versatile | 90.56 (-2.02) |
| All w/o Weibo | Weibo | CWSeg | **87.73** |
| | | w/o Versatile | 86.01 (-1.72) |

Table 4: The effect of versatile decoding in cross-domain adaptation (F1 when training on all corpora except the test domain).

**Effect of Cohort Training** Overall, cohort training outperforms the classical model distillation approach for small models, as evidenced by Net2 (94.84 vs. 94.04 and 95.37 vs. 94.87) in Table 5. It is worthwhile to note that big models also benefit from cohort training compared to independent training (e.g., Net1: 96.9 vs. 96.31 and 97.03 vs. 96.83). In this setting, the CH training policy, which is trained only once and converges faster, is about 3 times faster than MD, which requires 3 stages of training (train Net1, train Net2, Net1 distills Net2).

| PLM Settings (Net1) | PLM Settings (Net2) | SN Net1 | SN Net2 | MD Net2 | CH Net1 | CH Net2 |
|---------------------|---------------------|---------|---------|---------|---------|---------|
| BERT-4 | BERT-1 | 96.31 | 93.85 | 94.04 | 96.9 | 94.84 |
| NEZHA-4 | NEZHA-1 | 96.83 | 94.39 | 94.87 | 97.03 | 95.37 |

Table 5: Comparison of single-network training (SN), model distillation (MD), and cohort training (CH).

**Effect of Cohort Settings** To study the effect of the cohort settings, we conducted a detailed analysis. As shown in Table 6, we can easily find that: (1) the cohort setting stands out in all trials, and the small model improves more significantly; (2) larger models improve small models better; (3) diversity in cohort settings promotes performance.

(c) Cohort training settings with different backbones.

Table 6: The effect of cohort setting experiments.

**Effect of Wasserstein Distance** For cohort training, we further study the impact of the mimicry loss. Specifically, we compare WD with KL and JS, as detailed in Table 7 and Fig. 4. WD is slightly better than both KL and JS, in large part due to the performance ceiling, whereas it can significantly accelerate cohort training by multiple folds. This is appealing, especially for multiple large-scale model learning.

Table 7: The effect of Wasserstein distance loss. 'WD' stands for Wasserstein distance.

## 4.4 Trade-Off Between Performance and Speed

We experiment with cohort training (CH) of BERT-1, BERT-4, BERT-8, and BERT-12. As a comparison, these 4 single networks (SN) are also fine-tuned independently. The latency for CH and SN is the same, and the units of latency are defined in Section 4.2. As shown in Fig. 5, overall, CH produces a batch of different model artifacts simultaneously as designed, which outperform their SN counterparts without any inference latency penalty. For example, the CH-4 setting has almost the same segmentation performance as SN-12. These artifacts can serve different inference scenarios. Specifically, CH-1 can be used for real-time-demanding applications, and CH-12 works well in offline inference scenarios with more tolerance for latency.

## 5 Discussion

Our latency comparisons are benchmarked on the same CPU device, while deep methods run faster on CUDA devices.
Besides, we can resort to a fast-compiling language (e.g., C++) backed platform or a tailored toolchain (e.g., ONNX) to optimize the serving speed.

How do we apply the diverse modes to different scenarios? Generally speaking, the coarse mode is used to perceive complete semantics, and the fine mode is used to perceive more extensive concepts. For example, in search and recommendation scenarios, the normal or coarse mode is employed to process web pages to build inverted indexes. The index mode is often used for query expansion, where we disassemble queries into multiple granularities to maximize the recall of relevant documents.

## 6 Conclusion

In this work, we develop an efficient and general framework, CWSeg, which makes state-of-the-art Chinese word segmentation schemes better prepared for industrial deployment scenarios. We present a Wasserstein distance-based cohort learning method and versatile decoding to facilitate the trade-off between segmentation performance and serving latency as well as fast cross-domain adaptation. Comprehensive experiments are performed to justify the efficiency and generalization of CWSeg. We believe that our work can be extrapolated to other sequence labeling problems straightforwardly.

## Limitations

This study has potential limitations. When the CWSeg model is applied to a new domain, we assume that words and phrases solely related to the domain are available.

## References

Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein generative adversarial networks. In *International conference on machine learning*, pages 214–223. PMLR.

Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? *Advances in neural information processing systems*, 27.

Fuguang Bao, Wenqian Xu, Yao Feng, and Chonghuan Xu. 2022. A topic-rank recommendation model based on microblog topic relevance & user preference analysis. *Hum.-Cent. Comput. Inf. Sci*, 12(10).

bert-base chinese. bert-base-chinese. https://huggingface.co/bert-base-chinese.

chinese-roberta wwm. chinese-roberta-wwm. https://github.com/ymcui/Chinese-BERT-wwm.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for chinese natural language processing. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 657–668.

Yiming Cui, Ziqing Yang, and Xin Yao. 2023. Efficient and effective text encoding for chinese llama and alpaca.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pages 4171–4186.

Thomas Emerson. 2005. The second international chinese word segmentation bakeoff. In *Proceedings of the fourth SIGHAN workshop on Chinese language Processing*.

Chen Gong, Zhenghua Li, Min Zhang, and Xinzhou Jiang. 2017. Multi-grained chinese word segmentation. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing*, pages 692–703.

Chen Gong, Zhenghua Li, Bowei Zou, and Min Zhang. 2020. Multi-grained chinese word segmentation with weakly labeled data. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 2026–2036.

Xuehua Han, Juanle Wang, Min Zhang, and Xiaojie Wang. 2020.
Using social media to mine and analyze public opinion related to covid-19 in china. International Journal of Environmental Research and Public Health, 17(8):2788. Kaiyu Huang, Degen Huang, Zhuang Liu, and Fengran Mo. 2020a. A joint multiple criteria model in transfer learning for cross-domain chinese word segmentation. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing (EMNLP), pages 3873–3882. Kaiyu Huang, Junpeng Liu, Degen Huang, Deyi Xiong, Zhuang Liu, and Jinsong Su. 2021. Enhancing chinese word segmentation via pseudo labels for practicability. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021*, pages 4369–4381. Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, and Wei Chu. 2020b. Towards fast and accurate neural chinese word segmentation with multicriteria learning. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 2062–2072. jieba. jieba. https://github.com/fxsjy/jieba. Xiaoguang Li Wenyong Huang Yi Liao Yasheng Wang Jiashu Lin Xin Jiang Xiao Chen Qun Liu Junqiu Wei, Xiaozhe Ren. 2019. Nezha: Neural contextualized representation for chinese language understanding. arXiv preprint arXiv:1909.00204. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for chinese word segmentation. Computational Linguistics, 35(4):505–512. Cheng-Lin Liu, Masashi Koga, and Hiromichi Fujisawa. 2002. Lexicon-driven segmentation and recognition of handwritten character strings for japanese address reading. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(11):1425–1437. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Jinghui Lu, Rui Zhao, Brian Mac Namee, and Fei Tan. 2023. Punifiedner: A prompting-based unified ner system for diverse datasets. Ruixuan Luo, Jingjing Xu, Yi Zhang, Xuancheng Ren, and Xu Sun. 2019. Pkuseg: A toolkit for multidomain chinese word segmentation. arXiv preprint arXiv:1906.11455. Mieradilijiang Maimaiti, Yang Liu, Yuanhang Zheng, Gang Chen, Kaiyu Huang, Ji Zhang, Huanbo Luan, and Maosong Sun. 2021. Segment, mask, and predict: Augmenting chinese word segmentation with self-supervision. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2068–2077. NEZHA-Base-WWM. Nezha-base-wwm. https://github.com/huawei-noah/ Pretrained-Language-Model. PKUSeg. Pkuseg. https://github.com/lancopku/ PKUSeg-python. pyhanlp. pyhanlp. https://github.com/hankcs/ pyhanlp. Xipeng Qiu, Hengzhi Pei, Hang Yan, and Xuanjing Huang. 2019. A concise model for multi-criteria chinese word segmentation with transformer encoder. arXiv preprint arXiv:1906.12035. Xipeng Qiu, Peng Qian, and Zhan Shi. 2016. Overview of the NLPCC-ICCPOL 2016 shared task: Chinese word segmentation for micro-blog texts. In Proceedings of The Fifth Conference on Natural Language Processing and Chinese Computing & The Twenty Fourth International Conference on Computer Processing of Oriental Languages. Ludger Rüschendorf. 1985. The wasserstein distance and approximation theorems. 
Probability Theory and Related Fields, 70(1):117–129. SIGHAN2005Bakeoff. Sighan2005bakeoff. http:// sighan.cs.uchicago.edu/bakeoff2005/. Xu Sun, Houfeng Wang, and Wenjie Li. 2012a. Fast online training with frequency-adaptive learning rates for Chinese word segmentation and new word detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 253–262, Jeju Island, Korea. Association for Computational Linguistics. Xu Sun, Houfeng Wang, and Wenjie Li. 2012b. Fast online training with frequency-adaptive learning rates for chinese word segmentation and new word detection. In *Proceedings of the 50th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 253–262. Fei Tan, Yifan Hu, Changwei Hu, Keqian Li, and Kevin Yen. 2020. TNT: Text normalization based pretraining of transformers for content moderation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4735–4741, Online. Association for Computational Linguistics. THULAC. Thulac. https://github.com/thunlp/ THULAC-Python. Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xiaojun Quan, Tong Zhang, and Yonggang Wang. 2020a. Joint chinese word segmentation and part-of-speech tagging via two-way attentions of auto-analyzed knowledge. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8286–8296. Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang, and Yonggang Wang. 2020b. Improving chinese word segmentation with wordhood memory networks. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 8274–8285. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. *Natural language engineering*, 11(2):207–238. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Improving neural machine translation with conditional sequence generative adversarial nets. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1346–1355. Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. 2007. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315. Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked bert. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 882–890. Ying Zhang, Tao Xiang, Timothy M Hospedales, and Huchuan Lu. 2018. Deep mutual learning. In *Proceedings of the IEEE conference on computer vision* and pattern recognition, pages 4320–4328. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441– 1451. ## A Appendix A.1 Model Details Cohort Model We set the SOTA CWS model architecture BERT-CRF as cohort model implementations to exploit the PLM strength and transition patterns of the labeling system. For each character xiis mapped to xi ∈ R de, where de is the embedding size. 
The PLM encoder extract the contextual features hi ∈ R dh automatically for each character xi by $$[\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{|{\mathcal{X}}|}]=E n c o d e r(\mathbf{X}),\qquad(6)$$ 9 where X ∈ R de*×|X|* is the embedding matrix of X , dh is the size of hidden features. There are several prevalent choices for *Encoder* model, such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019). There are rules in the labeling systems, such as the I can only be after the B label. We thus utilize the conditional random fields (CRF) (Lafferty et al., 2001) to model the transition patterns, which can be formulated as: p(yi|xi) = exp(WcW⊤ o hi + bc) Pyi−1yi exp(WcW⊤ o hi + bc) $$\mathbf{\Sigma}$$ , (7) where Wo ∈ R ![8_image_0.png](8_image_0.png) dh×|T |, Wc ∈ R*|T |×|T |*, and bc ∈ R*|T |* are training parameters to model the transition from yi−1 to yi. Figure 6: Suppose two probability distributions P and Q. ∀(x, y) ∈ *P, x* = 0, y ∼ U(0, 1); ∀(x, y) ∈ *Q, x* = θ, 0 ≤ θ ≤ 1, y ∼ U(0, 1). Wasserstein Distance As shown in Fig. 6, there is no overlap between P and Q when θ ̸= 0, and: $$KL(P||Q)=\sum_{x=0,y\sim U(0,1)}1\cdot log\frac{1}{0}=+\infty,$$ $$KL(Q||P)=\sum_{x=\theta,y\sim U(0,1)}1\cdot log\frac{1}{0}=+\infty,$$ $$JS(P,Q)=$$ $$\frac{1}{2}(\sum_{x=0,y\sim U(0,1)}1\cdot log\frac{1}{\frac{1}{2}}+\sum_{x=0,y\sim U(0,1)}1\cdot log\frac{1}{\frac{1}{2}})$$ $$=log2,$$ $$W(P,Q)=|\theta|,\tag{8}$$ $$\quad(8)$$ when θ = 0: KL(P||Q) = KL(Q||P) = JS(*P, Q*) = 0, W(*P, Q*) = 0 = |θ|, (9) where KL(·) gives infinity when two distributions are disjoint, and JS(·) is always a constant. And they are both equal to 0 when θ = 0, so they both have a sudden jump at θ = 0. While the Wasserstein distance provides a smooth measure, which contributes to stable gradient descents. Versatile Decoding Pseudocode ExtractBorders aims to obtain the border indices of the prediction, such as the borders of "中国 / 科学技术 / 大学" is [0, 2, 6, 8]. CutBorders is designed to filter out the candidates in C that cross the borders, such as "中 国科学技术大学" will be filtered out, and "科学" "技术" will be preserved. LinkBorders is designed to obtain all candidates in C that match one-skip or multi-skip borders, such as "中国科学技术大学" will be preserved for it skip two borders [2, 6]. \# extract borders of the segmented token_list def extract_borders(token_list): borders = set() for token in token_list: borders.add(token.start_offset) borders.add(token.end_offset+1) return borders \# find candidates that no-cross borders def cut_borders(token_list, borders): cut_borders = [] cross_border = False for token in token_list: cross_border = False for idx in range(token.start_offset+1, token.end_offset+1): if idx in borders: cross_border = True break if not cross_border: cut_borders.append(token) return cut_borders \# find all candidates in token_list that match one-skip or multi-skip borders def link_borders(token_list, borders): link_borders = [] for token in token_list: if token.start_offset in borders and (token.end_offset+1) in borders: link_borders.append(token) return link_borders
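The pseudocode above leaves the token structure implicit. The following is a minimal usage sketch for the running example "中国 / 科学技术 / 大学", assuming a simple `Token` tuple with `start_offset`/`end_offset` fields and assuming the three helpers defined above are in scope; the candidate list is illustrative, not the toolkit's actual output.

```python
from collections import namedtuple

# Illustrative token type; the released toolkit may use a different structure.
Token = namedtuple("Token", ["text", "start_offset", "end_offset"])

# Coarse segmentation of "中国科学技术大学": 中国 / 科学技术 / 大学
coarse = [Token("中国", 0, 1), Token("科学技术", 2, 5), Token("大学", 6, 7)]

# Candidate spans from other granularities (fine / index modes).
candidates = [
    Token("中国科学技术大学", 0, 7),  # spans the whole string
    Token("科学", 2, 3),
    Token("技术", 4, 5),
]

# The three functions are the ones defined in the pseudocode above.
borders = extract_borders(coarse)           # {0, 2, 6, 8}
fine = cut_borders(candidates, borders)     # keeps 科学, 技术 (no border crossed)
linked = link_borders(candidates, borders)  # keeps 中国科学技术大学 (skips borders 2 and 6)

print(sorted(borders))           # [0, 2, 6, 8]
print([t.text for t in fine])    # ['科学', '技术']
print([t.text for t in linked])  # ['中国科学技术大学']
```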
kale-etal-2023-knowledge
"Knowledge is Power": Constructing Knowledge Graph of Abdominal Organs and Using Them for Automatic Radiology Report Generation
https://aclanthology.org/2023.acl-industry.2
In conventional radiology practice, the radiologist dictates the diagnosis to the transcriptionist, who then prepares a preliminary formatted report referring to the notes, after which the radiologist reviews the report, corrects the errors, and signs off. This workflow is prone to delay and error. In this paper, we report our work on automatic radiology report generation from radiologists' dictation, which is in collaboration with a startup about to become Unicorn. A major contribution of our work is the set of knowledge graphs (KGs) of ten abdominal organs- Liver, Kidney, Gallbladder, Uterus, Urinary bladder, Ovary, Pancreas, Prostate, Biliary Tree, and Bowel. Our method for constructing these KGs relies on extracting entity1-relation-entity2 triplets from a large collection (about 10,000) of free-text radiology reports. The quality and coverage of the KGs are verified by two experienced radiologists (practicing for the last 30 years and 8 years, respectively). The dictation of the radiologist is automatically converted to what is called a pathological description which is the clinical description of the findings of the radiologist during ultrasonography (USG). Our knowledge-enhanced deep learning model improves the reported BLEU-3, ROUGE-L, METEOR, and CIDEr scores of the pathological description generation by 2%, 4%, 2% and 2% respectively. To the best of our knowledge, this is the first attempt at representing the abdominal organs in the form of knowledge graphs and utilising these graphs for the automatic generation of USG reports. A Minimum Viable Product (MVP) has been made available to the beta users, i.e., radiologists of reputed hospitals, for testing and evaluation. Our solution guarantees report generation within 30 seconds of running a scan.
# "Knowledge Is Power": Constructing Knowledge Graph Of Abdominal Organs And Using Them For Automatic Radiology Report Generation Kaveri Kale1, Pushpak Bhattacharyya1, Aditya Shetty2, **Milind Gune**3, Kush Shrivastava3, **Rustom Lawyer**3and **Spriha Biswas**3 1Department of Computer Science and Engineering, IIT Bombay, India 2Breach Candy Hospital, India 3Augnito India Private Limited {kaverikale,pb}@cse.iitb.ac.in, adityashetty01@gmail.com, dgune@rediffmail.com, {kush.shrivastava, rustom, spriha}@augnito.ai ## Abstract In conventional radiology practice, the radiologist dictates the diagnosis to the transcriptionist, who then prepares a preliminary formatted report referring to the notes, after which the radiologist reviews the report, corrects the errors, and signs off. This workflow is prone to delay and error. In this paper, we report our work on automatic radiology report generation from radiologists' dictation, which is in collaboration with a startup about to become Unicorn. A major contribution of our work is the set of knowledge graphs (KGs) of ten abdominal organs- Liver, Kidney, Gallbladder, *Uterus*, Urinary bladder, Ovary, Pancreas, *Prostate*, Biliary Tree, and *Bowel*. Our method for constructing these KGs relies on extracting entity1relation-entity2 triplets from a large collection (about 10,000) of free-text radiology reports. The quality and coverage of the KGs are verified by two experienced radiologists (practicing for the last 30 years and 8 years, respectively). The dictation of the radiologist is automatically converted to what is called a *pathological description* which is the clinical description of the findings of the radiologist during ultrasonography (USG). Our knowledge-enhanced deep learning model improves the reported BLEU-3, ROUGE-L, METEOR, and CIDEr scores of the pathological description generation by 2%, 4%, 2% and 2% respectively. To the best of our knowledge, this is the first attempt at representing the abdominal organs in the form of knowledge graphs and utilising these graphs for the automatic generation of USG reports. A Minimum Viable Product (MVP) has been made available to the beta users, *i.e.,* radiologists of reputed hospitals, for testing and evaluation. Our solution guarantees report generation within 30 seconds of running a scan. ## 1 Introduction Radiology is an integral part of medical care. Radiological imaging-based evidence (X-ray, MRI, CT, USG, *etc.*) is crucial for determining the nature of the treatment. The usual radiology process is as follows: A patient gets scanned. Then the radiologist prepares the diagnosis notes (referred to as **radiologist's dictation**) and handing them over to a transcriptionist. The transcriptionist opens a scan-specific standardised template (referred to as normal report template) and edits it refering to the notes in a more descriptive form (referred to as pathological description). Radiologists are in huge demand since the ratio of radiologists to patients is very low. These ratios in India, the US, and China are 1:100,000, 1:10000, and 1:14772, respectively (Arora, 2014). These low ratios results in a very high patient inflow per radiologist volume, making radiologists incredibly busy and stressed out. The currently adopted transcriptionist-based workflow causes (i) significant delays in report turnaround time, (ii) errors in the reports, and (iii) burnout. To automate the report generation process, domain knowledge is necessary. 
Domain knowledge can be acquired from already existing radiology free-text reports. We need a structured format for knowledge to be able to use it on a computer. Our research aims to use Natural Language Processing (NLP) (a) to construct abdominal-organ KG and (b) use these KGs for automatically generating radiology reports. Our work is in collaboration with a industry partner. On this project, two experienced radiologists are contributing their domain expertise to our work. Our contributions are: 1. Knowledge graphs of ten abdominal organs-Liver, Kidney, Gallbladder, Uterus, Urinary bladder, Ovary, Pancreas, Prostate, Biliary Tree, and *Bowel*. We will release these constructed KGs and the code for KG construction from free-text reports, for wide use. 2. A radiology dictionary containing 43,304 en11 tries that are adapted from the Radlex-lexicon1 and enriched with terminology from all forms of scans, *viz.*, USG, CT, MRI, and X-ray. 3. A generic methodology2to construct KGs from reports of all kinds of scans, *viz.*, CT, MRI, and X-ray. 4. A fine-tuned KG-BART that is fine-tuned on a parallel corpus of dictations and corresponding pathological descriptions. 5. A radiology report generation pipeline that identifies in a "normal" report the candidate text span for replacement and the patientspecific text that will replace the span. ## 2 Fundamental Definitions Paulheim (2017) defines Knowledge Graph (KG) as "A knowledge graph (i) mainly describes realworld entities and their interrelations, organized in a graph, (ii) defines possible classes and relations of entities in a schema, (iii) allows for potentially interrelating arbitrary entities with each other and (iv) covers various topical domains." KGs are designed with suitable ontology to store domain knowledge. Ontologies are semantic data models that define the types of things in a specific domain and the properties used to describe those types. Ontologies does not include the details about specific **individuals** in domain. Three main components of ontology are Classes, **Relationships**, and **Attributes**. Domain ontology and individual information together form a Knowledge Base (KB). We have defined eight logical relations as follows: 1. **PartOf:** It represents the relation between anatomy and sub-anatomy. For example, right lobe is part of liver. 2. **TypeOf:** It represents the relation between similar type of entities. For example, *cystic lesion* is TypeOf *lesion*. 3. **ModifierOf:** It denotes the descriptors of findings, anatomical locations, property, *etc.* For example, *small* is descriptor of *size*. 4. **ObservationOf:** It denotes the clinical observations observed for particular finding. For example, *acute pancreatitis* denotes the presence of *fluid collection*. 5. **DefaultObservationOf:** It denotes the observation that associated by default with particular 1http://radlex.org/ RadLex is licenced freely for commercial and non-commercial use. 2Our code to construct radiology KGs is located at https: //github.com/kaverikale/RadiologyKGConstruction. anatomical location or particular finding. For example, *peripancreatic fluid* is observation associated with *acute pancreatitis* by default. 6. **PropertyOf:** It denotes the relation between entities (anatomical entities, finding entities, observation entities, *etc.*) and their properties. For example, *echotexture* is property of the liver, size is the property of lesion, shape is the property of kidney *etc.* 7. 
**DefaultPropertyOf:** It denotes the property that exist by default with particular anatomical location or particular finding. For example, shrunken size is the property associated with chronic pancreatitis. 8. **FoundIn:** It denotes the relation between findings and corresponding anatomical location. For example, *lesion* found in *segment ii*. ## 3 Related Work Research is done for automatic radiology report generation based on scanned images. Yuan et al. (Yuan et al., 2019) propose an automated structured-radiology report generation system using extracted features from images. Loveymi et al. (Loveymi et al., 2021) proposed a system that generates descriptions for natural images by image captioning. There is a wealth of research done on building medical KG from Electronic Medical Records (EMR). Finlayson et al. (Finlayson et al., 2014) builds a graph from medical text, clinical notes etc. Graph nodes represents diseases, drugs, procedures, and devices. Rotmensch et al. (Rotmensch et al., 2017) uses the EMR to construct the graph of diseases and symptoms. Researchers worked on creating medical KG from EMR, but no one has built a KG for the radiology domain except Zhang et al. (Zhang et al., 2020). Graph embedding module is proposed by Zhang et al. (Zhang et al., 2020) that helps to generate radiology reports from image reports. Each node in their KG represents disease. Taira et al. (Taira et al., 2001) developed an NLP pipeline to structure critical medical information. Extracted information includes the existence, location, properties, and diagnostic interpretation of findings from radiology free-text documents. Information is not integrated since they store the structured information for each report separately. Also, this system does not accept reports with different reporting styles. However, this is not always the case. Every radiologist has his or her own dictation style and reporting style. IE systems that are based on IE patterns are surveyed by Muslea et al. (Muslea et al., 1999). Ghoulam et al. (Ghoulam et al., 2015) extract signs of lung cancer, their anatomical location, and the relation between the signs and the locations expressed in the radiology reports. Embarek & Ferret (Embarek and Ferret, 2008) used a morphosyntactic patterns in their rule-based method to find medical entities like symptoms, disease, exams, medicaments, and treatment. Xu et al. (Xu et al., 2009) explains that a pattern is a sub dependency tree that indicates a relation instance. Pons et al. (Pons et al., 2016) give an overview of NLP techniques that can be used in radiology. ## 4 Methodology As shown in figure 1, we first construct the KGs for each abdominal organ from the ultrasound report corpus. Then we use these constructed KGs to generate ultrasound radiology reports from the radiologist's dictation. KGs are constructed from anonymized radiology reports provided by our company collaborator. The anonymized report collection consists of approximately 10,000 reports of ultrasound scans. ![2_image_1.png](2_image_1.png) ![2_image_0.png](2_image_0.png) ## 4.1 Ontology Creation We refer to the RadLex lexicon to create our ontology, which we call "Radiology Ontology" (Langlotz, 2006). Though called a "lexicon," the RadLex is actually an ontology since it has a hierarchical structure. For example, "Solid Organ ← Lobular Organ ← Liver" is a part of the RadLex term and concept hierarchy. We will use "RadlLex Lexicon" to mean RadLex Ontology. 
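To make the flavour of such an ontology concrete, the following is a minimal illustrative sketch of how the hierarchy above and the logical relations from Section 2 could be written down as RDF triples with rdflib; the namespace and the particular entities are assumptions for exposition only, not the released Radiology Ontology.

```python
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/radiology#")  # illustrative namespace
g = Graph()
g.bind("ex", EX)

# A slice of the RadLex-style class hierarchy mentioned above.
g.add((EX.LobularOrgan, RDFS.subClassOf, EX.SolidOrgan)) #

# Coarse-grained modelling: organs become instances of an Anatomy class
# rather than fine-grained classes of their own.
g.add((EX.liver, RDF.type, EX.Anatomy))
g.add((EX.right_lobe, RDF.type, EX.Anatomy))
g.add((EX.lesion, RDF.type, EX.Finding))

# Two of the eight logical relations, expressed as properties.
g.add((EX.right_lobe, EX.PartOf, EX.liver))
g.add((EX.lesion, EX.FoundIn, EX.right_lobe))

print(g.serialize(format="turtle"))
```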
The RadLex lexicon includes a total of 46,761 classes and 24,075 individuals. However, there are limitations in the structure. Classes are defined at a very fine-grain level. In our work, we do not need such fine granularity. For example, *liver* is defined as one of the classes in RadLex. Instead of treating it as a separate class, we can define it as an instance of the anatomy class. We have created our own ontology by integrating several RadLex Lexicon class entities. We maintain a coarse level of granularity to ensure a generic ontology. We have defined 8 logical relations as follows: PartOf, TypeOf, ModifierOf, ObservationOf, DefaultObservationOf, PropertyOf, DefaultPropertyOf, and *FoundIn*. Definitions and examples of all these logical relations are given in **appendix** 2. Figure 2 shows the class hierarchy of radiology ontology. ## 4.2 Radiology Dictionary Creator: Radiology reports contain a large number of medical terms like abbreviations, synonyms, and proper names. The RadLex lexicon is used to create the radiology dictionary. However, we cannot use radiology terms from the RadLex lexicon as it is because the RadLex lexicon contains long phrases, e.g., fat homogeneous background echotexture which are not at the right level of granularity for a knowledge graph. Also, there are some radiological terms that frequently appear in radiology reports but are not present in the RadLex lexicon (e.g., *reflectivity* and *echopattern*). Entities that are missing in the Radlex lexicon have been added from the corpus. Table 1 shows the examples of entities and categories in our dictionary. | Entity | Category | | | |-----------------------|--------------------|--------|----------| | lesion | observation | | | | cirrhosis | pathologic-finding | | | | hepatitis | inflammation | | | | size | property | Entity | Category | | small | size-modifier | | | | left lobe | anatomy | | | | ankle fracture | injury | | | | chronic liver disease | disease | | | Table 1: Examples of entities and corresponding categories in our radiology dictionary. ## 4.3 Triplets Extraction The Triplets-Extraction module extracts entities and relations. For example, for the corpus sentence Right kidney is normal in size, shape, location and cortical echogenicity, *Cortical* and *echogenicity* are linked by the relation *ModifierOf*. Also *normal* is a modifier of *size, shape, location*, and *echogenicity*. However, the state-of-the-art Open Information Extraction tools like OpenIE3are not capable of extracting these relations from free-text (Etzioni et al., 2008). Examples of triplets extracted using OpenIE are given in the table 9 in **appendix** A.3. Our triplet extraction method combines the dictionary-match, rules and patterns to extract entities and relations. This methodology necessitates the use of (i) Word and Phrase Level Processor, (ii) Dependency Parser, and (iii) Semantic Analyser. ## 4.3.1 Word And Phrase Level Processor The input to this stage is a sentence from the corpus, and the output is the sentence-wise syntactic and semantic features of each word and phrase in the sentence. Features include POS tags, lemmas, supersenses, and the root of a noun chunk. Lexical Semantic Supersense Tagger: To extract the relation between two entities connected by a preposition, the machine should understand the meaning of that preposition. As shown in the figure 3, the intuition of the in preposition in the first sentence is **characteristic** and in the second sentence is **locus**. 
Supersenses (Schneider et al., 2015) help get disambiguated senses of these prepositions. We have integrated the pre-written code of a Lexical Semantic Supersense Tagger (LSR)4(Liu et al., 2020) to assign supersense tags to prepositions. Figure 3 shows the sentences that contain the prepositions and their corresponding tags. Figure 3: Lexical Semantic supersense Tagger tags all words with the supersense tags. Supersenses of prepositions are highlighted in red. We see in tagged with different supersenses. We have mapped preposition supersenses to their corresponding logical relations. Table 2 shows the examples of supersense classes and their corresponding logical relations. | Supersense | Relation | | | |--------------|------------|------------|----------| | Locus | FoundIn | | | | Gestalt | PartOf | | | | PartPortion | PartOf | Supersense | Relation | | Whole | PartOf | | | | Manner | PropertyOf | | | | Purpose | PropertyOf | | | Table 2: Examples of supersenses and their corresponding logical relations. Noun phrases are chunked to get the candidate entity phrases. Table 3 shows the combined output of the supersense tagger and the noun phrase chunker. ## 4.3.2 Dependency Parser The dependency parser links the entity phrases directly or indirectly. We write rules based on dependency and POS-tags to extract the relations between entities. Spacy5 APIs are used for dependency parsing. Dependencies are established between phrases instead of words. An example of a dependency tree is given in figure 4. ## 4.3.3 Entities And Relations Extractor Dictionary-Matching-Based Entity Extractor: The noun chunker gives us noun phrases that are candidate entity phrases. However, not all noun phrases are radiological entities. Hence, to extract proper entities from noun phrases, we search the dictionary for matching entities. If a word or phrase matches multiple dictionary entries through more than one text span, we consider the longest text 5https://spacy.io/api/dependencyparser | Noun Phrases/Words | Token List | Root Token | POS Tags | Lemmas | Supersences | | | |----------------------|---------------|--------------|---------------|---------------|------------------|----|----| | Non-enhancing hypodense lesion | [Non, -, enhancing, hypodense, lesion] | lesion | [ADJ, | ADJ, | VERB, | [non, , enhance, hypodense, lesion] | [B-ADV, I_, I ADJ, OADJ, COGNITION] | | ADJ, NOUN] | | | | | | | | | noted | [noted] | noted | [VERB] | [note] | [cognition] | | | | in | [in] | in | [ADP] | [in] | [Locus] | | | | right lobe | [right, lobe] | lobe | [ADJ, NOUN] | [right, lobe] | [OADJ, LOCATION] | | | | of | [of] | of | [ADP] | [of] | [Whole] | | | | liver. | [liver, .] | liver | [NOUN, PUNCT] | [liver] | [BODY, OPUNCT ] | | | Table 3: Output of the word and phrase level processing for the input *Non-enhancing hypodense lesion noted in* ![4_image_0.png](4_image_0.png) right lobe of liver. span as the matched entry for entity extraction. For example, in the phrase *right lobe,* although the individual terms *right* and *lobe* exist in our dictionaries, only the longest match, *right lobe,* is used for entity extraction. Pattern-based Relation Extractor: A single noun phrase contains multiple entities. Table 4 shows the patterns to extract these entities. For example, consider the noun phrase non-enhancing hypodense lesion. As *non-enhancing* present in the dictionary, it applies the pattern *Modifier Observation* and extracts the triplet (non-enhancing, ModifierOf, lesion). 
*Hypodense* does not exist in our dictionary; hence, it applies the *ADJ NOUN* pattern and extracts a triplet (hypodense, *ModifierOf*, lesion). Relation Extraction Using Preposition Supersenses: If two entities are connected with the preposition, then we consider its supersense to find the relation. For example, *lesion in right lobe*, here in represents the **locus** supersense and as shown in the table 2, the **locus** is mapped to the *FoundIn* relation. We add a new triplet ( lesion, *FoundIn*, right lobe). ## Relation Extraction Between Different Noun Phrases: We have discussed how to extract entities and relations between the entities present in the single noun phrase. However, a relation exists between the entities present in the two different noun phrases. In the example shown in the figure 4, there exists a relation between the *lesion* and *right* lobe. We have written rules over the dependency tree to get the candidate pair of noun phrases. A list of patterns used to extract relations between two entities is listed in the table 5. ## 4.3.4 Evaluation: Triplets Extraction Module For each extracted triplet in a sentence, domain experts manually check whether it is correct or not. We calculate precision and recall for each sentence, then calculate the average precision, recall, and F1- Score. Figure 6 shows the evaluation results of our IE and OpenIE systems. ## 4.4 From Extracted Triplets To The Kg Domain experts create preliminary KGs for each organ containing higher-level entities (i.e., basic hierarchical anatomy, merely 2-3 levels.), keeping the organ name as the root node, e.g., *liver* for the Liver KG. Figure 8 shows the liver preliminary KG. We enhance preliminary KGs by adding extracted triplets to them. The file contains sentencewise triplets. KG augmentation algorithm steps are below: i) Create a hierarchical graph representation for each sentence's triplets. ii) Find matched pathways in our preliminary KG for each sentence graph path. iii) Add nodes and arcs that are missing in static KG. Figure 5 shows the augmented KG of the liver. We build ontology using the Protégé6(Musen, 2015) the well-known terminology and ontology building and maintenance tool. Using transformation rules7 we load augmented KG triplets as indi6https://protege.stanford.edu/ 7https://github.com/protegeproject/ cellfie-plugin | Pattern | Triplet Format | Example | Triplets | | | |---------------------------|------------------------------------------|----------------------|------------------------------------|-------------|----| | ADJ* NOUN/root | (ADJ, ModifierOf, NOUN/root) | simple clear cyst | (simple, ModifierOf, cyst) (clear, | | | | Anatomy Anatomy/root | (Anatomy/root, PartOf, Anatomy) | liver right lobe | (right lobe, PartOf, liver) | | | | Anatomy Finding/root | (Finding/root, FoundIn, Anatomy) | kidney calculus | (calculus, FoundIn, liver) | | | | Anatomy Observation/root | (Observation/root, FoundIn, Anatomy) | urinary bladder cyst | (cyst, FoundIn, urinary bladder) | | | | Modifier Observation/root | (Modifier, ModifierOf, Observation/root) | non-enhancing lesion | (non-enhancing, | ModifierOf, | le | | sion) | | | | | | Table 4: The list of some patterns and examples of triplets extracted from noun phrases when patterns are applied to extract relations. /root represents the root entity of a noun phrase. Table 5: The list of some patterns and examples of triplets extracted when patterns are applied to extract relations ![5_image_0.png](5_image_0.png) between two entities. 
| Pattern | (entity1-category, | Triplet Format | Example (entity1, entity2) | Triplets | |--------------------------------------|--------------------------------|-------------------------|-------------------------------------|------------| | entity2-category) (Anatomy, Anatomy) | (entity1, PartOf, entity2) | (right lobe, liver) | (right lobe, PartOf, liver) | | | (Property, Anatomy) | (entity1, PropertyOf, entity2) | (echotexture, pancreas) | (echotexture, PropertyOf, pancreas) | | | (Finding, Anatomy) | (entity1, FoundIn, entity2) | (medical renal disease, kidney) | (medical renal disease, FoundIn, | | | (Observation, Anatomy) | (entity1, ObservedIn, entity2) | (pseudo cyst, body) | (pseudo cyst, ObservedIn, body) | | Precision Recall F1-Score Our System **0.93 0.92 0.92** OpenIE 0.57 0.60 0.58 Table 6: The precision, recall, and F-Score for triplets extracted by our system and OpenIE tool. viduals/instances in the Protégé tool. The method is explained in detail in **appendix** A.2. ## 4.5 Radiology Report Generation In 3 Stages Generate Pathological Description: We finetune the KG-BART (Liu et al., 2021) model to generate the pathological description from the dictation. KG-BART uses the constructed KGs of the abdominal organs to get the domain knowledge for generating the pathological description. The fine-tuning dataset is the "parallel corpora," with impressions on one side and *pathological descriptions on the other*. This parallel corpus is created from the same dataset that was used to construct abdominals KGs. The parallel corpus is verified and corrected by two radiologists. Samples from the parallel corpus are given in the table 10 in **appendix** A.4. Span Identification: The span identification module identifies the span from the normal report template that would be replaced with a generated pathological description. Normal report templates are fairly standardised and templated and use "fixed" kinds of sentences. We give labels to these sentences. For example, we label normal sentence Liver is normal in size and echotexture as *liver1*. Figure 6 shows the normal report template with a label for each sentence mentioned in brackets. We create a dataset of pathological description sentences, each of which is annotated with corresponding normal sentence labels (i.e., liver1, liver2, *etc.*). The span identification problem can now be for- Figure 6: Ultrasound normal report template with a ![6_image_0.png](6_image_0.png) unique label (highlighted in red) for each sentence. mulated as a multilabel text classification problem. Given the pathological description, we need to decide which amongst the normal report sentences the description targets for replacement. A BERT-based multilabel text classifier takes the pathological description as input and gives multiple labels for the pathological description. Further implementation details are given in **appendix** A.5. Figure 7 shows the examples of pathological descriptions and their corresponding normal sentences to replace. ![6_image_1.png](6_image_1.png) Replacement: The candidate sentences returned by the BERT classifier are replaced by the pathological description in the normal report. If there are multiple candidates, replace the first sentence only and remove the other candidates. As shown in the second example of the figure 7, for a single sentence pathological description, there are two normal candidate sentences to replace. In that case, we replace the first normal sentence and remove the second normal sentence. 
## 5 Experiments (Details about the training setup and implementation are in **appendix** A.4). Baseline and Evaluation: We compare our finetuned KG-BART model with T5-base/large (Raffel et al., 2020) and BART-base/large (Lewis et al., 2019) state-of-the-art pre-trained text generation models. Gold standard pathological descriptions extracted from reports and verified by radiologists are used. Table 7 shows the BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and METEOR (Banerjee and Lavie, 2005) scores of generated pathological descriptions by KG-BART, T5-base/large and BART-base/large models. | Method | Automatic Evaluation Metrics | | | | | |------------|--------------------------------|---------|--------|-------|-------| | Bleu-1 | Bleu-3 | Rouge-L | Meteor | CIDEr | | | T5-large | 0.873 | 0.780 | 0.897 | 0.902 | 0.892 | | BART-large | 0.887 | 0.798 | 0.910 | 0.916 | 0.908 | | KG-BART | 0.901 | 0.830 | 0.930 | 0.927 | 0.928 | ## 6 **Summary, Conclusion And Future Work** We have given a systematic method to construct organ-wise KGs from free-text radiology reports. The KGs are stored in standard RDF format, enabling their application to various medical applications. One such example is the generation of radiology reports, which we have described here. Our KG-enhanced deep learning model improves the reported BLEU-3, ROUGE-L, METEOR, and CIDEr scores of the pathological description generation by 2%, 4%, 2% and 2% respectively. Our approach is generalized for other organs and scanned procedures, as evidenced in the EACL paper8. To the best of our knowledge, this is the first attempt at automatic ultrasound report generation. An MVP (minimum viable product) has been made available to the beta users (practicing radiologists) for testing and evaluation. We are continuously collecting feedback on our system from radiologists and continually refining the tool. We have observed that it can generate a report within 30 seconds of running a scan. 8https://aclanthology.org/2023.eacl-main.246/ ## Ethics Statement Anonymized radiology reports are used to build KGs. The data itself is anonymized; hence, our system does not reveal any patient-specific identity. ML technologies for this kind of work have the potential to gradually become the norm but will always remain as assistive tools for medical practitioners. Hence, while ML technologies of this kind often veer towards the norm, the envisioned assistive nature of this technology, where humans will always have oversight, will address this issue. We have evaluated the outputs of the KG-BART model using automated metrics, but to contextualize the results, a human evaluation metric would have been useful; however, we left this work for the future. An MVP (minimum viable product) has been made available to the beta users (practicing radiologists) for testing and evaluation. Manual evaluation will be done by considering beta users feedback. ## Acknowledgements References Richa Arora. 2014. The training and practice of radiology in india: current trends. Quantitative imaging in medicine and surgery, 4(6):449. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In *Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization*, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Mehdi Embarek and Olivier Ferret. 2008. 
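The replacement step described above is essentially a lookup over the labelled template sentences. The following is a minimal sketch of that logic, assuming the normal report is available as an ordered list of (label, sentence) pairs and that the classifier's predicted labels are already given; the label names and sentences are illustrative, not the actual template.

```python
def apply_replacement(normal_report, predicted_labels, pathological_description):
    """Replace the first candidate template sentence with the generated
    description and drop the remaining candidates.

    normal_report: list of (label, sentence) pairs, in template order.
    predicted_labels: labels returned by the multilabel classifier.
    """
    report, replaced = [], False
    for label, sentence in normal_report:
        if label in predicted_labels:
            if not replaced:              # replace only the first candidate
                report.append(pathological_description)
                replaced = True
            # remaining candidate sentences are removed (skipped)
        else:
            report.append(sentence)
    return " ".join(report)


# Illustrative template sentences and labels (not the released template).
normal_report = [
    ("liver1", "Liver is normal in size and echotexture."),
    ("liver2", "No focal lesion is seen in the liver."),
    ("gb1", "Gallbladder is normally distended."),
]
description = ("Liver is enlarged in size with a tiny anechoic thin walled cyst "
               "measuring 3 x 5 mm in right lobe of liver.")
print(apply_replacement(normal_report, {"liver1", "liver2"}, description))
```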
Learning patterns for building resources about semantic relations in the medical domain. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08). Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. *Communications of the ACM*, 51(12):68–74. Samuel G Finlayson, Paea LePendu, and Nigam H Shah. 2014. Building the graph of medicine from millions of clinical narratives. *Scientific data*, 1(1):1–9. Aicha Ghoulam, Fatiha Barigou, and Ghalem Belalem. 2015. Information extraction in the medical domain. Journal of Information Technology Research (JITR), 8(2):1–15. Curtis P Langlotz. 2006. Radlex: a new method for indexing online educational materials. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Nelson F Liu, Daniel Hershcovich, Michael Kranzlein, and Nathan Schneider. 2020. Lexical semantic recognition. *arXiv preprint arXiv:2004.15008*. Ye Liu, Yao Wan, Lifang He, Hao Peng, and S Yu Philip. 2021. Kg-bart: Knowledge graph-augmented bart for generative commonsense reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6418–6425. Samira Loveymi, Mir Hossein Dezfoulian, and Muharram Mansoorizadeh. 2021. Automatic generation of structured radiology reports for volumetric computed tomography images using question-specific deep feature extraction and learning. Journal of Medical Signals and Sensors, 11(3):194. Mark A Musen. 2015. The protégé project: a look back and a look forward. *AI matters*, 1(4):4–12. Ion Muslea et al. 1999. Extraction patterns for information extraction tasks: A survey. In The AAAI-99 workshop on machine learning for information extraction, volume 2. Orlando Florida. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Heiko Paulheim. 2017. Knowledge graph refinement: A survey of approaches and evaluation methods. *Semantic web*, 8(3):489–508. Ewoud Pons, Loes MM Braun, MG Myriam Hunink, and Jan A Kors. 2016. Natural language processing in radiology: a systematic review. *Radiology*, 279(2):329–343. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. Maya Rotmensch, Yoni Halpern, Abdulhakim Tlimat, Steven Horng, and David Sontag. 2017. Learning a health knowledge graph from electronic medical records. *Scientific reports*, 7(1):1–11. Nathan Schneider, Vivek Srikumar, Jena D Hwang, and Martha Palmer. 2015. A hierarchy with, of, and for preposition supersenses. In Proceedings of The 9th Linguistic Annotation Workshop, pages 112–123. Ricky K Taira, Stephen G Soderland, and Rex M Jakobovits. 2001. Automatic structuring of radiology free-text reports. *Radiographics*, 21(1):237–245. Hongzhi Xu, Changjian Hu, and Guoyang Shen. 2009. Discovery of dependency tree patterns for relation extraction. 
In Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2, pages 851–858. Jianbo Yuan, Haofu Liao, Rui Luo, and Jiebo Luo. 2019. Automatic radiology report generation based on multi-view image fusion and medical concept enrichment. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 721–729. Springer. Yixiao Zhang, Xiaosong Wang, Ziyue Xu, Qihang Yu, Alan Yuille, and Daguang Xu. 2020. When radiology report generation meets knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12910–12917. ## A.1.1 Spelling Correction A Appendices A.1 Data Preprocessing Multi-Word Spelling Correction A.2 Knowledge Graph Augmentation veins are normal. The text is then further divided into sentences. In corpus, there are a lot of spelling mistakes also. To correct the spellings we have used the SymSpell library. Single Word Spelling Correction We have created unigram and bigram dictionaries for corpus text. Unigram Dictionary: Dictionary of unique correct spelling words, and the frequency count for each word. Bigram Dictionary: Dictionary of the unique correct spelling of a pair of words, and the frequency count for each pair. Levenshtein algorithm is used to compute edit distance metric between two strings. Edit distance algorithm finds the correct suggestion for words in input text with words in unigram dictionary. For example, *enlaregd, billiary, radicals* are the incorrect words found in the corpus. In dictionary enlarged, biliary, radicals these correct words are present. Edit distance algorithm suggests enlarged word for *enlaregd*. Similarly *biliary* for *billiary* and *radicles* for *radicals*. To construct KGs, we use the information from the free-text radiology reports. The data preprocessor module takes radiology reports as input. Radiology reports contain Header, Findings, History, and Conclusion/Impression sections. We use simple heuristics like regular expressions to fetch only the Findings and Impression section. Furthermore, we use regular expressions to separate the organwise sentences in different datasets. We use Symspell9 APIs to correct spelling mistakes, and wordtokenization since the extracted sentences contain spelling mistakes, unwanted punctuation marks, etc. Table 8 shows the samples from the corpus, which we extract from sample reports. In the corpus, there are a lot of extra spaces and unwanted punctuation marks found. We have removed these unwanted characters from the corpus using regular expressions. For example, *Liver is enlarged in size(16.* 45cm)& normal in shape and shows raised echo reflectivity. No focal or diffuse lesion is seen. The portal and hepatic veins are normal. In the above example, there is no space between *size, (16.45cm)* and &. Also, there is no space between . and No and therefore sentence tokenization is challenging. Liver is enlarged in size ( 16.45 cm ) & normal in shape and shows raised echo reflectivity. No focal or diffuse lesion is seen. The portal and hepatic 9https://github.com/wolfgarbe/symspell - We remove mistakenly inserted spaces within a correct word Input: Liver is normal in size and reveals diffuse hypo attenuation Output: Liver is normal in size and reveals diffuse hypoattenuation - We add mistakenly omitted spaces between two correct words Input: Liver appears normal in size and reveals mild generalized increasedparenchymal reflectivity. 
Output: Liver appears normal in size and reveals mild generalized increased parenchymal reflectivity. Table 8 shows the organwise samples from text corpus after data preprocessing. Preliminary KGs enhanced using triplets extracted by IE module. Steps involved in KG augmentation are explained below: | Samples from organ-wise corpus | | | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------| | Liver the liver is normal in size and moderate diffuse increase in hepatic echogenicity. no focal lesion is seen. intra-hepatic biliary radicals are not dilated. portal vein is normal. grade ii fatty liver. liver is normal in size and echotexture show no focal areas of altered echotexture or mass lesion. no intra-hepic biliary radicals dilatation seen. Portal vein appears normal. | Spleen spleen is normal in size and echotexture. spleen and Pancreas are normal in size and echotexture. no focal mass lesion noted. | Pancreas pancreas is normal in size and echotexture. spleen and Pancreas are normal in size and echotexture. No focal mass lesion noted. | | Kidney right kidney measures 10.2 cm in long axis. left kidney measures 10.3 cm in long axis. both kidneys are normal in size and echotexture. no evidence of hydronephrosis or calculi seen. both the Kidneys are normal in size and echotexture. corticomedullary differentiation well seen. no hydronephrosis or stones seen. right kidney measures about 11.2 x 4.0 cm. parenchymal thickness is 1.4 cms. left kidney measures about 10.5 x 4.5 cm. parenchymal thickness is 1.3 cms. | Gallbladder the gallbladder is adequately distended. gallbladder calculus noted measuring 7 mm with no pericholecystic fluid. wall thickness appears normal. the cbd is normal. gallstone with no pericholecystic fluid. gallbladder shows normal distention, no evidence of stones seen or wall. Urinary bladder urinary bladder is normal with no abnormal internal echoes and wall of normal thickness. urinary bladder appears normal. no stone, mass or wall thickening noted. | | | Uterus uterus is anteverted and normal in size and echotexture. the endometrial echo is midline and regular. endometrial thickness is dim mm. uterus is normal in size of normal echotexture. it measures about 6.7 x 3.1 x 4.5 cms in size. no focal mass seen. endometrial echoes are normal. | Ovary right ovary measures 1.2 x 2.3 cm. left ovary measures 2.2 x 1.2 cm. both the ovaries are normal. no adnexal mass noted. right ovary measures 2.1 x 1.4 cm. left ovary measures 2.0 x 1.3 cm. 
| Adnexa no adnexal abnormality seen. | - **Step 1:** Triplets are stored in the file against its input sentence. For example, sentence from corpus is, *A lesion of increased echotexture in* the right lobe of liver. Triplets extracted corresponding to above sentence are, (increased, ModifierOf, echotexture), (*echotexture, PropertyOf, lesion*), (*right lobe, PartOf, liver*), and (*lesion, FoundIn, right lobe*). - **Step 2:** Construct dynamic KG for sentence triplets. Figure 9 shows the dynamic KG constructed for sentence triplets. - **Step 3:** Find its appropriate matched path in our already built preliminary (static) KG. Figure 10 shows the entities from dynamic KG path matched with static KG path. - **Step 4:** If a triple is missing in the static KG, then we add a new triple in the static KG. Here in above example triple (*increased, ModifierOf, echotexture*) is missing in static KG. Hence, we will add this triple in static KG. Figure 11 shows the updated static KG. This is how we update the static KG according to our dynamic KG triplets. We repeat above steps for all sentences in our corpus. In a static KG, we have multiple instances of the same observations, same properties, and same modifiers. For example, *acute hepatitis* reveals decreased echogenicity of the *liver* and *chronic* liver disease reveals *increased echogenicity* of the liver. Here for both the findings *echogenicity* is the related observation but their related descriptors/modifiers are not same. *Decreased* is the echogenicity modifier associated with *acute hepatitis* and *increased* is the *echogenicity* modifier associated with *chronic liver disease*. Therefore we have created different instances of observation with name *echogenicity* for both the findings. Hence, we use a path from the dynamic KG to find the appropriate entity with identical names from the static KG. In static KGs, we have arranged findings in such a way that its parents represent the anatomical location and its children represent the properties or states of organs related to that finding. Figure 5 shows the augmented KG of the Liver. ![10_image_0.png](10_image_0.png) ![10_image_2.png](10_image_2.png) We rely on Protégé10 (Musen, 2015) the wellknown terminology and ontology building and maintenance tool. Extracted triplets are submitted in a CSV file along with classes of entities and relation types. The triplets are then ingested by Protégé to create the KG. We have created a rule-based system to find the class of each entity from our constructed dictionary. For example, given the triplet lesion-FoundIn-liver, the class-name for *lesion* is Finding and that for liver is *Anatomy*. The following are the stages for all triplets (*entity1, relation,* entity2): i) Protégé creates instances of corresponding classes for entity1 and entity2. ii) Protégé adds the relation between two entities. Figure 5 shows the KG of the liver. ![10_image_1.png](10_image_1.png) ## A.3 Datasets And Examples Table 9 shows the triplets extracted by OpenIE tool for given input sentences. As shown in the table 9, for the sentence 1, OpenIE could not find the relation between *calculus* and *middle calyx*, middle calyx and *right kidney* and for the sentence 2, OpenIE does not consider the shape, *location*, and *cortical echogenicity*. ## A.4 Training Details: Kg-Bart Table 10 shows the samples from the parallel (impression and findings) dataset. We have implemented our own algorithm for KG-grounding task. 
We use pre-trained KGBART11 model which was trained for commonsense reasoning on ConceptNet KG and commonsense dataset. We fine tune this model on radiology 11https://github.com/yeliu918/KG-BART ![11_image_0.png](11_image_0.png) | Input Sentence | Triplets Extracted Using OpenIE | | |------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----| | A 5 mm calculus is noted in an upper calyx and a 4 x 3 mm calculus is noted in a middle calyx of right kidney. | (mm calculus, is noted in, upper calyx) (mm calculus, is, noted) (5 mm calculus, is noted in, calyx) (5 mm calculus, is noted in, upper calyx) (5 mm calculus, is, noted) (mm calculus, is noted in, calyx) | | | Right | kidney | is | | normal in size 9.6 x 4.0 cm, shape, location and cortical echogenicity. | (kidney, is, normal) (right kidney, is, normal) (right kidney, is normal in, size) (kidney, is normal in, size) | | text dataset that we have constructed. We use bytepair encoding for tokenization with a maximum length of 32 for the encoder and 64 for the decoder. We set learning rate to 0.00001 and used AdamW with 1 = 0.9, 2 = 0.98 for optimization. We set the batch size to 32. We trained the KG-BART for 15 epochs, and the gradients are accumulated every 6 steps. We apply dropout with a probability 0.1 to avoid over-fitting. We use beam search with beam size 5 and length penalty with factor 0.6 while inferencing. The training time took 7 hrs on a single NVIDIA GeForce GTX 1080 Ti GPU with 11 GB GDDR5X memory. ## A.5 **Implementation Details Of Span Identifier** We use a BERT-based multilabel text classifier to identify the normal sentences. The last layer uses a sigmoid activation function to generate the probability of a sample belonging to the corresponding class. We used pretrained BERT weights to initialize our model. There are a total of 24 labels, according to the number of nodes in the last layer change. We train all the models on a DGX A100-SXM-80GB GPU server. For all transformer based models we use hugging face transformer libraries.12 A.6 Examples of Constructed KGs for Abdominal Organs A.7 Example of Normal Report and Patient Specific Report | Impression | Pathological Description | |------------------------------------------------------------------------|--------------------------------------------------------------------------| | liver is enlarged in size with normal echopattern a tiny anechoic thin | | | Hepatomegaly with a tiny clear cyst seen in right lobe of | | | liver measuring 3 x 5 mm. | walled cyst measuring 3 x 5 mm in right lobe of liver. | | Right renal hemorrhagic cyst at upper pole measuring 9.4 | A cortical cyst is noted at upper pole of right kidney measuring 9.4 x 8 | | mm showing internal mobile echoes. | | | x 8 mm. | | | Chronic pancreatitis with thick walled pseudo pancreatic | Pancreas is slightly small, reveals thin inhomogenous paranchyma. | | cyst measuring 8.6 x 6.4 cm vol 1.1 mm noted in region | the pancreatic duct is dilated measuring 1.1 mm. multiple intraductal | | of the tail of pancreas. | calculi seen. a thick walled 1.1 mm cyst measuring 8.6 x 6.4 cm vol | | 1.1 mm in the region of the tail of pancreas. | | ![12_image_0.png](12_image_0.png) Table 10: Samples from dataset constructed using radiology report corpus. ![12_image_1.png](12_image_1.png) ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png)
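As a rough sketch of the span-identifier setup described in Appendix A.5 (a BERT encoder with a 24-way sigmoid output head built on the Hugging Face transformers library), one possible instantiation is shown below; the checkpoint name, the 0.5 threshold, and the example sentence are our assumptions, not the exact configuration used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_LABELS = 24  # one label per "normal" template sentence
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss when fine-tuning
)

# (The classification head must first be fine-tuned on the
#  pathological-description -> label data described above.)
description = "Liver is enlarged in size with a tiny cyst in the right lobe."
inputs = tokenizer(description, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]
predicted = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()  # candidate sentence indices
print(predicted)
```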
hashimoto-etal-2023-hunt
Hunt for Buried Treasures: Extracting Unclaimed Embodiments from Patent Specifications
https://aclanthology.org/2023.acl-industry.3
Patent applicants write patent specifications that describe embodiments of inventions. Some embodiments are claimed for a patent, while others may be unclaimed due to strategic considerations. Unclaimed embodiments may be extracted by applicants later and claimed in continuing applications to gain advantages over competitors. Despite being essential for corporate intellectual property (IP) strategies, unclaimed embodiment extraction is conducted manually, and little research has been conducted on its automation. This paper presents a novel task of unclaimed embodiment extraction (UEE) and a novel dataset for the task. Our experiments with Transformer-based models demonstrated that the task was challenging as it required conducting natural language inference on patent specifications, which consisted of technical, long, syntactically and semantically involved sentences. We release the dataset and code to foster this new area of research.
# Hunt For Buried Treasures: Extracting Unclaimed Embodiments From Patent Specifications Chikara Hashimoto∗ Gautam Kumar∗ Shuichiro Hashimoto∗ **Jun Suzuki**§ ∗Rakuten Institute of Technology, Rakuten Group, Inc. § Center for Data-driven Science and Artificial Intelligence, Tohoku University chikara.hashimoto@rakuten.com ## Abstract Patent applicants write patent specifications that describe embodiments of inventions. Some embodiments are claimed for a patent, while others may be unclaimed due to strategic considerations. Unclaimed embodiments may be extracted by applicants later and claimed in continuing applications to gain advantages over competitors. Despite being essential for corporate intellectual property (IP) strategies, unclaimed embodiment extraction is conducted manually, and little research has been conducted on its automation. This paper presents a novel task of unclaimed embodiment extraction (UEE) and a novel dataset for the task. Our experiments with Transformer-based models demonstrated that the task was challenging as it required conducting natural language inference on patent specifications, which consisted of technical, long, syntactically and semantically involved sentences. We release the dataset and code to foster this new area of research.1 ## 1 Introduction Patents provide inventors the right to exclude others from using their inventions in exchange for disclosing how to make and use inventions by writing patent specifications. Patents have thus incentivized innovation and benefited industries. Given the increasing number of patent applications even during the COVID-19 pandemic (WIPO, 2022b), it is important to streamline patent application processes with technologies. A patent specification describes an invention (Figure 1) by specifying one or more ways of embodying the invention, so that people skilled in the art can make and use it. A patent specification also contains claims that specify which embodiment applicants want to patent by stating the technical features necessary for the embodiment. Here, an invention refers to a mental construct inside the 1https://github.com/rakutentech/UEE_ ACL23 Patent Specification ![0_image_0.png](0_image_0.png) Claims Description ![0_image_1.png](0_image_1.png) mind of the inventor, while the *embodiment* of the invention is a physical form of the invention and claims protect the embodiments (WIPO, 2022a). A patent specification may describe a variety of embodiments, some of which may be unclaimed because claiming too diverse embodiments in a patent application may violate the *unity of invention*, a requirement for a patent application to relate to one invention only or to a group of inventions so linked as to form a single general inventive concept (USPTO, 2020b). *Continuing application* could be utilized later to claim those unclaimed embodiments in the prior patent application (the *parent* application). A continuing application can claim any embodiments if they are written in its parent's description. Moreover, the filing date of continuing application is the same as its parent's, even if it is filed years after the parent. Applicants can therefore utilize continuing applications strategically by, for instance, writing as many diverse embodiments as possible in the parent application and filing a continuing application to claim unclaimed embodiments in the parent. If the continuing application does not exhaust its parent's embodiments, applicants may have further continuing applications. 
In so doing, applicants can adapt the claims of continuing application to new products and services of their company, and even new products and services of their competitors, enhancing their industrial competitiveness. Continuing application requires extracting unclaimed embodiments from a patent specification. This is tedious as it requires understanding a wide variety of embodiments that are strategically arranged in the patent specification, a legal, technical document that may consist of thousands of tokens (Tab 1). Unclaimed Embodiment Extraction (UEE) has nonetheless been conducted manually without any technological support, and little research has been conducted on UEE. This paper introduces the novel task of UEE (Figure 1) and the first publicly available dataset for UEE. Besides its practical utility, UEE poses a new NLP challenge as it involves two decisions to make (§2), one of which, i.e. *decision (ii)*, requires matching embodiment text in the description with claims to see if the embodiment has been claimed. Decision (ii) can be seen as a real-world natural language inference (NLI) (Bowman et al., 2015), where the hypothesis is a description paragraph and the premise is a set of claims. Although there have been studies on NLI for real-world applications (Holzenberger et al., 2020; Koreeda and Manning, 2021), decision (ii) involves a novel real-world NLI due to the following challenge: The hypothesis and the premise may consist of multiple long sentences which are written in *patentese* and full of technical terms in the target domain and whose syntactic and semantic structures are hard to recognize for non-IP specialists (Ferraro et al., 2014). Although our UEE dataset has been created based on Japanese patents, extracting unclaimed embodiments from patent specifications is conducted in other countries such as the U.S. This paper gives examples in English for ease of explanation. See Appendices for Japanese examples. Our **contributions** are as follows: 1. We introduce UEE, a novel, real-world NLP challenge. 2. We create and release the first dataset for UEE. 3. We conducted UEE experiments to demonstrate its difficulty. ![1_image_0.png](1_image_0.png) 4. We release code for reproducibility. ## 2 Task Given a patent specification that comprises a set of claims and a set of description paragraphs about an invention, we want to determine whether each paragraph in the description describes any embodiment of the invention that has not been claimed in the claims (Figure 1). This involves two decisions: (i) Does a given paragraph describe an embodiment of the invention? (ii) Has the embodiment in a paragraph, if any, been claimed in the claims already? We thus need both a paragraph and a set of claims to determine whether the paragraph contains an unclaimed embodiment. Formally, the task is defined as follows (Figure 2). Suppose we have I patent specifications and the i-th patent specification has Ji description paragraphs. Let pi,ji be the ji-th description paragraph in the i-th specification, Ci be a set of claims in the i-th specification, and yi,ji be a binary label where yi,ji = 1 if pi,ji describes any embodiment that is not claimed in Ci and yi,ji = 0 otherwise; here, i = {1*, ..., I*} and ji = {1*, ..., J*i}. Given N =Pi Jitraining instances, our goal is to learn a function f(pi,ji , Ci) → yi,ji . The task involves the two decisions (i) and (ii) and a UEE model may make the two decisions separately. 
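To make the formulation concrete, one instance and the target function can be written down as the following sketch; the type and names below are ours, introduced only for illustration, although the fields mirror the JSON format described in Appendix A.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UEEInstance:
    """One instance of the task: a description paragraph p_{i,j_i}, the claims
    C_i of the same specification, and a binary label y_{i,j_i}."""
    paragraph: str       # p_{i,j_i}: one numbered description paragraph
    claims: List[str]    # C_i: all claim texts of the i-th specification
    label: int           # y_{i,j_i}: 1 iff the paragraph describes an unclaimed embodiment

# The goal is to learn f(p_{i,j_i}, C_i) -> y_{i,j_i}; a model may implement f
# in a single step or as a pipeline over decisions (i) and (ii).
UEEModel = Callable[[str, List[str]], int]
```

Whether f is realized as one classifier or as a two-stage pipeline is a modeling choice, as discussed next.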
Our UEE baseline models in §4.1, nonetheless, make the two decisions in a single step, as it is more straightforward. We will explore different architectures for better utilization of the nature of the task (involving the two decisions) in the future. In this study, we chose a paragraph as the unit of embodiment description, because in the patent applications, paragraphs are encouraged to be numbered to serve as the unit of work and indeed form a ![2_image_0.png](2_image_0.png) $$\begin{array}{l}{{\texttt{p o.g o.j p/s y s t e m/l a w}}}\\ {{\texttt{l o a d.h t m l(J a p a n e s)}}}\end{array}$$ meaningful unit. Other options would be a phrase, clause, or sentence. We will identify the best unit of embodiment description in the future. ## 3 Dataset 3.1 Data Source We acquired source patent data from the Japan Patent Office (JPO) via their web form.2 The data from JPO contained Japanese patent specifications from 1993 to 2022. We obtained both parent and continuing patent specifications from this data. We created the dataset from patent specifications that had their corresponding continuing patents. ## 3.2 Annotation Method As the task involves two decisions, (i) and (ii) in §2, our annotation method is based on the two as illustrated in Figure 3. Specifically, we label a paragraph as negative if it has no embodiment (See paragraph 0008 in Figure 3), or if the embodiment is claimed in the parent patent to which the paragraph belongs (0004 in Figure 3). For positive annotation, we used the continuing patent generated from the patent to which the target paragraph belonged. If a paragraph describes an embodiment that is claimed in the continuing patent but not in the parent, the paragraph is labeled as positive (0034 in Figure 3). Although we can identify unclaimed embodiments from the parent patent, without relying on the continuing patent, it helps us double-check positive paragraphs. To use continuing patents, we collected patent specifications with corresponding continuing patents from the JPO data and made pairs of parent and continuing patents as in Figure 3. 2https://www.jpo.go.jp/system/laws/ sesaku/data/download.html (Japanese) We restricted target patents to those with International Patent Classification (IPC) codes3that met our business needs; specifically, we mainly chose IPC codes for digital data processing (G06F), information and communication technology (G06Q), and aeroplanes (B64C). IPC is used in over 100 countries to indicate the subject of the invention.4 We conducted manual annotation on pairs of parent and continuing patents collected in this way. On top of positive and negative labels, we leave notes that clarify reasons for annotators' labeling decisions (e.g., CLAIM1 and NONE in Figure 3). This is because labeling decisions would be based on patent practitioners' expertise, which may be incomprehensible to researchers, and we expect the notes to improve annotated labels' interpretability. A negative paragraph is given the claim ID of the parent patent as the note if the paragraph's embodiment is claimed in the parent's claims. A negative paragraph is given the note NONE if it has no embodiment. A positive paragraph is given the claim ID of the continuing patent if its embodiment is claimed in the continuing patent's claims. We use the notes for experiments of decision (i) (§4.2) and decision (ii) (§4.3), too. ## 3.3 Annotators Two experienced patent practitioners who were native speakers of Japanese were employed as annotators. 
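For clarity, the labeling scheme they applied (§3.2) can be condensed into the following sketch; the two callables stand in for the annotators' expert judgments and are not automated components of this work.

```python
def label_paragraph(paragraph, parent_claims, continuing_claims,
                    describes_embodiment, find_claiming_claim):
    """Sketch of the §3.2 annotation rule; returns (label, note)."""
    if not describes_embodiment(paragraph):
        return "NEGATIVE", "NONE"          # e.g. paragraph 0008 in Figure 3
    parent_id = find_claiming_claim(paragraph, parent_claims)
    if parent_id is not None:
        return "NEGATIVE", parent_id       # already claimed in the parent, e.g. paragraph 0004
    cont_id = find_claiming_claim(paragraph, continuing_claims)
    return "POSITIVE", cont_id             # unclaimed in the parent, e.g. paragraph 0034
```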
We split the 11,951 instances (each consisting of a description paragraph and a set of claims) into two separate sets. Each annotator was assigned to only one set; no instance was annotated by both of them due to our budget constraints. We nonetheless measured inter-annotator agree-3https://ipcpub.wipo.int/ 4https://en.wikipedia.org/wiki/ International_Patent_Classification | Total numbers | | |-------------------------------|----------------| | Labeled instances | 11,951 | | Parent patents | 971 | | Continuing patents | 1,022 | | Average numbers | | | Tokens per desc. paragraph | 88.73 (77.66) | | Sentences per desc. paragraph | 3.67 (1.96) | | Tokens per desc. sentence | 23.59 (21.61) | | Claims per parent patent | 11.37 (6.11) | | Tokens per claim | 106.44 (95.12) | | Label distribution | | | POSITIVE:CLAIM | 4,564 (38.19%) | | NEGATIVE:CLAIM | 1,619 (13.55%) | | NEGATIVE:NONE | 5,768 (48.26%) | Table 1: Statistics of the UEE dataset. In "Average numbers" the figures in parenthesis are standard deviations. The "desc." stands for "description." The "CLAIM" and "NONE" are the notes described in §3.2. ment by asking another experienced patent practitioner (a native speaker of Japanese) to annotate 309 instances that the above two annotators worked on after removing their labels and notes. As a result, Cohen's kappa (Cohen, 1960) was 0.465, indicating moderate agreement (Landis and Koch, 1977). According to the kappa score, experts may disagree occasionally. The question is then how much experts' disagreement affects the performance, as pointed out by an anonymous reviewer. We will explore this question in the future. ## 3.4 The Dataset The resulting UEE dataset has 11,951 instances. We use 60% for training, 20% for development, and 20% for testing. Table 1 shows the statistics of the dataset. We used the tokenizer of our RoBERTa in §4.1 to count tokens. In "Label distribution" in Table 1, "**POSITIVE**:CLAIM" 5refers to a positive instance. The instance with paragraph 0034 in Figure 3 has this label. "**NEGATIVE**:CLAIM" means a negative instance with an embodiment that has already been claimed in the parent patent and hence with a note of the corresponding claim ID. "**NEGATIVE**:NONE" is a negative instance without embodiment. The instances with paragraphs 0004 and 0008 in Figure 3 are examples of these two types of negatives, respectively. In Appendix A, we show an example data instance of the UEE dataset in Japanese and English 5We may omit claim IDs of the notes, e.g. "1", hereafter. Experiment Model F1 UEE Baselines RoBERTa **0.8670** (0.0034) Longformer 0.7247 (0.0335) Decision (i) RoBERTa 0.9259 (0.0057) Decision (ii) RoBuee **0.7218** (0.0110) RoB*jsnli* A 0.4029 (0.1434) RoB*jsnli* B 0.4384 (0.2472) ## For Illustration Purposes. 4 Experiments We conducted experiments of UEE with baseline models based on Transformer (Vaswani et al., 2017) to see the difficulty of the task (§4.1). We also conducted experiments of making the decisions (i) and (ii) in §2 as independent tasks for a better understanding of the task (§4.2 and §4.3). These experimental results show the following: (A) Baseline Transformer-based models deliver mediocre performances (§4.1). (B) Despite patent specifications' being long, UEE models do not necessarily have to deal with long documents (§4.1). (C) The bottleneck in UEE is decision (ii) (§4.3). 
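Returning briefly to the agreement analysis in §3.3, Cohen's kappa on the 309 doubly-annotated instances can be computed with a standard library call; the label lists below are placeholders rather than the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score

# Placeholder binary labels standing in for the two annotation passes
# over the 309 doubly-annotated instances.
labels_first_pass = [1, 0, 0, 1, 1, 0]
labels_second_pass = [1, 0, 1, 1, 0, 0]

kappa = cohen_kappa_score(labels_first_pass, labels_second_pass)
print(f"Cohen's kappa: {kappa:.3f}")  # the paper reports 0.465 on the real annotations
```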
## 4.1 Uee Baselines We evaluated RoBERTa (Liu et al., 2019) and Longformer (Beltagy et al., 2020) for the UEE task, because these are ones of standard Transformer-based models, and Longformer is known to be able to deal with long documents such as patent specifications. However, we do not claim these are optimal models for the task; we will explore better models in the future. Our RoBERTa was built from a base-sized one which we call Rinna RoBERTa6and had been pre-trained on Japanese CC-100 (Conneau et al., 6https://huggingface.co/rinna/ japanese-roberta-base 2020) and Japanese Wikipedia. The maximum sequence length was 512. We fine-tuned Rinna RoBERTa on the UEE training set for ten epochs with the training batch size, the warm-up steps, and the learning rate being set to 128, 100, and 5e-5, respectively. We used AdamW (Loshchilov and Hutter, 2019) for optimization. We will describe the hyper-parameter settings of models we experimented with in Appendix B, hereafter. Our Longformer was converted from Rinna RoBERTa following Beltagy et al. (2020).7 See Appendix B.2 for its hyper-parameter setting. The first two rows in Tab 2 labeled with §4.1 show F1 of the two baseline models on the UEE test set. We fine-tuned and evaluated each model five times. The reported figures are the mean and standard deviation obtained from the five runs. The result indicates that our baselines have nonnegligible room for improvement. Given Transformers' successes in many tasks (e.g., a basesized RoBERTa fine-tuned and evaluated on JSNLI (Yoshikoshi et al., 2020), a Japanese NLI dataset, delivers the F1 of 0.93 (Yanaka and Mineshima, 2022)), we think that UEE is challenging. The result also indicates that RoBERTa outperforms Longformer. Actually, we expected the opposite result, because the input to UEE models, i.e. a pair of a description paragraph and a set of claims, tends to be long; the average number of tokens in a description paragraph is 88.73 and that of tokens in a set of claims is 1210.22 (= 106.44 × 11.37), as Table 1 shows. We suspect that this unexpected result is due to the fact that, in the UEE dataset, more than 70% of embodiments in description paragraphs with NEG-**ATIVE**:CLAIM are claimed in the first three claims. Models then do not always have to read through all the claims. This is probably because of the preferred order of claims: Claims should preferably be arranged in order of scope so that the first claim presented is the least restrictive (USPTO, 2020a); i.e. the most general claims should come first. ## 4.2 Decision (I) We conducted experiments of making only the decisions (i), i.e. whether a paragraph described any embodiment, to see how difficult it was. To train and test a model for decision (i), we created training, development, and test sets for decision (i) from the corresponding set of the UEE dataset as follows.8 We regarded the instances in the UEE dataset whose note is NONE as negative and the rest as positive, because note NONE indicates the corresponding paragraph has no embodiment as described in §3.2. The positive-negative ratio was then about 52:48 as Tab 1 indicates. We built a model from Rinna RoBERTa (§4.1) again for this experiment. The experimental protocol was the same as our RoBERTa in §4.1. See Appendix B.3 for its hyper-parameter setting. The third row in Tab 2 labeled with §4.2 shows F1 of the model on the test set. The reported figures are the mean and standard deviation obtained from five runs of fine-tuning and evaluation. 
This result indicates that decision (i) is a modest task. ## 4.3 Decision (Ii) We also conducted experiments to make the decision (ii), i.e. whether the embodiment of a paragraph has been claimed, as an independent task. As discussed in §1, decision (ii) can be seen as an NLI task where the hypothesis is a paragraph and the premise is a set of claims. For training and test of models for decision (ii), in order to focus on its NLI aspect, we ignored UEE dataset instances with **NEGATIVE**:NONE. This is because it is obvious for a paragraph without any embodiment to be unclaimed, i.e. not entailed by a set of claims. Besides, we used not only parent patents but also their continuing patents in the UEE dataset, as it is straightforward to use them for decision (ii). Accordingly, we created training, development, and test sets for decision (ii) from the corresponding set of the UEE dataset as follows. We generated positive instances for decision (ii) from the UEE dataset by pairing a paragraph of **POSITIVE**:CLAIM and a set of claims in the *continuing patent*; e.g. the pair of paragraph 0034 and the continuing patent's claims in Figure 3. We also generated decision (ii) positives by pairing a paragraph of NEGATIVE:CLAIM and a set of claims in the *parent patent*; e.g. paragraph 0004 and the parent patent's claims in Figure 3. Likewise, decision (ii) negatives were generated by pairing a paragraph of **POSITIVE**:CLAIM and a set of claims in the *parent patent* and also by pairing a paragraph of **NEGATIVE**:CLAIM and a set of claims in the *continuing patent*. In Figure 3, pair of 8For dataset creation for (i) and (ii), see: Decision1/ src/dataset.py and Decision2/src/dataset. py in github.com/rakutentech/UEE_ACL23. ![5_image_0.png](5_image_0.png) paragraph 0034 and the parent patent's claims and that of paragraph 0004 and the continuing patent's claims are decision (ii) negatives. The positive-negative ratio was then 50:50. We fine-tuned Rinna RoBERTa with this training set. We call the resulting model RoBERTauee. The experimental protocol was the same as the previous experiments (§4.1 and §4.2). Since decision (ii) is an NLI task, we also finetuned Rinna RoBERTa using the JSNLI dataset9 for comparison. We call it RoBERTa*jsnli*. We used the same test set as RoBERTauee for evaluation. RoBERTa*jsnli*'s high performance for Japanese NLI tasks has been shown in the literature (Yanaka and Mineshima, 2022). Note that while JSNLI is a ternary classification task, i.e. entailment, *contradiction*, and *neutral*, decision (ii) is binary, i.e. *positive* and *negative*. We, therefore, need to align JSNLI's labels with our binary labels. We experimented with two label alignment conditions: Condition A was to align entailment with *positive* and *contradiction* and *neutral* with *negative*. Condition B was the same as Condition A, except that we ignored *contradiction*; only *neutral* was aligned with *negative*. This is reasonable because, even if an embodiment is not claimed in a set of claims, it does not necessarily imply that the two pieces of text are contradictory. For the hyper-parameter setting and fine-tuning of RoBERTa*jsnli*, refer to Appendix B.5. The last three rows in Tab 2 labeled with §4.3 show F1 of the models on the test set. Although RoBERTauee was the best among them, it has a 9We used train_w_filtering.tsv of JSNLI 1.1. large room for improvement. This indicates that decision (ii) is difficult and is the bottleneck for UEE. 
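For reference, the instance generation for decision (ii) described above amounts to the following sketch. The field names follow the JSON format in Appendix A, while the label and note encodings shown here are illustrative and may differ from the released Decision2/src/dataset.py.

```python
def decision2_pairs(instance):
    """Turn one UEE dataset entry into decision (ii) NLI pairs (paragraph, claims, label),
    where label 1 means the paragraph's embodiment is claimed in that claim set."""
    para, note = instance["paraTxt"], instance["note"]
    parent_claims, cont_claims = instance["claims"], instance["contClaims"]
    if note == "NONE":                      # no embodiment: excluded from decision (ii)
        return []
    if instance["label"] == "POSITIVE":     # embodiment claimed only in the continuing patent
        return [(para, cont_claims, 1),     # claimed by the continuing patent's claims
                (para, parent_claims, 0)]   # not claimed by the parent's claims
    return [(para, parent_claims, 1),       # NEGATIVE:CLAIM: claimed by the parent's claims
            (para, cont_claims, 0)]         # not claimed by the continuing patent's claims
```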
Looking closely at the data revealed that a tiny fraction of text in a set of claims, which usually is a long document, tended to correspond to an embodiment (Figure 4), because each claim may consist of various technical features from more than one description paragraph. This would make decision (ii) challenging, together with the other factors discussed in §1; i.e. patent specifications consisting of technical, long, syntactically and semantically involved sentences written in *patentese*. RoBERTa*jsnli* delivered low performances under both conditions, probably because of the domain discrepancy between the NLI task in JSNLI and UEE. We think this result shows the necessity of a dedicated dataset for UEE. ## 4.4 Discussion The baselines delivered mediocre performances for UEE. We observed that decision (ii) makes UEE difficult. Nevertheless, we believe UEE is a worthy challenge, as it would eventually contribute to the industry by streamlining patent application processes. We also believe that, to this end, utilizing the outcomes of the current study would help. ## 5 Related Work Patent Document Processing NLP systems for real-world applications in, for instance, ecommerce (Malmasi et al., 2021, 2022), medical (Rumshisky et al., 2020; Naumann et al., 2022), and legal areas (Aletras et al., 2021; Preotiuc-Pietro et al., 2022) has gained attention, probably because it has become more feasible to serve practical needs thanks to the success of Transformer-based models and pre-training methods (Devlin et al., 2019). Patent document processing has also been studied extensively (Aras et al., 2019; Krestel et al., 2021, 2022). Its most studied areas are machine translation (Tsujii and Yokoyama, 2007; Utsuro et al., 2019; Nakazawa et al., 2021, 2022) and information retrieval (Tait et al., 2008, 2009, 2010; Risch et al., 2020). There have recently been studies that would directly facilitate patent applications. Sharma et al. (2019) created a dataset for summarizing patents and proposed baselines. Tonguz et al. (2021) proposed a method for claim generation formulated as text summarization. Aslanyan and Wetherbee (2022) created a dataset for phrasal matching for better patent similarity measurement. Gao et al. (2022) proposed a method for predicting whether a given patent application would be approved. However, to the best of our knowledge, no study has addressed UEE; we are the first to do that. Natural Language Inference NLI has showcased the comprehension ability of NLP systems (Wang et al., 2019; Nie et al., 2020; Poliak, 2020) and provided datasets for their training (Conneau et al., 2017; Reimers and Gurevych, 2019). Introducing diverse NLI tasks would then push the boundary of NLP. Recent studies have introduced new NLI tasks targeting real-world applications (Romanov and Shivade, 2018; Holzenberger et al., 2020; Koreeda and Manning, 2021; Sadat and Caragea, 2022). Our decision (ii) is a novel realworld NLI for patents that poses a new challenge. ## 6 Conclusion We introduced UEE, a novel NLP challenge, and created a corresponding dataset. Our experiments showed that UEE was challenging due to the difficulty of making the decision (ii). We hope that the research community will address this challenge by utilizing the UEE dataset and code that we created and released. Future Work We have not explored better architectures for the task extensively. 
Although RoBERTa performed reasonably well, more capable, human-instruction-aligned architectures have been developed recently (Bahrini et al., 2023; OpenAI, 2023). We will explore the capability of these more recent large language models for the task. ## 7 Ethics Statement The scope of this work is to introduce NLP technologies to the continuing patent application process to make it more efficient. The outcomes from our work would therefore have an industrial impact through enabling organizations to file more continuing patents with less time. There would then be a risk that, if our technologies were available to only particular organizations, fair competitions could not be ensured. We therefore decided to release the dataset and code to the public. This work was intended to be beneficial to patentrelated processes and studies in artificial intelligence, machine learning, and NLP. The outcomes from this work should therefore be used only for these purposes. The coverage of the UEE dataset in terms of the IPC subclass, language, and countries and regions are limited. Care must be taken when using this dataset, accordingly. The dataset does not contain any personal information, as it has been created from publicly available patent specifications. We nonetheless took special care to check if any personal information was included in the dataset by accident when creating the dataset. All the data we used in this work are publicly available. The pre-trained language model that we used, i.e. RoBERTa, is also publicly available. Our Longformer was converted from the RoBERTa with a method that was also known to the public. Besides, since we have released all the necessary code and dataset along with the paper, all the experimental results in the paper are reproducible. Regarding the hiring of the annotators (the expert patent practitioners), we negotiated with their company in advance to fairly determine the charge, which was the equivalent of the cost of hiring expert patent practitioners for patent search. We explained to the human annotators about the purpose of the data annotation and how it would be used in advance of the annotation. Regarding the compute in our experiments, we executed 30 fine-tuning processes, which took 57 hours in a single Nvidia A100 GPU in total. ## 8 Acknowledgment We are deeply grateful to all those who played a role in the success of this project. We would like to thank Yoshimasa Utsumi, Kazutaka Ohara, Takeshi Imamura, Hsuan-Yu Kuo, Takako Yoshida, Vijay Daultani, Colin Ian Rowat, and Aztec Co., Ltd. for their invaluable input and support throughout the research process. Their insights and expertise were instrumental in shaping the direction of this project. ## References Nikolaos Aletras, Ion Androutsopoulos, Leslie Barrett, Catalina Goanta, and Daniel Preotiuc-Pietro, editors. 2021. *Proceedings of the Natural Legal Language* Processing Workshop 2021. Association for Computational Linguistics, Punta Cana, Dominican Republic. Hidir Aras, Linda Andersson, Florina Piroi, Jianan Hou, Juan Carlos Gomez, Tobias Fink, Allan Hanbury, Benjamin Meindl, Ingrid Ott, Ulrich Theodor Zierahn, Tatyana Skripnikova, Anna Weihaar, Sebastian Blank, Hans-Peter Zorn, Farag Saad, Stefan Helfrich, and Mustafa Sofean. 2019. 1st Workshop on Patent Text Mining and Semantic Technologies (PatentSemTech 2019). Grigor Aslanyan and Ian Wetherbee. 2022. Patents Phrase to Phrase Semantic Matching Dataset. In 3rd Workshop on Patent Text Mining and Semantic Technologies, PatentSemTech. 
Aram Bahrini, Mohammadsadra Khamoshifar, Hossein Abbasimehr, Robert J. Riggs, Maryam Esmaeili, Rastin Mastali Majdabadkohne, and Morteza Pasehvar. 2023. ChatGPT: Applications, Opportunities, and Threats. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The Long-Document Transformer. *arXiv:2004.05150*. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. *Educational and Psychological Measurement*, 20(1):37–46. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Gabriela Ferraro, Hanna Suominen, and Jaume Nualart. 2014. Segmentation of patent claims for improving their readability. In *Proceedings of the 3rd Workshop on Predicting and Improving Text Readability* for Target Reader Populations (PITR), pages 66–73, Gothenburg, Sweden. Association for Computational Linguistics. Xiaochen Gao, Zhaoyi Hou, Yifei Ning, Kewen Zhao, Beilei He, Jingbo Shang, and Vish Krishnan. 2022. Towards comprehensive patent approval predictions:beyond traditional document classification. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pages 349–372, Dublin, Ireland. Association for Computational Linguistics. Nils Holzenberger, Andrew Blair-Stanek, and Benjamin Van Durme. 2020. A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering. In *Proceedings of the 2020 Natural Legal* Language Processing (NLLP) Workshop. Yuta Koreeda and Christopher Manning. 2021. ContractNLI: A dataset for document-level natural language inference for contracts. In *Findings of the* Association for Computational Linguistics: EMNLP 2021, pages 1907–1919, Punta Cana, Dominican Republic. Association for Computational Linguistics. Ralf Krestel, Hidir Aras, Linda Andersson, Florina Piroi, Allan Hanbury, and Dean Alderucci. 2021. 2nd Workshop on Patent Text Mining and Semantic Technologies (PatentSemTech2021). In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, page 2693–2696. 
Ralf Krestel, Hidir Aras, Allan Hanbury, Linda Andersson, Florina Piroi, and Dean Alderucci. 2022. 3rd Workshop on Patent Text Mining and Semantic Technologies (PatentSemTech2022). In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. *Biometrics*, 33(1):159–174. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In *International Conference on Learning Representations*, ICLR. Shervin Malmasi, Surya Kallumadi, Nicola Ueffing, Oleg Rokhlenko, Eugene Agichtein, and Ido Guy, editors. 2021. *Proceedings of the 4th Workshop on eCommerce and NLP*. Association for Computational Linguistics, Online. Shervin Malmasi, Oleg Rokhlenko, Nicola Ueffing, Ido Guy, Eugene Agichtein, and Surya Kallumadi, editors. 2022. *Proceedings of the Fifth Workshop on* e-Commerce and NLP (ECNLP 5). Association for Computational Linguistics, Dublin, Ireland. Toshiaki Nakazawa, Hideya Mino, Isao Goto, Raj Dabre, Shohei Higashiyama, Shantipriya Parida, Anoop Kunchukuttan, Makoto Morishita, Ondˇrej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, Yusuke Oda, and Sadao Kurohashi. 2022. Overview of the 9th workshop on Asian translation. In *Proceedings of the 9th Workshop on Asian Translation*, pages 1–36, Gyeongju, Republic of Korea. International Conference on Computational Linguistics. Toshiaki Nakazawa, Hideki Nakayama, Isao Goto, Hideya Mino, Chenchen Ding, Raj Dabre, Anoop Kunchukuttan, Shohei Higashiyama, Hiroshi Manabe, Win Pa Pa, Shantipriya Parida, Ondˇrej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, Yusuke Oda, Katsuhito Sudoh, Sadao Kurohashi, and Pushpak Bhattacharyya, editors. 2021. *Proceedings of* the 8th Workshop on Asian Translation (WAT2021). Association for Computational Linguistics, Online. Tristan Naumann, Steven Bethard, Kirk Roberts, and Anna Rumshisky, editors. 2022. *Proceedings of the* 4th Clinical Natural Language Processing Workshop. Association for Computational Linguistics, Seattle, WA. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4885–4901, Online. Association for Computational Linguistics. ## Openai. 2023. Gpt-4 Technical Report. Adam Poliak. 2020. A survey on recognizing textual entailment as an NLP evaluation. In *Proceedings of* the First Workshop on Evaluation and Comparison of NLP Systems, pages 92–109, Online. Association for Computational Linguistics. Daniel Preotiuc-Pietro, Ilias Chalkidis, Catalina Goanta, Leslie Barrett, and Nikolaos Aletras, editors. 2022. Proceedings of the Natural Legal Language Processing Workshop 2022. Association for Computational Linguistics, Abu Dhabi, UAE. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. 
Julian Risch, Nicolas Alder, Christoph Hewel, and Ralf Krestel. 2020. PatentMatch: A Dataset for Matching Patent Claims & Prior Art. *ArXiv*, abs/2012.13919. Alexey Romanov and Chaitanya Shivade. 2018. Lessons from natural language inference in the clinical domain. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1586–1596, Brussels, Belgium. Association for Computational Linguistics. Anna Rumshisky, Kirk Roberts, Steven Bethard, and Tristan Naumann, editors. 2020. *Proceedings of the* 3rd Clinical Natural Language Processing Workshop. Association for Computational Linguistics, Online. Mobashir Sadat and Cornelia Caragea. 2022. SciNLI: A corpus for natural language inference on scientific text. In *Proceedings of the 60th Annual Meeting of* the Association for Computational Linguistics (Volume 1: Long Papers), pages 7399–7409, Dublin, Ireland. Association for Computational Linguistics. Eva Sharma, Chen Li, and Lu Wang. 2019. BIGPATENT: A large-scale dataset for abstractive and coherent summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2204–2213, Florence, Italy. Association for Computational Linguistics. John Tait, Helmut Berger, Michael Dittenbach, and Mihai Lupu, editors. 2009. *Proceedings of the 2nd* International Workshop on Patent Information Retrieval, PaIR '09, Hong Kong, SAR, China, November 6, 2009. Association for Computing Machinery. John Tait, Helmut Berger, Michael Dittenbach, and Georg Sommer, editors. 2008. PaIR '08: Proceedings of the 1st ACM Workshop on Patent Information Retrieval. Association for Computing Machinery. John Tait, Christopher G. Harris, and Mihai Lupu, editors. 2010. Proceedings of the 3rd international workshop on Patent information retrieval. Association for Computing Machinery. Ozan Tonguz, Yiwei Qin, Yimeng Gu, and Hyun Hannah Moon. 2021. Automating claim construction in patent applications: The CMUmine dataset. In Proceedings of the Natural Legal Language Processing Workshop 2021, pages 205–209, Punta Cana, Dominican Republic. Association for Computational Linguistics. Jun'ichi Tsujii and Shoichi Yokoyama, editors. 2007. Proceedings of the Workshop on Patent translation. Copenhagen, Denmark. The United States Patent & Trademark Office USPTO. 2020a. Parts, Form, and Content of Application. In Manual of Patent Examining Procedure, chapter 600. The United States Patent & Trademark Office USPTO. 2020b. PCT Rule 13 Unity of Invention. Takehito Utsuro, Katsuhito Sudoh, and Takashi Tsunakawa, editors. 2019. Proceedings of the 8th Workshop on Patent and Scientific Literature Translation. European Association for Machine Translation, Dublin, Ireland. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In *Advances in Neural Information Processing Systems*, NeurIPS, pages 5998–6008. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. World Intellectual Property Organization WIPO. 2022a. WIPO Patent Drafting Manual, Second Edition. World Intellectual Property Organization WIPO. 2022b. Worldwide IP Filings Reached New All-Time Highs in 2021, Asia Drives Growth. Hitomi Yanaka and Koji Mineshima. 2022. 
Compositional Evaluation on Japanese Textual Entailment and Similarity. Transactions of the Association for Computational Linguistics, 10:1266–1284. Takumi Yoshikoshi, Daisuke Kawahara, and Sadao Kurohashi. 2020. Multilingualization of natural language inference datasets using machine translation (in Japanese). In *The 244th Meeting of Natural Language Processing*. ![10_image_0.png](10_image_0.png) ## A Example Of The Uee Dataset Figure 5 illustrates an entry of the dataset. Each entry is a JSON object and consists of the application number of the patent from which the target paragraph is extracted (appNum), the identifier of the target paragraph (paraNum), the paragraph text (paraTxt), a set of claims from the same application (claims), the label (label), the note (note), the application number of the continuing patent (contAppNum), and the continuing patent's claims (contClaims). claims and contClaims consist of individual claims' identifier (claimNum) and text (claimTxt).10 Here, paraTxt, a set of claimTxts, and label correspond to pi,ji , Ci, and yi,ji in §2, respectively. Figure 6 shows the Japanese version of Figure 5. ## B Hyper-Parameter Setting B.1 Uee Baseline Rinna Roberta Although we described Rinna RoBERTa's hyperparameter setting for the baseline experiment in §4.1, we repeat it here for the sake of completeness. The maximum sequence length was 512. We fine-tuned Rinna RoBERTa on the UEE training 10claimTxt is a list of text. This is because a claim is usually long and split into segments for readability. We keep this structure in JSON format. ![10_image_1.png](10_image_1.png) Figure 6: Example data in Japanese ![10_image_2.png](10_image_2.png) set for ten epochs with the training batch size, the warm-up steps, and the learning rate being set to 128, 100, and 5e-5, respectively. We used AdamW (Loshchilov and Hutter, 2019) for optimization. ## B.2 Uee Baseline Longformer The maximum sequence length was 4,096. We fine-tuned our Longformer on the UEE training set for ten epochs with the training batch size, the gradient accumulation steps, the warm-up steps, and the learning rate being set to 16, 2, 200, and 2e-5, respectively, based on Beltagy et al. (2020). We used the AdamW optimizer. ## B.3 Decision (I) Roberta The hyper-parameter setting for this model is the same as the UEE Baseline Rinna RoBERTa in B.1. Sec. Model Accuracy Precision Recall F1 §4.1 RoBERTa **0.8979** (0.0025) 0.8453 (0.0047) **0.8899** (0.0065) **0.8670** (0.0034) Longformer 0.8258 (0.0147) **0.8839** (0.0072) 0.6157 (0.0508) 0.7247 (0.0335) §4.2 RoBERTa 0.9261 (0.0050) 0.9539 (0.0112) 0.8998 (0.0178) 0.9259 (0.0057) §4.3 RoBERTauee **0.7238** (0.0047) **0.7274** (0.0152) **0.7175** (0.0339) **0.7218** (0.0110) RoBERTa*jsnli* A 0.5067 (0.0044) 0.5146 (0.0123) 0.3874 (0.2847) 0.4029 (0.1434) RoBERTa*jsnli* B 0.5098 (0.0072) 0.4088 (0.2286) 0.4776 (0.2779) 0.4384 (0.2472) ## B.4 Decision (Ii) Robertauee The hyper-parameter setting for this model is the same as the UEE Baseline Rinna RoBERTa in B.1. ## B.5 Decision (Ii) Robertajsnli We fine-tuned RoBERTa*jsnli* for ten epochs with the training batch size, the warm-up steps, and the learning rate being 128, 500, and 3e-5, respectively. We used the AdamW optimizer. Although the instances in the JSNLI dataset have already been tokenized, we re-tokenized them with Rinna RoBERTa's tokenizer. ## C Japanese Example Of A Claimed Embodiment Figure 7 shows the Japanese version of Figure 4. 
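As a companion to the hyper-parameter listings in B.1–B.4, the fine-tuning recipe can be sketched with the Hugging Face Trainer. This is an illustrative sketch only, not the released training script: the toy dataset and the way a paragraph is paired with its claims are placeholders.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "rinna/japanese-roberta-base"   # Rinna RoBERTa (§4.1)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-in for the UEE training set: each example pairs a description
# paragraph with its claims, truncated to the 512-token maximum (B.1).
toy = Dataset.from_dict({"paragraph": ["..."], "claims": ["..."], "label": [1]})
train_ds = toy.map(lambda ex: tokenizer(ex["paragraph"], ex["claims"],
                                        truncation=True, max_length=512))

args = TrainingArguments(
    output_dir="uee-roberta",
    num_train_epochs=10,               # B.1: ten epochs
    per_device_train_batch_size=128,   # B.1: training batch size 128
    warmup_steps=100,                  # B.1: warm-up steps 100
    learning_rate=5e-5,                # B.1: learning rate 5e-5 (AdamW is the Trainer default)
)
Trainer(model=model, args=args, tokenizer=tokenizer, train_dataset=train_ds).train()
```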
## D Full Evaluation Results Table 3 shows accuracy, precision, recall, and F1 of all the evaluated models.
imani-etal-2023-mathprompter
{M}ath{P}rompter: Mathematical Reasoning using Large Language Models
https://aclanthology.org/2023.acl-industry.4
Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose 'MathPrompter', a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chain-of-thought prompting technique to generate multiple algebraic expressions or python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the 'MultiArith' dataset (78.7% → 92.5%) evaluated using 175B parameter GPT-based LLM.
# Mathprompter: Mathematical Reasoning Using Large Language Models Shima Imani, Liang Du, Harsh Shrivastava Microsoft Research, Redmond, USA Contact: shimaimani@microsoft.com ## Abstract Large Language Models (LLMs) have limited performance when solving arithmetic reasoning tasks and often provide incorrect answers. Unlike natural language understanding, math problems typically have a single correct answer, making the task of generating accurate solutions more challenging for LLMs. To the best of our knowledge, we are not aware of any LLMs that indicate their level of confidence in their responses which fuels a trust deficit in these models impeding their adoption. To address this deficiency, we propose 'MathPrompter', a technique that improves performance of LLMs on arithmetic problems along with increased reliance in the predictions. MathPrompter uses the Zero-shot chainof-thought prompting technique to generate multiple Algebraic expressions or Python functions to solve the same math problem in different ways and thereby raise the confidence level in the output results. This is in contrast to other prompt based CoT methods, where there is no check on the validity of the intermediate steps followed. Our technique improves over state-of-the-art on the MultiArith dataset (78.7% → 92.5%) evaluated using 175B parameter GPT-based LLM. ## 1 Introduction Recent advancements in natural language processing (NLP) can be attributed to massive scaling of Large Language Models (LLMs) (Vaswani et al., 2017; Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2022; Thoppilan et al., 2022). A very interesting recent discovery that the LLMs are naturally good (in-context) Zero-shot or few-shot learners turned out to be very useful (Brown et al., 2020; Liu et al., 2021, 2023). This led to the development of 'prompting' technique, where the user provides a small context for solving the task at-hand to the LLM. This conditioning of the models on a few examples is termed as few-shot prompting, while providing instructions to solve a task is known as Zero-shot prompting. Extensive research efforts are being poured into designing these prompts, either manually (Schick and Schütze, 2020; Reynolds and McDonell, 2021) or automatically (Shin et al., 2020; Gao et al., 2020). Although quite successful for single-step system-I tasks (Stanovich and West, 2000; Liu et al., 2023), the prompting techniques were inadequate in their performance on system-II tasks where multi-step reasoning is required (Rae et al., 2021). As humans, we tend to break down a problem and attempt to solve them step-by-step. Extending this intuition to LLMs led to the development of 'chain-of-thought' (CoT) prompting technique (Wei et al., 2022; Wang et al., 2022). The use of CoT has led to improved performance on a range of NLP tasks (Talmor et al., 2018; Gao et al., 2020; Patel et al., 2021; Cobbe et al., 2021; Geva et al., 2021; Chowdhery et al., 2022; Srivastava et al., 2022) In this work, we investigate Zero-shot-CoT methods for solving mathematical reasoning tasks. To the best of our knowledge, we found the recent work by (Kojima et al., 2022) that proposed a Zeroshot-CoT technique to be the state-of-the-art where they demonstrated a remarkable accuracy improvement on the 'MultiArith' (Roy and Roth, 2016) data (17.7% → 78.7%). 
Now, we identify two key aspects that the previous CoT-prompting-based SOTA lacks, namely: (1) although the chain-of-thought followed by the model improved the results, there is **no check on the validity of** the steps followed by the chain-of-thought prompting, and (2) the **confidence in the predictions** of LLMs is often not provided. In order to address these gaps to some extent, we derive inspiration from how we humans solve a math question by breaking it down into a simpler multi-step procedure and making use of multiple ways to validate our approach at each step. Specifically, given a question Q, (I) *Generating Algebraic template*: We first generate its corresponding Algebraic expression Qt that replaces the numerical entries by variables. (II) Math-prompts: Then, we provide multiple prompts P to the LLM that can solve Qt analytically in different ways. For example, P can be 'Derive an Algebraic expression' or 'Write a Python function', etc. Following this procedure, we end up with P expressions that analytically solve Qt in terms of its variables. (III) *Compute verification*: We then evaluate the P analytical solutions by allotting multiple random values to the Qt variables. (IV) Statistical significance: If the solutions of the P analytical functions are in '*consensus*' over N ∼ 5 different variable choices, then we substitute the original values from Q to obtain the final solution. In the case where there is no definite consensus, we repeat the steps (II), (III) & (IV). Our method, MathPrompter, uses the 175B-parameter LLM GPT-3 DaVinci completion engine (Brown et al., 2020). We were able to improve the accuracy on the MultiArith data from 78.7% → 92.5%. ## 2 Method Since the LLMs are generative models, it becomes very tricky to ensure that the generated answers are accurate, especially for mathematical reasoning tasks. We take clues from the process followed by students to solve arithmetic problems. We narrowed down a few steps that students take in order to verify their solutions, namely: - *Compliance with known results*: By comparing the solution to a known result, one can assess its accuracy and make necessary adjustments. This is particularly useful when the question is a standard problem with a well-established solution. - *Multi-verification*: Approaching a problem from multiple perspectives and comparing the results helps to confirm the validity of the solution and ensure that it is both sound and accurate. - *Cross-checking*: The process of solving a problem is just as necessary as the final answer. Verifying the correctness of the intermediate steps of the process provides a clear understanding of the thought process behind the solution. - *Compute verification*: Utilizing a calculator or computer to perform arithmetic calculations can assist in verifying the accuracy of the final answer. ## 2.1 Mathprompter Our proposed method, MathPrompter, is an attempt to transfer some of this thought process to the LLM answer generation process. Fig. 1 provides a high-level overview of the steps followed by MathPrompter to solve a mathematical reasoning problem. We use the state-of-the-art GPT-3 DaVinci completion engine (Brown et al., 2020) for the question-answering tasks. We use the following question 'Q' from the MultiArith dataset to demonstrate the problem-solving process followed by MathPrompter. Q: At a restaurant, each adult meal costs $5 and kids eat free.
If a group of 15 people came in and 8 were kids, how much would it cost for the group to eat? (I) *Generating Algebraic template*: We begin by transforming the question into its Algebraic form by replacing the numeric entries with variables using a key-value mapping. In this particular instance, the modified question 'Qt' becomes: Qt: at a restaurant, each adult meal costs A and kids eat free. if a group of B people came in and C were kids, how much would it cost for the group to eat? Mapping: {A:5, B:15, C:8} (II) *Math-prompts*: We build up on the intuition provided by the multi-verification and crosschecking thought processes mentioned above. We generate analytical solutions of Qt using two different approaches, Algebraic way and Pythonic way. We give the following prompts to the LLM to generate additional context for Qt Algebraic prompt: Write a mathematical equation and generate the answer format starting with 'Answer =' Python prompt: Write a Python function that returns the answer. The LLM model in response to the above prompts generated the following output expressions \# Algebraic expression output Answer = A*(B-C) # Python expression output # `def total_price(A, B, C):` $\begin{array}{c}\text{return A*(B-C)}\end{array}$ The above generated analytical solutions gives the user a hint into the 'intermediate thought process' of the LLM. Incorporating additional prompts will improve the accuracy and consistency of the results. This will, in turn, enhance the MathPrompter's ability to generate more precise and effective solutions. (III) *Compute verification*: We evaluate the expressions generated in the previous step using multiple randomized key-value mappings of the input variables in Qt. To evaluate the expressions, we used the Python's eval() method. We compare the outputs to see if we can find a consensus among the answers. This also provides us with a higher level of **confidence** that the answers are correct and reliable. Once the expressions agree on their outputs, we use the values of the variables in the input Q to compute the final answer, as below Algebraic-answer = 35 Pythonic-answer = 35 (IV) *Statistical significance*: In order to ensure that consensus is reached among various expressions' output, in our experiments, we repeat the steps (II) & (III) for N ∼ 5 times and report the most frequent value observed for the answer. ## 3 Experiment 3.1 Dataset We evaluate MathPrompter on MultiArith dataset (Roy and Roth, 2016), which is a subset of the Math World Problem Repository (Koncel-Kedziorski et al., 2016). This dataset is a collection of mathematical problems that are specifically designed to test the ability of machine learning models to perform complex arithmetic operations and reasoning. These problems demand the application of multiple arithmetic operations and logical reasoning to be sucessfully solved. ## 3.2 Baseline One of the popular baselines is the standard Zeroshot model by (Brown et al., 2020). Their train their models in a way that it is able to recognize and classify new objects or classes that it has never seen before during training. This was achieved by utilizing the semantic relationships between classes. We also compared against the state-of-the-art Zero-shot-CoT prompting model by (Kojima et al., 2022). This is a very recent approach that addresses the limitations of the standard Zero-shot learning by incorporating a 'context of the task' using CoT to improve the performance. Briefly, their method follows this procedure. 
Given a question Q, the authors use the prompt 'Lets think step-by-step' followed by Q to generate a response Z. Then, they use the prompt 'The answer (Arabic numericals) is' followed by Z to get their final result. | Model | Accuracy | |----------------------------------------------|------------| | Zero-shot | 17.7 | | Zero-shot (PaLM 540B) | 25.5 | | Zero-shot-CoT | 78.7 | | Zero-shot-CoT (PaLM 540B) | 66.1 | | Zero-shot-CoT + self consistency (PaLM 540B) | 89.0 | | Zero-shot-CoT (MathPrompter) | 92.5 | | Few-Shot (2 samples) | 33.7 | | Few-Shot (8 samples) | 33.8 | | Few-Shot-CoT (2 samples) | 84.8 | | Few-Shot-CoT (4 samples) | 90.5 | | Few-Shot-CoT (8 samples) | 93.0 | | Zero-Plus-Few-Shot-CoT (8 samples) | 92.8 | Table 1: **Accuracy on MultiArith dataset**. MathPrompter outperforms all the Zero-shot & Zero-shot-CoT baselines. We emphasize that our model's performance is comparable to 540B parameter models as well as the SOTA Few-shot-CoT approaches. (If not mentioned explicitly, the models in each row consists of 175B parameters. Results are borrowed from (Kojima et al., 2022). They used Textdavinci-002 (175B) model along with the same 8 examples as described in (Wei et al., 2022) for Few-shot and Few-shot-CoT settings.) ## 3.3 Results 3.3.1 Accuracy Comparisons Table 1 compares the performance of the MathPrompter against the baseline models. The results of few-shot & zero-shot learning based approaches are shown. Furthermore, we add the results for models with different number of parameters to get better highlight the significance of our approach. Since, MathPrompter is a Zero-shot-CoT (175B parameters) method, we choose the state-of-the-art Zero-shot-CoT (175B parameters) model by (Kojima et al., 2022) and a Zero-shot(175B parameters) by (Brown et al., 2020) for fair comparison. We report an accuracy of 92.5% which is a huge improvement to the other SOTA models with 78.7% and 17.7% accuracy, respectively. ## 3.3.2 Example Comparisons Table 2 presents a sample set of questions and their respective outputs, intermediate steps, and final answers generated by both MathPrompterand the current state-of-the-art model (Kojima et al., 2022). For simplicity, only one output of MathPrompter for each question is shown for both the Algebraic and Pythonic outputs. The table highlights areas where (Kojima et al., 2022) technique falls short, and where these can be remedied with MathPrompter , which was designed to address these issues. For example, the generated answers sometimes have one step of error, which can be avoided by running the model multiple times and reporting the consensus results. Additionally, the reasoning steps in (Kojima et al., 2022) can be excessively lengthy, but the Pythonic or Algebraic methods can address this by typically requiring fewer tokens. Furthermore, the reasoning steps may be correct, but the final computation is incorrect. MathPrompter address problem by using the Python's eval() method function. In many cases, the MathPrompter generates correct intermediate and final answers. However, there are a few cases, such as the last question in Table 2, where both the Algebraic and Pythonic outputs are in agreement, yet erroneous. We plan to address these issues by incorporating additional methods to further enhance the performance of MathPrompter . ## 4 Conclusions & Discussions We introduced MathPrompter, a novel approach that improves LLM performance on mathematical reasoning problems. 
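To make the verification loop at the heart of MathPrompter concrete, the consensus check over randomized substitutions (steps (III) and (IV) of §2.1) can be sketched as follows. This is a simplified illustration rather than the authors' implementation: both the Algebraic and the Pythonic outputs are treated here as plain expression strings, and the sampling range for the random values is arbitrary.

```python
import random
from collections import Counter

def consensus_answer(expressions, mapping, n_trials=5):
    """Evaluate candidate expressions (e.g. "A*(B-C)") on random variable
    assignments; if they always agree, evaluate them on the original mapping
    and return the most frequent value, otherwise signal 'no consensus'."""
    for _ in range(n_trials):
        sample = {var: random.randint(1, 100) for var in mapping}
        outputs = {eval(expr, {}, dict(sample)) for expr in expressions}
        if len(outputs) != 1:
            return None  # no consensus: re-prompt the LLM and repeat steps (II)-(IV)
    finals = [eval(expr, {}, dict(mapping)) for expr in expressions]
    return Counter(finals).most_common(1)[0][0]

print(consensus_answer(["A*(B-C)", "A*(B-C)"], {"A": 5, "B": 15, "C": 8}))  # 35
```

This eval()-based cross-check is what attaches an explicit consistency signal to each answer.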
## 4 Conclusions & Discussions

We introduced MathPrompter, a novel approach that improves LLM performance on mathematical reasoning problems. It also, to some extent, addresses the important concern of building user trust in LLM predictions. We translated our intuition about how students solve arithmetic problems into an LLM prompting strategy by utilizing the Zero-shot chain-of-thought prompting technique. MathPrompter incorporates ideas such as cross-checking the intermediate steps and solving the same math problem using multiple approaches in its design. We empirically show that our model is comparable to SOTA Few-shot-CoT models as well as the larger Zero-shot-CoT models that have 540B parameters. In the future, we plan to further evaluate performance on additional datasets and to explore incorporating additional prompts into MathPrompter.

| Question | SOTA Zero-shot-CoT | MathPrompter |
|----------|--------------------|--------------|
| Q: At the fair Adam bought 13 tickets. After riding the ferris wheel he had 4 tickets left. If each ticket cost 9 dollars, how much money did Adam spend riding the ferris wheel? | A (ONE UNNECESSARY STEP ERROR): Let's think step by step. Adam bought 13 tickets. That means he spent 13 * 9 = 117 dollars on tickets. After riding the ferris wheel, he had 4 tickets left. That means he used 9 tickets to ride the ferris wheel. That means he spent 9 * 9 = 81 dollars on riding the ferris wheel. So the answer is 117 - 81 = 36 dollars. Therefore, the answer (arabic numerals) is 36. ✗ | Mapping = {A:13, B:4, C:9} # Algebraic answer Answer = A*(B-C) #Python code def total_price(A, B, C): return A * (B-C) Answer = 81 ✓ |
| Q: Each chocolate bar in a box cost $3. If a box had 9 bars total and Wendy sold all but 3 bars, how much money would she have made? | A (OTHERS; MAXIMUM LIMIT OF TEXT LENGTH): Let's think step by step. Each chocolate bar in a box costs $3. If a box had 9 bars total, then Wendy sold all but 3 bars. How much money would she have made? We can solve this problem using algebra. Let's start by creating a variable to represent the number of chocolate bars Wendy sold. We'll call this variable "x". If Wendy sold "x" chocolate bars, then she would have 9 - x chocolate bars left in the box. We know that each chocolate bar costs $3, so the total cost of the chocolate bars Wendy sold Therefore, the answer (arabic numerals) is: Wendy would have made $12. ✗ | Mapping = {A:3, B:9, C:3} # Algebraic answer Answer = A x (B - C) #Python code def money_made(A, B, C): return (B-C)*A Answer = 18 ✓ |
| Q: Jerry was helping the cafeteria workers pick up lunch trays, but he could only carry 8 trays at a time. If he had to pick up 9 trays from one table and 7 trays from another, how many trips will he make? | A (COMMONSENSE MISTAKE): Let's think step by step. Jerry can carry 8 trays at a time. There are 9 trays on the first table. That means Jerry will need to make 2 trips to the first table. There are 7 trays on the second table. That means Jerry will need to make 1 trip to the second table. In total, Jerry will make 3 trips. Therefore, the answer (arabic numerals) is 3. ✗ | Mapping = {A:8, B:9, C:7} # Algebraic answer Answer = (B+C)/A #Python code def numTrips(A, B, C): trips = (B + C) // A if (B + C) % A != 0: trips += 1 return trips Answer = 2 ✓ |
| Q: Kaleb bought 14 boxes of chocolate candy and gave 5 to his little brother. If each box has 6 pieces inside it, how many pieces did Kaleb still have? | A: Let's think step by step. Kaleb bought 14 boxes of chocolate candy. This means that he had 84 pieces of chocolate candy. He gave 5 boxes to his little brother, which means that he gave his little brother 30 pieces of chocolate candy. Kaleb still had 54 pieces of chocolate candy. Therefore, the answer (arabic numerals) is 54. ✓ | Mapping = {A:14, B:5, C:6} # Algebraic answer Answer = A*C - B #Python code def candy_pieces(A, B, C): return A*C - B Answer = 79 (COMMONSENSE MISTAKE) ✗ |

Table 2: Example-based comparison between the current state-of-the-art Zero-shot-CoT and MathPrompter.

## 5 Limitation

One of the limitations of our work is that, while we run MathPrompter multiple times in different ways to increase the accuracy of the results, this does not always guarantee the correctness of the output. Both the Algebraic and Pythonic expressions can produce incorrect results even if the prompt outputs match each other; this failure case is shown in the last row of Table 2. Increasing the number of prompts would mitigate this issue. We are currently investigating techniques that can address this issue in a more principled manner.

## References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. *arXiv preprint arXiv:2110.14168*.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.

Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. *arXiv preprint arXiv:2012.15723*.

Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. *Transactions of the Association for Computational Linguistics*, 9:346–361.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. *arXiv preprint* arXiv:2205.11916. Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. Mawps: A math word problem repository. In Proceedings of the 2016 conference of the north american chapter of the association for computational linguistics: human language technologies, pages 1152–1157. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for gpt-3? *arXiv* preprint arXiv:2101.06804. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? *arXiv preprint* arXiv:2103.07191. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *The Journal of Machine Learning Research*, 21(1):5485–5551. Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–7. Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems. *arXiv preprint* arXiv:1608.01413. Timo Schick and Hinrich Schütze. 2020. It's not just size that matters: Small language models are also few-shot learners. *arXiv preprint arXiv:2009.07118*. Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615. Keith E Stanovich and Richard F West. 2000. 24. individual differences in reasoning: Implications for the rationality debate? *Behavioural and Brain Science*, 23(5):665–726. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. *arXiv preprint arXiv:1811.00937*. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. *arXiv preprint arXiv:2201.08239*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. 
*arXiv preprint arXiv:2203.11171*. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*.
kachuee-lee-2023-constrained
Constrained Policy Optimization for Controlled Self-Learning in Conversational {AI} Systems
https://aclanthology.org/2023.acl-industry.5
Recently, self-learning methods based on user satisfaction metrics and contextual bandits have shown promising results to enable consistent improvements in conversational AI systems. However, directly targeting such metrics by off-policy bandit learning objectives often increases the risk of making abrupt policy changes that break the current user experience. In this study, we introduce a scalable framework for supporting fine-grained exploration targets for individual domains via user-defined constraints. For example, we may want to ensure fewer policy deviations in business-critical domains such as shopping, while allocating more exploration budget to domains such as music. We present a novel meta-gradient learning approach that is scalable and practical to address this problem. The proposed method adjusts constraint violation penalty terms adaptively through a meta objective that encourages balanced constraint satisfaction across domains. We conducted extensive experiments on a real-world conversational AI and using a set of realistic constraint benchmarks. The proposed approach has been deployed in production for a large-scale commercial assistant, enabling the best balance between the policy value and constraint satisfaction rate.
# Constrained Policy Optimization For Controlled Self-Learning In Conversational Ai Systems Mohammad Kachuee, Sungjin Lee Amazon Alexa AI {kachum, sungjinl}@amazon.com ## Abstract Recently, self-learning methods based on user satisfaction metrics and contextual bandits have shown promising results to enable consistent improvements in conversational AI systems. However, directly targeting such metrics by off-policy bandit learning objectives often increases the risk of making abrupt policy changes that break the current user experience. In this study, we introduce a scalable framework for supporting fine-grained exploration targets for individual domains via user-defined constraints. For example, we may want to ensure fewer policy deviations in business-critical domains such as shopping, while allocating more exploration budget to domains such as music. We present a novel meta-gradient learning approach that is scalable and practical to address this problem. The proposed method adjusts constraint violation penalty terms adaptively through a meta objective that encourages balanced constraint satisfaction across domains. We conducted extensive experiments on a realworld conversational AI and using a set of realistic constraint benchmarks. The proposed approach has been deployed in production for a large-scale commercial assistant, enabling the best balance between the policy value and constraint satisfaction rate. ## 1 Introduction Conversational AI systems such as Apple Siri, Amazon Alexa, Google Assistant, and Microsoft Cortana rely on multiple processing components for speech recognition, natural language understanding (NLU), taking proper actions, and generating a response to the user. In such a system, a skill routing block is responsible for selecting the right skill and NLU interpretation to serve the request. Skill routing is a challenging problem as thousands of skills are present in real-world conversational systems and new skills are being introduced every day. In such scenario, gathering human annotations is very expensive and suffers from high turn-around times. Moreover, often more than one skill is capable of serving a request which makes human supervision even more challenging due to the lack of clear ground-truth assignments (Sarikaya, 2017). Recently, self-learning methods have been proposed that leverage customer experience signals to define reward values and create a closed feedback loop (Karampatziakis et al., 2019). In contrast to more traditional methods that are based on replication of rule-based systems or defect relabeling (Park et al., 2020), self-learning methods continuously explore different routing alternatives and leverage user feedback to improve their decisions (Kachuee et al., 2022). Despite their scalability and efficiency, because self-learning approaches directly optimize routing decisions to achieve highest rewards, they suffer from instability issues impacting the user experience. Specifically, off-policy contextual bandits frequently used as the policy learning algorithm are susceptible to off-policy optimization errors, resulting in potentially breaking the current user experience due to overestimation of action values or excessive explorations (Swaminathan et al., 2016; Joachims et al., 2018; Lopez et al., 2021). Such instabilities and drastic changes in the agent's behavior not only regress user retention and trust, but also manifest as direct revenue loss for businesscritical domains such as shopping. 
In a production system, it is crucial to not only estimate but also control the behavior changes a new policy introduces when compared to the current production policy. In the literature, this problem has been studied under safe bandit updates (Jagerman et al., 2020; Daulton et al., 2019; Amani et al., 2019) and budgeted bandit learning (Hoffman et al., 2014; Guha and Munagala, 2007), usually targeting exploration budgets or encouraging a behavior resembling a baseline policy.

In the context of off-policy bandit updates, we define exploration as any change in the model behavior resulting from replacing a current production policy with a new updated policy. This definition is broad and encompasses stochastic exploration actions as well as any behavior change when comparing the two consecutive policies. Furthermore, we consider the scenario in which samples are naturally classified into a set of domains, each representing a unique data segment. Note that in a task-oriented conversational agent, domains are typically defined based on the NLU interpretation of the request (e.g. music, shopping, books).

While previous studies considered different aspects of constraining a bandit model, to the best of our knowledge the problem of controlling off-policy bandit behavior changes across subsequent model updates, with fine-grained control on budgets for different data segments (domains), remains unaddressed. This study is the first to tackle the aforementioned issues by providing a scalable and practical approach. The main contributions of this paper are as follows:

(i) Introducing a formulation for controlled exploration in the context of off-policy contextual bandit learning, considering fine-grained control over domains based on user-defined constraints.
(ii) Presenting a solution based on the primal-dual minimax constraint optimization method that is effective but requires adjusting a few hyperparameters.
(iii) Proposing a novel meta-gradient algorithm to balance constraint satisfaction and reward maximization that works out-of-the-box and outperforms other methods with no need for hyperparameter adjustment.
(iv) Conducting extensive online and offline experiments on the skill routing problem in a real-world conversational AI agent using a set of realistic constraint benchmarks.

## 2 Related Work

## 2.1 Skill Routing in Conversational AIs

In contrast to traditional rule-based systems, model-based skill routing approaches leverage machine learning models to understand a user request and predict the best action to serve the request (Li et al., 2021; Park et al., 2020). To improve scalability, self-learning methods have been proposed that rely on user feedback rather than human annotations to learn and improve their skill routing policies in a closed loop. The recent work by Kachuee et al. (2022) is an excellent example of such an approach, in which model-based customer satisfaction metrics (Kachuee et al., 2021) are used to define the reward function, and then a stochastic mixture of replication and bandit models is used to control the exploration rate and safeguard the user experience. Nonetheless, such a design may result in sub-optimal decisions, as the bandit optimization does not consider the exploration budgets, the stochastic mixture may not be sufficiently fine-grained to protect the user experience in smaller traffic segments, and deploying such an architecture requires dealing with the additional complexity of maintaining a separate replication model.
## 2.2 Constrained Bandit Learning The majority of studies on controlled bandit learning consider the case of simple multi-armed stochastic bandits (i.e., without context) with practical applications in experiment design (Guha and Munagala, 2007) and automated machine learning (Hoffman et al., 2014). Hoffman et al. (2014) suggested a Bayesian approach to two-phase exploration-exploitation bandit learning in which there is a pre-specified budget for exploration arm evaluations. Another aspect is to ensure safe exploration actions, which is especially useful for sensitive applications in industrial machine learning or healthcare. Amani et al. (2019) introduced a solution in which an initial set of exploration actions is defined, then the exploration set is gradually expanded to ensure minimal unexpected behavior. For contextual bandits, constraints can be defined in the action space or in terms of model updates. For example, Daulton et al. (2019) solves a two-metric setting in which the reward is being maximized while enforcing a limit for regression on an auxiliary metric compared to a baseline status quo model. Balakrishnan et al. (2018) attempts to learn behavioral constraints by balancing between replication of the current baseline policy and making new actions that show promising rewards. In (Jagerman et al., 2020) authors define safety in terms of user experience metrics and suggest deciding on deploying a new model based on conservative confidence bounds on the off-policy estimates of such metrics. ## 3 Constrained Bandit Exploration 3.1 Problem Formulation We consider the general framework of off-policy contextual bandits in which a policy Π is used to select an action a ∈ A given the observed context vector (x) to maximize the scalar reward (r) received from the environment. Here, we assume stochastic policies of the form Πθ(a|x) in which a model parameterized by θ (e.g., a neural network) is used to assign action selection probabilities to each action given the context. Furthermore, we assume that each sample belongs to a domain denoted by k ∈ 1 *. . . M* provided as a feature in x. In the off-policy setting, the policy is refreshed after collecting a dataset of samples from the current policy. We adopt a definition of exploration which considers any change in the agent behavior compared to the current policy as an exploration action. Alternatively, we can consider replication with respect to the current policy as the rate at which the new policy makes similar decisions to the current policy when both evaluated and sampled stochastically. We define replication for Πθ with respect to Π0 based on the L1-distance of their action propensities given a context x: $${\mathcal R}_{\theta}(\mathbf{x})=1-{\frac{|\Pi_{\theta}(\mathbf{x})-\Pi_{0}(\mathbf{x})|_{1}}{2}}\ .$$ In a production system, it is desirable to precisely control the rate at which the new policy replicates the current policy for each domain. This ensures robust and controlled model updates for critical domains while enabling exploration for others that may benefit from an extra exploration budget. 
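To make the replication definition in (1) concrete, the following is a minimal sketch (not the paper's code) of how the per-sample replication rate could be computed from the action propensities of the new and current policies; the function and variable names are illustrative assumptions.

```python
import numpy as np

def replication(pi_theta: np.ndarray, pi_0: np.ndarray) -> np.ndarray:
    """Replication of the new policy w.r.t. the current one, as in Eq. (1).

    pi_theta, pi_0: arrays of shape (batch, num_actions) holding the action
    propensities of the new and current policies for a batch of contexts.
    Returns an array of shape (batch,) with values in [0, 1]; 1.0 means the
    two policies assign identical propensities for that context.
    """
    l1 = np.abs(pi_theta - pi_0).sum(axis=-1)
    return 1.0 - l1 / 2.0

# Example: identical distributions give replication 1, disjoint ones give 0.
p_new = np.array([[0.7, 0.2, 0.1], [1.0, 0.0, 0.0]])
p_old = np.array([[0.7, 0.2, 0.1], [0.0, 1.0, 0.0]])
print(replication(p_new, p_old))  # -> [1. 0.]
```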
Accordingly, we define constraints to encourage the desired behavior for samples of each domain, while learning an off-policy bandit:

$$\begin{array}{rl} \arg\min_{\theta} & \mathbb{E}_{\mathbf{x},a,r,k\sim\mathbb{D}}\; L_{\Pi_{\theta}}\;, \\ \mathrm{s.t.} & c_{k}^{min}\leq\mathcal{R}_{\theta}(\mathbf{x})\leq c_{k}^{max} \end{array}\tag{2}$$

where context, action, reward, and domain (x, a, r, k) are sampled from a dataset collected from the current policy. In (2), we use c_k^min and c_k^max to indicate user-defined replication constraints for domain k. L_{Π_θ} can be any differentiable off-policy bandit learning objective; for simplicity of discussion, we consider the vanilla inverse propensity scoring (IPS) objective:

$$L_{\Pi_{\theta}}(\mathbf{x},a,r)=-r\,\frac{\Pi_{\theta}(a|\mathbf{x})}{\Pi_{0}(a|\mathbf{x})}\;,\tag{3}$$

where Π0 is the current policy and r is the observed reward for taking action a collected in the dataset. A common approach to optimize constrained problems such as (2) is to use the penalty method, translating constraints into penalty terms that encourage constraint satisfaction:

$$\arg\min_{\theta}\;\mathbb{E}_{\mathbf{x},a,r,k\sim\mathbb{D}}\;\Big[L_{\Pi_{\theta}}(\mathbf{x},a,r)+e^{\mathbf{u}_{k}}\max\big(0,c_{k}^{min}-\mathcal{R}_{\theta}(\mathbf{x})\big)+e^{\mathbf{v}_{k}}\max\big(0,\mathcal{R}_{\theta}(\mathbf{x})-c_{k}^{max}\big)\Big]\;.\tag{4}$$

Here, penalty terms are always non-negative and increase if the new policy assigns action probabilities that deviate from the current policy outside the desired boundary. u ∈ R^M and v ∈ R^M are variables that adjust the weight of each constraint violation term. The exponentiation improves the sensitivity to these parameters and ensures non-negative penalty terms. For (4) to actually solve the original constrained problem of (2), proper values for u and v need to be used that enable the best balance between constraint satisfaction and the policy value. In the constrained optimization literature, various methods have been suggested to solve this form of problem. In this paper, we solve it using the primal-dual minimax method suggested by Nandwani et al. (2019) (Section 3.2) as well as a novel meta-learning method (Section 3.3).

## 3.2 Minimax Primal-Dual Method

Nandwani et al. (2019) suggested a formulation of the augmented Lagrangian method that supports inequality constraints. They solve the dual problem, i.e., the maximin formulation, to improve the scalability:

$$\min_{\theta}\;\max_{\mathbf{u},\mathbf{v}}\;\mathbb{E}_{\mathbf{x},a,r,k\sim\mathbb{D}}\;\Big[L_{\Pi_{\theta}}(\mathbf{x},a,r)+e^{\mathbf{u}_{k}}\max\big(0,c_{k}^{min}-\mathcal{R}_{\theta}(\mathbf{x})\big)+e^{\mathbf{v}_{k}}\max\big(0,\mathcal{R}_{\theta}(\mathbf{x})-c_{k}^{max}\big)\Big]\;.\tag{5}$$

Algorithm 1 shows an outline of the policy training using the minimax method. This method has four hyperparameters controlling the max player optimization via adjusting the update frequency, learning rate, and decay factors. Intuitively, the min player is trying to update the policy while the max player is increasingly penalizing it for any constraint violation. A stable point of this algorithm would be to gradually reduce the max player update rate as the min player is getting better at satisfying the constraints, eventually satisfying all constraints resulting in a zero loss for the max player due to the zero hinge penalty terms.
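Before the pseudocode of Algorithm 1, a rough sketch of the penalized objective in (4) (and of the inner, min-player loss in (5)) as a differentiable batch loss might look as follows. The tensor layout, names, and the use of PyTorch are our assumptions, not the authors' implementation:

```python
import torch

def penalized_ips_loss(pi_theta, pi_0, actions, rewards, domains,
                       u, v, c_min, c_max):
    """Sketch of the per-batch objective in Eq. (4): IPS loss plus
    exponentially weighted hinge penalties on the replication constraints.

    pi_theta, pi_0 : (batch, num_actions) propensities of new/current policy
    actions        : (batch,) logged action indices
    rewards        : (batch,) observed rewards
    domains        : (batch,) domain indices k
    u, v           : (num_domains,) penalty log-weights
    c_min, c_max   : (num_domains,) per-domain replication bounds
    """
    idx = torch.arange(actions.shape[0])
    # Vanilla IPS term, Eq. (3).
    ips = -rewards * pi_theta[idx, actions] / pi_0[idx, actions]
    # Per-sample replication, Eq. (1).
    rep = 1.0 - 0.5 * (pi_theta - pi_0).abs().sum(dim=-1)
    # Hinge penalties for leaving the per-domain replication boundary.
    lower = torch.clamp(c_min[domains] - rep, min=0.0)
    upper = torch.clamp(rep - c_max[domains], min=0.0)
    penalty = torch.exp(u[domains]) * lower + torch.exp(v[domains]) * upper
    return (ips + penalty).mean()
```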
Algorithm 1: Minimax constrained bandit

    input: D (dataset), η (max learning rate), γ (max learning rate decay), τ (max update frequency), ξ (max update frequency decay)
    u, v, t ← 0; Initialize(Π_θ)
    for x, a, r, k ∼ D do
        /* loss function of (5) */
        L ← Loss(x, a, r, k, θ, u, v)
        if t % τ is 0 then
            /* gradient ascent, max player */
            u ← u + η ∇_u L
            v ← v + η ∇_v L
            /* lr/update decay */
            η ← γ × η
            τ ← ξ × τ
        end
        /* optimize Π_θ, min player */
        θ ← f(θ, ∇_θ L)
        /* increment counter */
        t ← t + 1
    end

## 3.3 Meta Gradient Method

![3_Image_0.Png](3_Image_0.Png)

Theoretically, the primal-dual minimax method is capable of achieving Pareto-optimal solutions (Jin et al., 2019; Nandwani et al., 2019). However, in practice, it is infeasible to train for an infinite number of iterations, and therefore approximate inner optimization loops are used. To find the right balance between constraint satisfaction and policy improvement for the minimax algorithm, it is necessary to carefully adjust multiple hyperparameters. Note that an extensive hyperparameter search is undesirable in many real-world scenarios, as it entails not only significant compute costs associated with the search but also increases the turn-around time to deploy refreshed models. To mitigate this issue, we suggest a meta-gradient optimization idea that adapts u and v based on a meta objective within the training process. Specifically, we define the meta objective as:

$$L_{meta}=\mathbb{E}_{\mathbf{x},a,r,k\sim\mathbb{D}}\left[(1-\lambda)\,L_{\Pi_{\theta}}(\mathbf{x},a,r)+\lambda\,\frac{\max\big(0,c_{k}^{min}-\mathcal{R}_{\theta}(\mathbf{x})\big)+\max\big(0,\mathcal{R}_{\theta}(\mathbf{x})-c_{k}^{max}\big)}{p(k)}\right]\tag{6}$$

where λ is a hyperparameter to balance between the bandit objective and the constraint penalty terms. The second term is the macro average of violation penalties, in which p(k) is the prior probability of samples belonging to domain k, which can be easily pre-computed for a large batch of samples. Note that (6) is not directly dependent on u and v; instead, we rely on online cross-validation (Sutton, 1992; Xu et al., 2018) to update these variables. We define an inner objective the same as the min optimization problem of (5), do a differentiable optimization step, evaluate the meta objective on another batch of data, then update u and v by taking the derivative of the meta objective through the inner optimization trace. Algorithm 2 presents an outline of the meta-gradient optimization method. Due to practical issues of dealing with high-order gradients, we only consider the immediate impact of a single inner-loop update on the meta objective. We found that discarding the vanilla gradient descent used for the inner optimization and using a more advanced optimizer (e.g., Adam) to update Π_θ works best. Regarding the λ hyperparameter, we found that simply setting λ = 1 works well in practice. It effectively means that the meta-gradient solution does not require any hyperparameter adjustments (experimental evidence presented in Section 4.4).
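As a companion to Algorithm 2 below, the meta objective of (6) with λ = 1 reduces to a macro-averaged violation penalty evaluated on a held-out batch; here is a minimal sketch, in which the names and the use of a precomputed domain prior tensor are our assumptions rather than the released code:

```python
import torch

def meta_objective(pi_theta, pi_0, domains, c_min, c_max, domain_prior):
    """Meta loss of Eq. (6) with lambda = 1: macro-weighted average of the
    replication-constraint violations on a validation batch.

    domain_prior : (num_domains,) precomputed prior p(k) of each domain.
    """
    rep = 1.0 - 0.5 * (pi_theta - pi_0).abs().sum(dim=-1)
    violation = (torch.clamp(c_min[domains] - rep, min=0.0)
                 + torch.clamp(rep - c_max[domains], min=0.0))
    # Dividing by p(k) weights every domain equally (macro averaging), so
    # small-traffic domains are not drowned out by large ones.
    return (violation / domain_prior[domains]).mean()
```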
Algorithm 2: Meta-grad constrained bandit input :D (dataset), η (learning rate), λ (penalty weight) u, v ← 0; Initialize(Πθ) for x, a, r, k ∼ D and x ′, a′, r′, k′ ∼ D do /* clone parameters */ θ ′ ← clone(θ) /* inner loss with θ ′*/ Linner ← Lossinner(x, a, r, k, θ′, u, v) /* grad. descent on cloned model */ θ ′ ← θ ′ − η∇θ′Linner /* compute meta loss */ Lmeta ← Lossmeta(x ′, a′, r′, k′, θ′, λ) /* diff. through inner update */ Compute ∇uLmeta and ∇vLmeta /* use any optimizer for u, v */ u ← f(u, ∇uLmeta) v ← f(v, ∇vLmeta) /* inner loss with θ */ L ← Lossinner(x, a, r, k, θ, u, v) /* use any optimizer for Πθ */ θ ← f(θ, ∇θL) end Intuitively, at each training iteration, the inner objective naturally minimizes the bandit loss that is penalized by constraint violation terms proportional to the current u/v. Then, the meta objective computes a validation loss that measures the impact of the inner policy update and u/v on the macro-averaged constraint violations. Finally, by computing the meta-gradient of the meta objective through the inner optimization loop, u and v are updated to better encourage the constraint satisfaction for the next policy update iteration. Thanks to the online cross-validation update for u and v, the ![4_image_0.png](4_image_0.png) meta-gradient method adjusts the penalty weights such that their value does not unnecessarily keep increasing when it does not result in further improvements to the constraint satisfaction. ## 4 Experiments 4.1 Setup In a commercial dialogue agent, making controlled policy updates is crucial because any change in the skill routing policy directly impacts the user experience. Making abrupt policy changes may negatively impact user retention and in certain businesscritical domains may result in loss of revenue. Figure 1 shows an overview of the model architecture used in our experiments. Input to the model is a set of routing candidates i.e., a combination of embedded ASR, NLU, and context vectors as well as skill embeddings. The output is the softmaxnormalized propensity of selecting each candidate to handle the user request. The final model has about 12M trainable parameters consisting of a language model to encode utterance, embeddings for contextual signals, and fully-connected layers. As our architecture closely follows the design choices from Kachuee et al. (2022), we refer interested readers to that paper for details. To train and evaluate our models, we use logged data from a current production policy. The observed reward is based on a curated function of user satisfaction metrics (Kachuee et al., 2021). Our dataset consists of about 40M samples divided into 85% training, 10% validation, and 5% test sets covering 27 domains with imbalanced number of samples. Data used in this work was deidentified to comply with our customer privacy guidelines. ## 4.2 Benchmarks In our experiments, we use three different benchmarks for the constraint settings: global, critical, and exploration. The global benchmark aims to constrain the new policy to be within an exploration limit for all domains. In addition to the global constraint, critical assert stronger limits for a set of critical domains defined based on the expert knowledge. The exploration benchmark extends the critical benchmark by adding constraints to encourage exploration for domains that may benefit from additional exploration. Each benchmark is a list of constraints consisting of a short description, applicable domain, and the desired replication range. 
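Purely as an illustration of this constraint format (the domain names and replication ranges below are invented, not the paper's settings), such a benchmark could be written as a small list of records:

```python
# Hypothetical benchmark in the spirit of the "critical" setting: each entry
# gives a description, the domain it applies to, and the desired replication
# range [c_min, c_max]. All values are illustrative assumptions.
critical_benchmark = [
    {"description": "default exploration budget", "domain": "*",
     "replication_range": (0.95, 1.00)},
    {"description": "business-critical, minimal deviation", "domain": "shopping",
     "replication_range": (0.99, 1.00)},
    {"description": "business-critical, minimal deviation", "domain": "home_automation",
     "replication_range": (0.99, 1.00)},
]
```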
We provide the exact constraint configurations in the appendix. ## 4.3 Baselines And Metrics As the first baseline, we consider the vanilla IPS objective which disregards the constraints. Additionally, we build on the IPS baseline for constraint optimization approaches: quadratic (uniform constant penalty weight), minimax (Algorithm 1), and meta-gradient (Algorithm 2). Unless expressed otherwise, we use Adam optimizer with the default configuration (Kingma and Ba, 2014) . Regarding hyperparameters, for the penalty weight of the quadratic method we use values from {0.1, 1, 10, 100, 1000}. For the minimax method (Algorithm 1), we found that setting τ and ξ to one while adjusting η and γ presents very similar results to adjusting all four hyperparameters. Consequently, we use a grid search of η ∈ {1, 0.1, 0.01} and γ ∈ {1, 0.999, 0.995} to find the best settings for each benchmark. For the meta-gradient method (Algorithm 2), we found that simply using λ equal to one in the meta objective (i.e., meta objective only focusing on the constraints) outperforms other works. As a result, it does not require adjusting any hyperparameter and the same setting is used across all benchmarks. We provide additional experiment details, sensitivity analysis, and the final hyperparametersin Appendix A.2, A.3, and A.4. Regarding the evaluation metrics, we use the expected off-policy reward as well as the rate of change in constraint violations averaged over all samples (i.e., micro-averaged) and individual domains (i.e., macro-averaged). To comply with our privacy and business guidelines, in all instances, we only report relative and normalized results which do not represent the actual scales or metric values. | Benchmark | | | | | | | | | | |-------------|------------|---------------------|------------|---------------------|------------|---------------------|------------|------------|------------| | global | critical | explore | | | | | | | | | Method | reward | violation reduction | reward | violation reduction | reward | violation reduction | | | | | (%) | macro (%) | micro (%) | (%) | macro (%) | micro (%) | (%) | macro (%) | micro (%) | | | IPS | 89.45±0.01 | 0 | 0 | 89.45±0.01 | 0 | 0 | 89.45±0.01 | 0 | 0 | | Quadratic | 88.95±0.01 | 63.67±0.46 | 63.67±0.46 | 88.94±0.01 | 50.13±0.90 | 69.29±0.67 | 88.36±0.04 | 28.37±4.62 | 65.24±2.30 | | Minimax | 88.91±0.01 | 63.28±0.08 | 63.28±0.08 | 88.93±0.01 | 37.88±0.49 | 62.51±0.21 | 88.11±0.01 | 61.51±0.59 | 81.50±0.24 | | MetaGrad | 88.94±0.01 | 75.91±0.49 | 75.91±0.49 | 88.94±0.01 | 60.63±0.95 | 79.69±0.85 | 88.41±0.01 | 78.23±0.17 | 89.95±0.20 | | Method | reward | Violation Reduction | Replication | |----------|----------------|-----------------------|---------------| | (%) | (%) | (%) | | | RPDR | -0.01 (p>0.05) | 0 | 98.13 | | MetaGrad | +0.19 (p<0.05) | 38.05 | 99.11 | ## 4.4 Results 4.4.1 Offline Experimentation Table 1 shows a comparison of results for the IPS, quadratic, minimax, and meta-gradient methods on all benchmarks. For each case, we report the expected reward and the percentage of reduction in the rate of violations compared to the simple IPS objective. The meta-gradient approach consistently shows the best results across all benchmarks. The simple quadratic method behaves very competitively to minimax, except for the explore benchmark which requires a more fine-grained control on multiple constraints . 
The meta-gradient method, while having the highest reduction in constraints violations, also has very competitive performance in terms of the reward metric. ## 4.4.2 Online Experimentation We conducted an A/B experiment to compare the proposed method with the stochastic gating method of Kachuee et al. (2022) for robust self-learning (indicated by RPDR in the table). We conducted our A/B in two phases, deploying and comparing each approach to a baseline skill routing production system. Each phase took one week and consisted of traffic from about 6M customers (3M control and 3M treatment). For the RPDR method, we used a target replication rate of 99% for each domain. The meta-gradient model was trained with the global benchmark, constraining to a similar 99% replication. For both RPDR and MetaGrad models, we used the same training set which was collected from the control model behavior and followed the same model architecture. Table 2 presents the results of the A/B experiment. For each method we report the percentage of changes in the achieved reward compared to the control model. For violation reduction, we report the percentage of reduction for MetaGrad compared to the RPDR method. For the replication metric, we simply report the percentage of time that each policy makes actions that replicate the control model decision. As we can see from the results, MetaGrad approach not only shows more stable behavior by better constraint satisfaction and replication rates, but it also achieves statistically significant improvements in the reward value. ## 5 Conclusion This paper studied the problem of controlled exploration to control the policy updates in self-learning skill routing systems. We presented a constrained optimization formulation that enables defining the boundary of the desired exploration rate for individual domains. We proposed a scalable and practical solution based on meta-gradient learning which provides the highest constraint satisfaction rates without any extensive hyperparameter adjustment. Finally, we conducted experiments on a real-world conversation system for the skill routing problem. The proposed method was deployed in the production as it showed not only more control over policy changes but also gains in the policy value. ## Limitations While we conducted extensive experiments and demonstrated the effectiveness of the suggested approach for controlled bandit learning in the context of the skill routing problem, there are multiple directions of improvement for future studies. We believe one of the limitations of the suggested constrained optimization framework is that it relies on expert-defined conditions on an arbitrary segmentation of samples. It entails the need for human intervention and manual constraint definition/optimization which can be challenging. Another limitation we faced was during our experiments which showed additional compute overhead of between 2 to 3 times for different constrained optimization methods due to additional optimization objectives, inner loops, and backward passes. ## Ethics Statement The presented work is focused on improving robustness of off-policy bandit updates in conversational systems by introducing robustness constraints on the policy behavior. We do not believe there is any additional risk associated with this work when using the suggested platform on constraints that encourage controlled deviations from a current baseline. 
Regarding human data handling practices, we ensured anonymity of data samples used in this study and did not reveal any specifics that would violate our internal policies or our customer privacy policies. ## References Sanae Amani, Mahnoosh Alizadeh, and Christos Thrampoulidis. 2019. Linear stochastic bandits under safety constraints. *arXiv preprint arXiv:1908.05814*. Avinash Balakrishnan, Djallel Bouneffouf, Nicholas Mattei, and Francesca Rossi. 2018. Using contextual bandits with behavioral constraints for constrained online movie recommendation. In *IJCAI*, pages 5802–5804. Samuel Daulton, Shaun Singh, Vashist Avadhanula, Drew Dimmery, and Eytan Bakshy. 2019. Thompson sampling for contextual bandit problems with auxiliary safety constraints. arXiv preprint arXiv:1911.00638. Sudipto Guha and Kamesh Munagala. 2007. Multiarmed bandits with limited exploration. In *Proceedings of the Annual Symposium on Theory of Computing (STOC)*. Citeseer. Matthew Hoffman, Bobak Shahriari, and Nando Freitas. 2014. On correlation and budget constraints in model-based bandit optimization with application to automatic machine learning. In *Artificial Intelligence* and Statistics, pages 365–374. PMLR. Rolf Jagerman, Ilya Markov, and Maarten De Rijke. 2020. Safe exploration for optimizing contextual bandits. ACM Transactions on Information Systems (TOIS), 38(3):1–23. Chi Jin, Praneeth Netrapalli, and Michael I Jordan. 2019. Minmax optimization: Stable limit points of gradient descent ascent are locally optimal. *arXiv preprint* arXiv:1902.00618. Thorsten Joachims, Adith Swaminathan, and Maarten de Rijke. 2018. Deep learning with logged bandit feedback. In International Conference on Learning Representations. Mohammad Kachuee, Jinseok Nam, Sarthak Ahuja, JinMyung Won, and Sungjin Lee. 2022. Scalable and robust self-learning for skill routing in large-scale conversational ai systems. Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Mohammad Kachuee, Hao Yuan, Young-Bum Kim, and Sungjin Lee. 2021. Self-supervised contrastive learning for efficient user satisfaction prediction in conversational agents. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4053–4064. Nikos Karampatziakis, Sebastian Kochman, Jade Huang, Paul Mineiro, Kathy Osborne, and Weizhu Chen. 2019. Lessons from contextual bandit learning in a customer support bot. *arXiv preprint* arXiv:1905.02219. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Han Li, Sunghyun Park, Aswarth Dara, Jinseok Nam, Sungjin Lee, Young-Bum Kim, Spyros Matsoukas, and Ruhi Sarikaya. 2021. Neural model robustness for skill routing in large-scale conversational ai systems: A design choice exploration. arXiv preprint arXiv:2103.03373. Romain Lopez, Inderjit S Dhillon, and Michael I Jordan. 2021. Learning from extreme bandit feedback. Proc. Association for the Advancement of Artificial Intelligence. Yatin Nandwani, Abhishek Pathak, Parag Singla, et al. 2019. A primal dual formulation for deep learning with constraints. In Advances in Neural Information Processing Systems, pages 12157–12168. Sunghyun Park, Han Li, Ameen Patel, Sidharth Mudgal, Sungjin Lee, Young-Bum Kim, Spyros Matsoukas, and Ruhi Sarikaya. 2020. 
A scalable framework for learning from implicit user feedback to improve natural language understanding in large-scale conversational ai systems. *arXiv preprint arXiv:2010.12251*. Ruhi Sarikaya. 2017. The technology behind personal digital assistants: An overview of the system architecture and key components. *IEEE Signal Processing* Magazine, 34(1):67–81. Richard S Sutton. 1992. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In *AAAI*, pages 171–176. San Jose, CA. Adith Swaminathan, Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudík, John Langford, Damien Jose, and Imed Zitouni. 2016. Off-policy evaluation for slate recommendation. arXiv preprint arXiv:1605.04812. Zhongwen Xu, Hado van Hasselt, and David Silver. 2018. Meta-gradient reinforcement learning. arXiv preprint arXiv:1805.09801. ## A Appendix A.1 Constraint Benchmarks Figure 2 presents the definition of constraint benchmarks used in this paper: global, critical, and explore. The global benchmark sets a general minimum replication rate for all domains. The critical benchmark defines a tighter minimum replication rate for three business-critical domains (home automation, shopping, and notifications) and a more relaxed default case for all other domains. In the explore benchmark, we extend the critical benchmark to include exploration encouragement for the knowledge and music domains. ![8_image_0.png](8_image_0.png) ## A.2 Training Details We train each model until convergence or reaching 32 epochs and take the best performing model based on the macro-averaged violation rate measured on the validation set. Each experiment was run four times using different random seeds for data sampling and weight initialization to report the mean and standard deviation of each result. We used a cluster of 32 NVIDIA V100 GPUs to process a mini-batch size of 32K samples. Each individual run took between 4 to 24 hours. ## A.3 Selected Hyperparameters Table 3 shows the final selected hyperparameters for each benchmark and method. The definition of each hyper-parameter is presented in Algorithm 1 and 2. | Benchmark | | | | | |-------------|--------|----------|---------|------| | Method | global | critical | explore | | | Quadratic | w | 10 | 1000 | 1000 | | Minimax | η | 0.1 | 0.1 | 1 | | γ | 1 | 0.999 | 1 | | | Meta-Grad | λ | 1 | 1 | 1 | Table 3: The selected hyperparameters for each benchmark and method. ## A.4 Impact Of Hyperparameters To study the impact of hyperparameters, we conducted an experiment using the critical benchmark by training minimax and meta-gradient based models using different hyperparameter values. Specifically, we train minimax models (Algorithm 1) using η ∈ {1.0, 0.1, 0.01} and γ ∈ {1.0, 0.999, 0.995}. For the metagradient method (Algorithm 2), we use λ ∈ {0.01, 0.05, 0.1, 0.5, 0.75, 0.95, 1.0}. Figure 3 shows the results of such experiment. Based on this experiment, the minimax approach shows a much higher sensitivity to its two hyperparameters, showing a significant impact on both the reward and violation reduction metrics. However, the metagradient method shows much less sensitivity to the λ hyperparameter. We found that simply setting λ = 1 works very well in practice. It can be very desirable for real-world large-scale settings such as conversational systems which require frequent model updates as new features are on-boarded every day, and having a dependency on an extensive hyperparameter search is very costly, if not impractical. 
![9_image_0.png](9_image_0.png) ## A.5 Analysis Of Penalty Weights To dive deeper into the reason behind the better performance for the meta-gradient algorithm compared to the minimax approach, we investigated the constraint penalty weight value for the first 3,000 iterations of training using the global benchmark. From Figure 4, we can see the minimax method is monotonically increasing the penalty weight with each iteration which is a behavior consistent with the gradient ascent update rule in Algorithm 1. In other words, as long as there are any constraint violations, minimax will keep increasing the penalty, which in our opinion is the reason for high sensitivity to the hyperparameters. On the other hand, the meta-gradient approach is using a validation signal to dynamically adjust the penalty weight. Consequently, it may keep the penalty term near zero for an initial phase, rapidly increase it, then decay when violations are reduced and getting a higher reward is preferred. ![9_image_1.png](9_image_1.png)
fusco-etal-2023-pnlp
p{NLP}-Mixer: an Efficient all-{MLP} Architecture for Language
https://aclanthology.org/2023.acl-industry.6
Large pre-trained language models based on transformer architecture have drastically changed the natural language processing (NLP) landscape. However, deploying those models for on-device applications in constrained devices such as smart watches is completely impractical due to their size and inference cost. As an alternative to transformer-based architectures, recent work on efficient NLP has shown that weight-efficient models can attain competitive performance for simple tasks, such as slot filling and intent classification, with model sizes in the order of the megabyte. This work introduces the pNLP-Mixer architecture, an embedding-free MLP-Mixer model for on-device NLP that achieves high weight-efficiency thanks to a novel projection layer. We evaluate a pNLP-Mixer model of only one megabyte in size on two multi-lingual semantic parsing datasets, MTOP and multiATIS. Our quantized model achieves 99.4{\%} and 97.8{\%} of the performance of mBERT on MTOP and multiATIS, while using 170x fewer parameters. Our model consistently beats the state-of-the-art of tiny models (pQRNN), which is twice as large, by a margin of up to 7.8{\%} on MTOP.
# Pnlp-Mixer: An Efficient All-Mlp Architecture For Language Francesco Fusco IBM Research ffu@zurich.ibm.com Peter Staar IBM Research taa@zurich.ibm.com ## Abstract Large pre-trained language models based on transformer architecture have drastically changed the natural language processing (NLP) landscape. However, deploying those models for on-device applications in constrained devices such as smart watches is completely impractical due to their size and inference cost. As an alternative to transformer-based architectures, recent work on efficient NLP has shown that weight-efficient models can attain competitive performance for simple tasks, such as slot filling and intent classification, with model sizes in the order of the *megabyte*. This work introduces the pNLP-Mixer architecture, an embedding-free MLP-Mixer model for on-device NLP that achieves high weightefficiency thanks to a *novel projection layer*. We evaluate a pNLP-Mixer model of only one megabyte in size on two multi-lingual semantic parsing datasets, MTOP and multiATIS. Our quantized model achieves 99.4% and 97.8% the performance of mBERT on MTOP and multiATIS, while using *170x fewer parameters*. Our model consistently beats the state-of-the-art of tiny models (pQRNN), which is twice as large, by a margin up to 7.8% on MTOP. ## 1 Introduction Large language models based on transformer architecture have been fueling the latest successes in natural language processing (NLP). Nowadays, fine-tuning pre-trained models represents the defacto framework to tackle diverse NLP tasks, even those with limited amounts of annotations. While the merit of large pre-trained language models is undeniable, using models of several gigabytes and billions of parameters is not always practical or even possible due to computational and memory requirements. In addition, there are many simple and yet important tasks, such as slot filling Damian Pascual∗ Telepathy Labs damian.pascual@telepathy.ai ## Diego Antognini Ibm Research Diego.Antognini@Ibm.Com in home assistants, which do not require the complex linguistic knowledge encoded by large pretrained models and for which smaller models may reach competitive performance at a much lower cost. Reducing model sizes to the order of the megabyte is a necessity for resource constrained devices, such as smart watches, and in general it is attractive for edge use cases as i) updating models at the edge requires pushing updates to potentially millions of devices, and ii) multiple models solving different tasks can be deployed even on embedded devices with limited memory capacity. Transformer-based architectures are not suitable to downscale to such *ultra small* model sizes, mostly due to the space required to store embedding tables (Zhao et al., 2021). Projection-based models (Ravi, 2017) have shown that the dense representations learned as part of the training process and stored in the embedding tables can be replaced by non-trainable representations computed on-thefly over the text, hence the name embedding-free. In this work, we introduce the *pNLP-Mixer*, a novel embedding-free architecture for ultrasmall NLP models targeting on-device applications. Our architecture relies on a novel *projection layer* which creates text representations for individual tokens by combining the MinHash fingerprints (Broder, 2000) corresponding to each subword unit. 
The projected features are given as input to a MLP-Mixer (Tolstikhin et al., 2021), which grants our model architecture linear scalability in the sequence length and seamless hardware acceleration. To the best of our knowledge, this is the first work combining subword-unit tokenization and MinHash fingerprints in projection networks. Our evaluation on two semantic parsing datasets representative of on-device applications, MTOP and multiATIS, showcases that the pNLP-Mixer beats the current state-of-the-art for ultra-small models, pQRNN (Kaliamoorthi et al., 2021), by up to 7.8% on sequence tagging tasks. On MTOP, ∗Work done during a research stay at IBM Research. 53 a pNLP-Mixer model with only one million parameters achieves 99.4% of the performance of mBERT, which has *170x* more parameters. ## 2 Related Work Since the introduction of transformer-based language models such as BERT (Devlin et al., 2019), model sizes have been increasing at unprecedented pace (Brown, 2020; Goyal et al., 2021; Lample and Conneau, 2019). Using current large language models for on-device applications is simply not feasible due to the size and computational requirements, especially in resource constrained devices such as smart watches. Transformer-based models optimized for smartphone use cases, such as DistilBERT (Sanh et al., 2019), TinyBERT (Jiao et al., 2020), and MobileBERT (Sun et al., 2020) have shown that by combining knowledge distillation (Hinton et al., 2015) and quantization (Jacob et al., 2018), one can achieve model sizes in the order of tens to hundreds of megabytes in size. Embedded devices, such as wearables, require instead model sizes in the order of the *megabyte*, a target that is very challenging to achieve with transformerbased architecture, mostly because of the size of the embedding tables (Zhao et al., 2021). Embedding-free model architectures have been introduced to completely eliminate the dependency on large embedding tables from models. Instead of learning embeddings at training time, text representation are computed on-the-fly using solely the surface forms of the tokens by means of localitysensitive hashing (LSH) (Charikar, 2002) techniques. This way tokens that are similar at the surface level have similar representations. The idea of replacing trainable parameters stored in embedding tables with LSH-based projections has been introduced in Ravi (2017) and Ravi (2019). Follow up research work on model architectures targeting ultra-small model sizes has resulted in several model architectures including SGNN (Ravi and Kozareva, 2018), SGNN++ (Ravi, 2019), Prado (Kaliamoorthi et al., 2019), and pQRNN (Kaliamoorthi et al., 2021). Our model architecture belongs to the same line of research, but introduces a linguistically informed projection layer which combines subword-unit tokenization (Sennrich et al., 2016) with LSH principles. In our work, we evaluate and compare multiple LSH techniques, including SimHash (Manku et al., 2007) and MinHash (Broder, 2000). In our projec- ![1_image_0.png](1_image_0.png) pNLP-Mixer tion layer, by exploiting the associativity property of MinHash, fingerprints of individual tokens can be efficiently computed from the fingerprints of their subword units. Our model does not use attention mechanisms, as it feeds the representations to a MLP-Mixer (Tolstikhin et al., 2021) model. 
While using MLP only architectures is not new in the NLP landscape (Liu et al., 2021; Yu et al., 2022), this work is the first proposing an all-MLP architecture for ultra-small models. There are numerous studies around efficient transformer-based models (Tay et al., 2022) and solutions to make them scale linearly with the sequence length. However, none of those work targets models of the size of the single megabyte. ## 3 Pnlp-Mixer: A Projection Mlp-Mixer The pNLP-Mixer has been designed from the ground up as an efficient architecture suitable for both edge cases, memory and latency constrained, and as a backbone for complex NLP pipelines. Figure 1 depicts the model architecture at high level. The pNLP-Mixer falls into the category of projection-based models: instead of storing large embedding tables, like transformer-based models do, our model uses a projection layer which captures morphological knowledge from individual tokens using *non-trainable* hash functions. This projection layer can be seen as a feature extractor that produces a representation from input text. Once the input features are computed, they are passed through a trainable linear layer called bottleneck layer. The output of the bottleneck layer is the input of a series of MLP blocks of a standard MLP-Mixer architecture (Tolstikhin et al., 2021). There are several advantages of using an allMLP architecture for language processing. In contrast to attention-based models, the MLP-Mixer captures long-range dependencies without introducing a quadratic cost on the sequence length. Further, by using only MLPs, the model becomes simple to implement and has out-of-the-box hardware acceleration in devices ranging from mobile phones to server-grade inferencing accelerators. The main contribution of our work is to show that a simple model like the MLP-Mixer represents a valid alternative to transformer-based models in NLP, even in setups where large embedding tables are replaced with projections computed on the fly. The key to achieve competitive performance with such small and computationally efficient models is to feed them with high-quality input features. ## 3.1 Projection Layer Our projection layer builds upon the notion of locality sensitive hashing (LSH) (Indyk and Motwani, 1998) to create representations from text. While LSH has been introduced in previous works, e.g., in pQRNN (Kaliamoorthi et al., 2021), our approach is completely novel. In particular, we combine subword-unit tokenization (Schuster and Nakajima, 2012; Sennrich et al., 2016) and the *associativity* of MinHash (Broder, 2000) to efficiently compute features of any token as a combination of the features corresponding to its subword unit. Subword tokenization, which is commonly used in transformers, ensures that any text can be represented as a sequence of subwords units, i.e., there are no out-of-vocabulary words. In our context, using subword tokenization provides two main advantages: i) linguistic knowledge can be injected by training domain-specific subword-unit tokenizers, and ii) the representation of each subword unit can be precomputed and cached to reduce inference costs. Our projection layer calculates the MinHash fingerprint F t of each input token t by reusing the fingerprint of individual subword units belonging to the vocabulary V (see Figure 2). A fingerprint F ∈ N nis an array of n positive integers F0 to Fn−1, computed with n distinct hash functions h0(x) to hn−1(x) mapping strings to positive integers. 
This way, the first step of our projection is tokenization, which transforms each input token into a list of subword units. Then, for each subword unit u, we calculate its fingerprint $F^u$. Each element $F^u_i$, with $0 \leq i < n$, is obtained by first applying a hash function $h_i(x)$ to each of the trigrams $v_0$ to $v_{k-1}$ extracted from the subword u, with $k \geq 1$. Then, $F^u_i$ is obtained as the minimum hash value across trigrams: $F^u_i = \min(h_i(v_0), ..., h_i(v_{k-1}))$.

![2_image_0.png](2_image_0.png)

For example, for the subword unit "*Bring*", $F^{Bring}_i$ is computed as $F^{Bring}_i = \min(h_i(\text{"Bri"}), h_i(\text{"rin"}), h_i(\text{"ing"}))$. When a subword is a continuation, e.g., "\#\#ing", we skip the trigram extraction and calculate the hash $h_i(x)$ directly on the full subword unit u. The fingerprint $F^u$ is built by calculating $F^u_i$ for each of the n hash functions $h_0(x)$ to $h_{n-1}(x)$. Finally, the fingerprint $F^t$ of a token t made of several subword units $u_0$ to $u_{j-1}$, e.g., "*Bringing*" → ["*Bring*", "\#\#ing"], is simply obtained by setting each element $F^t_i$ to the minimum across the j subword fingerprints $F^{u_0}_i$ to $F^{u_{j-1}}_i$. In our example, $F^{Bringing}_i = \min(F^{Bring}_i, F^{\#\#ing}_i)$. In practice, if the fingerprint of each subword unit u in the vocabulary V is precomputed and cached, inference does not require any hashing on strings but only computing the *minimum* between integer sets. In our setup, we use the minimum operator as described in the original MinHash paper (Broder, 2000), which also contains the required theoretical foundations. What we introduce in our work is a method that elegantly exploits the associativity property of MinHash to avoid computing hash functions over strings at runtime. For each input token t, we do not use the fingerprints directly as input to the bottleneck layer but, instead, we use them to populate an array $C^t \in \mathbb{R}^m$ of m float counters initially set to zero. In detail, given the fingerprint $F^t$ corresponding to the token t, for each of the n MinHash values $F^t_i$, we increase by one the value in position p of the array of counters $C^t$, where $p = F^t_i \bmod m$. Therefore, the extracted feature from each token is essentially a Counting Bloom Filter (Fan et al., 2000) storing the set of integers part of its MinHash fingerprint.
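The projection just described can be sketched in a few lines of Python. This is an illustrative re-implementation under our own choices (a seeded SHA-256 standing in for the unspecified hash family, example n and m values), not the authors' code, and it skips the fingerprint caching discussed next:

```python
import hashlib
from typing import List

def _hash(seed: int, s: str) -> int:
    """One of the n hash functions h_i: a seeded 64-bit hash of a string."""
    digest = hashlib.sha256(f"{seed}:{s}".encode()).digest()
    return int.from_bytes(digest[:8], "little")

def subword_fingerprint(subword: str, n: int) -> List[int]:
    """MinHash fingerprint F^u of a single subword unit."""
    if subword.startswith("##"):          # continuation piece: hash it whole
        grams = [subword]
    else:                                 # otherwise hash its character trigrams
        grams = [subword[j:j + 3] for j in range(max(1, len(subword) - 2))]
    return [min(_hash(i, g) for g in grams) for i in range(n)]

def token_projection(subwords: List[str], n: int = 64, m: int = 512) -> List[float]:
    """Counting-Bloom-filter feature C^t for one token given its subword units."""
    # Token fingerprint: element-wise minimum over the subword fingerprints
    # (this is where the associativity of MinHash is exploited).
    fps = [subword_fingerprint(u, n) for u in subwords]
    token_fp = [min(fp[i] for fp in fps) for i in range(n)]
    counters = [0.0] * m
    for value in token_fp:
        counters[value % m] += 1.0        # p = F^t_i mod m
    return counters

# Example: the token "Bringing" tokenized into ["Bring", "##ing"].
features = token_projection(["Bring", "##ing"], n=64, m=512)
print(sum(features))  # 64 increments spread over the 512 counters
```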
Second, as long as tokenizer and hashing scheme do not change, distinct models can be independently updated, while keeping the cache of subword-unit fingerprints unmodified on the device. Third, having a cache that is shared among models means that the memory costs required to enable caching are also amortized. It is worth to remark that the advantages offered by our architecture are not limited to edge use-cases. Large-scale natural language processing platforms running in data-centers can equally benefit from the resource optimization and granular deployment opportunities offered by our architecture. ## 3.2 Mlp-Mixer The MLP-Mixer (Tolstikhin et al., 2021) is a simple architecture that consists exclusively of mixer blocks. Each block has two multi-layer perceptrons (MLPs) interleaved by a transposition operation. The transposition of the output of the first MLP lets the second operate on the sequence dimension, effectively mixing information across tokens. Our model follows the original work. In our case, the matrix C ∈ R s×m produced by the projection layer, where s the sequence length and m the size of the counting bloom filter, is passed through a bottleneck layer: a dense layer followed by an activation function and a normalization layer, that outputs a matrix B ∈ R s×h, where h is the hidden size. B is fed to the MLP-Mixer, which in turn produces an output O ∈ R s×h. We apply a classification head on top of O to generate the predictions. In the case of semantic parsing this head is a linear layer applied on each token, while for classification tasks, we use a max pooling instead. ## 4 Experimental Setup Our architecture is designed as an alternative to transformer-based models for *ultra-small* models (i.e., one megabyte) targeting on-device applications. In the league of extremely small models, common evaluation datasets used by research and industry are not the same as the ones used for evaluating the generalizability of large pre-trained language models (e.g., GLUE (Wang et al., 2018)), but datasets for simpler tasks, that are more realistic applications for tiny models. Thus, we align to prior works on tiny models for on-device applications (Kaliamoorthi et al., 2019, 2021) that assess models on two multilingual semantic parsing datasets. MTOP (Li et al., 2021). It covers six languages, English, Spanish, French, German, Hindi, and Thai. It was created by translating from English to the other languages. The train, dev, and test set for each language contain 10k, 1.5k, and 3k samples. We assess the models on slot parsing (*named entity* recognition) on 78 different slot labels. We report the exact match accuracy score, computed as the number of instances whose all tokens have been correctly labeled over the number of instances. multiATIS (Xu et al., 2020). It is a multilingual version of the ATIS dataset (Price, 1990), that contains queries related to air travel in nine languages: English, Spanish, French, German, Hindi, Japanese, Portuguese, Turkish, and Chinese. Each language except Hindi and Turkish consists of 4, 488/490/893 samples for train, dev, and test sets; for Hindi and Turkish the splits are 1, 440/160/893 and 578/60/715. We evaluate on intent classification: determining the intent of a query from 18 labels, and we report the accuracy. Training Details. In our experiments, we aim for model sizes in the order of *one million parameters* (one megabyte with 8-bit quantization). All trained models are approximately of this size. 
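Before listing the exact configuration, the following PyTorch sketch shows how the projection features, the bottleneck layer, the mixer blocks, and the task heads described in Sections 3.1 and 3.2 fit together. The expansion factor of the MLPs, the fixed maximum sequence length, and other defaults are assumptions for illustration; this is not the reference implementation.

```python
import torch
from torch import nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: token mixing (across the sequence) followed by channel mixing."""
    def __init__(self, seq_len, hidden, expansion=1):
        super().__init__()
        self.norm1 = nn.LayerNorm(hidden)
        self.token_mlp = nn.Sequential(
            nn.Linear(seq_len, seq_len * expansion), nn.GELU(),
            nn.Linear(seq_len * expansion, seq_len),
        )
        self.norm2 = nn.LayerNorm(hidden)
        self.channel_mlp = nn.Sequential(
            nn.Linear(hidden, hidden * expansion), nn.GELU(),
            nn.Linear(hidden * expansion, hidden),
        )

    def forward(self, x):                          # x: (batch, seq, hidden)
        y = self.norm1(x).transpose(1, 2)          # (batch, hidden, seq): mix across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))    # mix across channels
        return x


class PNLPMixerSketch(nn.Module):
    """Projection features -> bottleneck -> mixer blocks -> task head (illustrative sizes)."""
    def __init__(self, feature_size=512, seq_len=64, hidden=256,
                 num_blocks=5, num_labels=78, token_level=True):
        super().__init__()
        # seq_len is an assumption: inputs must be padded/truncated to this length.
        self.bottleneck = nn.Sequential(
            nn.Linear(feature_size, hidden), nn.LeakyReLU(), nn.LayerNorm(hidden),
        )
        self.blocks = nn.Sequential(*[MixerBlock(seq_len, hidden) for _ in range(num_blocks)])
        self.head = nn.Linear(hidden, num_labels)  # 78 slot labels for MTOP, 18 intents for multiATIS
        self.token_level = token_level

    def forward(self, features):                   # features: (batch, seq, feature_size) counters
        x = self.blocks(self.bottleneck(features))
        if self.token_level:                       # semantic parsing: one label per token
            return self.head(x)
        return self.head(x.max(dim=1).values)      # classification: max pooling over tokens
```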
For the pNLP-Mixer, the projection of each token is a feature vector of dimension 512 filled with 256 hashes. The bottleneck consists of one MLP with a Leaky ReLU as the activation function (Xu et al., 2015) followed by a normalization layer (Ba et al., 2016). Finally, we use 5 Mixer layers where each block contains 256 hidden dimensions for the token-mixing, channel-mixing, and classification head. We use the tokenizer of BERT-base multilingual cased. We tune the learning rate, weight decay, and dropout with a batch size of 128, using early stopping with a patience of 5 epochs. We select the models reaching the best exact match and intent accuracy on the validation set and report their performance on the test set.

## 5 Model Investigation

We provide detailed insights on the impact of different projection layers and other architectural components, as well as a comparison to alternative architectures. We perform the experiments on the English variant of the MTOP and multiATIS datasets.

## 5.1 Projection Comparison

First, given the pNLP-Mixer model of Section 4 with input features fixed to 512, we compare different feature extraction strategies. Specifically:

- **Binary.** We compute 256 hash values for each token. Given a token and a bitmap of size $m = 512$ set to zero, for each hash value $hv$, we set to 1 the bit in position $p = hv \bmod m$ of the bitmap. The token feature is a float tensor storing the bitmap.
- **TSP.** For each token a 1024-bit hash is computed and then represented as a ternary feature of size 512, as described in Kaliamoorthi et al. (2019).
- **MinHash.** Our projection layer (Section 3.1).
- **SimHash.** We compute the hashes of subword units as in *MinHash*, but we combine them using SimHash (Manku et al., 2007; Shrivastava and Li, 2014). The extracted feature is a binary feature of size $l$, where $l$ is the size (in bits) of the hashes applied to n-grams or entire subword units. The value at index $0 \leq p < l$ of the feature is the sign of the value $\phi_p$ stored at index $p$ of a histogram $\phi$ of length $l$. The histogram, initialized to 0, is populated by adding or subtracting 1 to $\phi_p$ whenever a hash value has a 1 or a 0 in position $p$.

| Projection | MTOP EN (Exact Match Acc.) | multiATIS EN (Intent Acc.) |
|----------------|------------------|-------------|
| Binary | 80.58 | 97.97 |
| TSP | 80.33 | 98.17 |
| SimHash | 80.99 | 98.38 |
| MinHash (Ours) | 82.51 | 98.57 |

Table 1: Comparison of projection layers on MTOP EN (exact match accuracy) and multiATIS EN (intent accuracy).

In Table 1, we report the best scores obtained after tuning each configuration. Overall, our projection layer MinHash obtains the best exact match accuracy and intent accuracy, with an absolute improvement over SimHash of +1.52 and +0.19. Binary and TSP obtain the worst performance: −1.93 and −2.18 on MTOP compared to MinHash, and −0.60 and −0.40 on multiATIS. Those differences confirm the limitations of binary and ternary features, highlight the importance of carefully designing the projection layer, and justify further research on projection algorithms. Given these results, we only consider our MinHash-based projection for the rest of the experiments.
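As a point of comparison with the MinHash projection of Section 3.1, the Binary and SimHash baselines described above can be approximated as follows. The hash family, the hash width, and the tie-breaking at zero are assumptions rather than the exact setups used in the experiments.

```python
import hashlib

def _token_hashes(token, n=256, bits=64):
    """Illustrative salted hashing producing n fixed-width hash values per token."""
    return [
        int.from_bytes(hashlib.blake2b(f"{i}|{token}".encode("utf-8"),
                                       digest_size=bits // 8).digest(), "big")
        for i in range(n)
    ]

def binary_projection(token, m=512, n=256):
    """Binary baseline: set bit (hv mod m) of an m-sized bitmap for each hash value hv."""
    bitmap = [0.0] * m
    for hv in _token_hashes(token, n):
        bitmap[hv % m] = 1.0
    return bitmap

def simhash_combination(hash_values, l=64):
    """SimHash combination: the feature at position p is the sign of a histogram that
    adds 1 when a hash value has bit p set and subtracts 1 otherwise."""
    histogram = [0] * l
    for hv in hash_values:
        for p in range(l):
            histogram[p] += 1 if (hv >> p) & 1 else -1
    return [1.0 if v > 0 else 0.0 for v in histogram]  # tie-breaking at zero is an assumption
```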
## 5.2 Model Comparison

Now, we investigate whether the MLP-Mixer is the optimal architecture to process this representation. First, we remove the MLP-Mixer and connect the output of the bottleneck layer to the classification heads (Projection-only). Then, we replace the MLP-Mixer with three alternative architectures: a convolutional neural network (CNN) (LeCun et al., 2015), a long short-term memory recurrent neural network (LSTM) (Hochreiter and Schmidhuber, 1997), and a transformer (Vaswani et al., 2017).

| Model | # Param. | MTOP EN (Exact Match Acc.) | multiATIS EN (Intent Acc.) |
|-----------------|----------|------------------|-------------|
| Projection-only | 0.2M | 49.15 | 81.54 |
| CNN | 1.0M | 73.74 | 97.77 |
| LSTM | 1.2M | 76.92 | 97.77 |
| Transformer | 1.0M | 74.05 | 97.97 |
| MLP-Mixer | 1.0M | 82.51 | 98.57 |

Table 2: Comparison of architectures processing the projection output, on MTOP EN (exact match accuracy) and multiATIS EN (intent accuracy).

Table 2 shows that using the projection layer directly as input to the classification heads, without a model in between, results in very poor performance. All of the alternative models perform significantly worse than the MLP-Mixer: −8.77, −5.59, and −8.46 in terms of exact match accuracy for the CNN, LSTM, and transformer models, respectively. This last result is remarkable: for the same number of parameters, the MLP-Mixer outperforms the transformer while having a linear complexity on the input length instead of a quadratic one. Overall, the evaluation shows that the MLP-Mixer is weight-efficient for processing the projection output and reaching high performance.

## 6 Evaluation

Finally, we run a complete evaluation on the test sets of MTOP and multiATIS. We compare our pNLP-Mixer with three very large models: XLU (Lai et al., 2019), which is a bi-LSTM model with pretrained XLU embeddings, and two pretrained multilingual models: XLM-R (Conneau et al., 2020) and multilingual BERT (mBERT) (Devlin et al., 2019). We also include two small models: pQRNN (Kaliamoorthi et al., 2021) and a simple transformer using the same projection as pQRNN. pQRNN is an embedding-free Quasi-RNN (Bradbury et al., 2017) model that shares the same philosophy as our proposed pNLP-Mixer: a small and task-specific model that learns directly from the text. For a fair comparison against pQRNN, we quantize our pNLP-Mixer models and report the performance of the 8-bit version. Finally, we include pQRNN distilled with mBERT on MTOP (the original study did not distill pQRNN on multiATIS). The performance values of all the baselines are taken from Kaliamoorthi et al. (2021).

| Model | # Param. | EN | ES | FR | DE | HI | JA | PT | TR | ZH | Avg |
|-------------|-----------|------|------|------|------|------|------|------|------|------|------|
| LSTM | 28M | 96.1 | 93.0 | 94.7 | 94.0 | 84.5 | 91.2 | 92.7 | 81.1 | 92.5 | 91.1 |
| mBERT | 170M | 98.3 | 97.4 | 98.6 | 98.5 | 94.5 | 98.6 | 97.4 | 91.2 | 97.5 | 96.9 |
| Transformer | 2M | 96.8 | 92.1 | 93.1 | 93.2 | 79.6 | 90.7 | 92.1 | 78.3 | 88.1 | 89.3 |
| pQRNN | 2M (8bit) | 98.0 | 97.0 | 97.9 | 96.6 | 90.7 | 88.7 | 97.2 | 86.2 | 93.5 | 94.0 |
| pNLP-Mixer | 1M (8bit) | 98.1 | 97.1 | 98.1 | 97.3 | 90.7 | 92.3 | 97.2 | 87.3 | 95.1 | 94.8 |

Table 3: Intent accuracy on the multiATIS test sets.

| Model | # Param. | EN | ES | FR | DE | HI | TH | Avg |
|-------------|-----------|------|------|------|------|------|------|------|
| XLU | 70M | 78.2 | 70.8 | 68.9 | 65.1 | 62.6 | 68.0 | 68.9 |
| XLM-R | 550M | 85.3 | 81.6 | 79.4 | 76.9 | 76.8 | 73.8 | 79.0 |
| mBERT | 170M | 84.4 | 81.8 | 79.7 | 76.5 | 73.8 | 72.0 | 78.0 |
| Transformer | 2M | 71.7 | 68.2 | 65.1 | 64.1 | 59.1 | 48.4 | 62.8 |
| pQRNN | 2M (8bit) | 78.8 | 75.1 | 71.9 | 68.2 | 69.3 | 68.4 | 71.9 |
| - distilled | | 79.4 | 75.4 | 73.0 | 68.6 | 70.2 | 69.5 | 72.7 |
| pNLP-Mixer | 1M (8bit) | 84.0 | 78.3 | 75.2 | 76.9 | 76.5 | 74.1 | 77.5 |

Table 4: Exact match accuracy on the MTOP test sets.
MTOP. Table 4 shows that the large pre-trained models, XLM-R and mBERT, obtain the highest scores. Notably, among the smaller alternatives, our pNLP-Mixer with only 1M parameters, 8-bit quantization, and no pretraining, i.e., 680x smaller than mBERT, reaches an average exact match accuracy only 0.5 and 1.5 points lower than mBERT and XLM-R. It even beats mBERT in the non-European languages. With those results, the pNLP-Mixer beats a twice larger pQRNN model across all languages by 7.8% on average. It even beats a pQRNN model distilled from mBERT by 6.6% on average.

multiATIS. Table 3 shows a similar trend compared to the MTOP dataset. On average, the pNLP-Mixer performs better than pQRNN while being twice as small. Remarkably, the pNLP-Mixer significantly outperforms the transformer model and the larger LSTM. Moreover, it reaches 97.8% of the performance of mBERT while being 680x smaller.

Discussion. The results show that the pNLP-Mixer represents a very competitive model for settings where the maximum model size is limited due to either memory or latency requirements. Our pNLP-Mixer models, with only 1M parameters and a size of one megabyte when quantized, reach competitive scores on both datasets compared to mBERT, which is a 680x larger model. This represents an important step towards ultra-small models for NLP. To put the numbers in perspective, for the non-quantized pNLP-Mixer model, the inference latency with batch size 1 on a *single CPU core* is as little as 2.4ms,1 with the projection layer taking 0.4ms. Finally, we could not compare the pNLP-Mixer with pQRNN in terms of FLOPS or latency because the authors did not make the code available; we are unable to produce comparable predictive performance with our implementation of pQRNN.

## 7 Conclusion

We introduce pNLP-Mixer, the first embedding-free model based on the MLP-Mixer architecture. Our main contribution is an efficient and yet effective projection layer which combines MinHash fingerprints and subword-unit tokenization to create rich token representations. Our evaluation shows that the pNLP-Mixer beats the state of the art of tiny NLP models, pQRNN, and offers sequence tagging performance that is up to 7.8% higher while using *half* of the parameters. The results are remarkable: a pNLP-Mixer model of only 1 million parameters provides 99.4% and 97.8% of the performance of mBERT on MTOP and multiATIS, respectively, even though mBERT is a pretrained model with *170x* more parameters. Our pNLP-Mixer model is simple to implement and accelerate, and provides competitive performance even without pre-training or distillation. Our work demonstrates the importance of projection methods and embedding-free architectures to advance the field of ultra-small models.

## References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. *arXiv preprint arXiv:1607.06450*.

James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-recurrent neural networks. In *5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings*. OpenReview.net.

Andrei Z. Broder. 2000. Identifying and filtering near-duplicate documents. In *COM '00: Proceedings of the 11th Annual Symposium on Combinatorial Pattern Matching*, pages 1–10. Springer-Verlag.

Tom B. Brown et al. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc.

Moses S. Charikar. 2002.
Similarity estimation techniques from rounding algorithms. In *Proceedings of* the thiry-fourth annual ACM symposium on Theory of computing, pages 380–388. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 8440– 8451, Online. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Li Fan, Pei Cao, Jussara Almeida, and Andrei Z Broder. 2000. Summary cache: a scalable wide-area web cache sharing protocol. IEEE/ACM transactions on networking, 8(3):281–293. Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021. Larger-scale transformers for multilingual masked language modeling. In *Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)*, pages 29–33, Online. Association for Computational Linguistics. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Piotr Indyk and Rajeev Motwani. 1998. Approximate nearest neighbors: towards removing the curse of dimensionality. In *Proceedings of the thirtieth annual ACM symposium on Theory of computing*, pages 604–613. Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In *Proceedings of* the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163– 4174, Online. Association for Computational Linguistics. Prabhu Kaliamoorthi, Sujith Ravi, and Zornitsa Kozareva. 2019. PRADO: Projection attention networks for document classification on-device. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5012– 5021, Hong Kong, China. Association for Computational Linguistics. Prabhu Kaliamoorthi, Aditya Siddhant, Edward Li, and Melvin Johnson. 2021. Distilling large language models into tiny and effective students using pqrnn. CoRR, abs/2101.08890. Guokun Lai, Barlas Oguz, Yiming Yang, and Veselin Stoyanov. 2019. Bridging the domain gap in cross-lingual document classification. *CoRR*, abs/1909.07009. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS). Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. *nature*, 521(7553):436. 
Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021. Mtop: A comprehensive multilingual task-oriented semantic parsing benchmark. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2950–2962. Hanxiao Liu, Zihang Dai, David So, and Quoc V Le. 2021. Pay attention to mlps. In *Advances in Neural* Information Processing Systems, volume 34, pages 9204–9215. Curran Associates, Inc. Gurmeet Singh Manku, Arvind Jain, and Anish Das Sarma. 2007. Detecting near-duplicates for web crawling. In WWW '07: Proceedings of the 16th international conference on World Wide Web, pages 141–150. Patti Price. 1990. Evaluation of spoken language systems: The atis domain. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. Sujith Ravi. 2017. Projectionnet: Learning efficient on-device deep networks using neural projections. CoRR, abs/1708.00630. Sujith Ravi. 2019. Efficient on-device models using neural projections. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pages 5370–5379. PMLR. Sujith Ravi and Zornitsa Kozareva. 2018. Selfgoverning neural networks for on-device short text classification. In *Proceedings of the 2018 Conference* on Empirical Methods in Natural Language Processing, pages 887–893, Brussels, Belgium. Association for Computational Linguistics. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In *International Conference* on Acoustics, Speech and Signal Processing, pages 5149–5152. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Anshumali Shrivastava and Ping Li. 2014. In defense of minhash over simhash. In *AISTATS*, volume 33 of JMLR Workshop and Conference Proceedings, pages 886–894. JMLR.org. Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 2158–2170, Online. Association for Computational Linguistics. Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2022. Efficient transformers: A survey. ACM Comput. Surv., 55(6). Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, and Alexey Dosovitskiy. 2021. Mlp-mixer: An all-mlp architecture for vision. *arXiv preprint arXiv:2105.01601*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. 
In *Proceedings of the* 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Association for Computational Linguistics. Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. 2015. Empirical evaluation of rectified activations in convolutional network. *arXiv preprint arXiv:1505.00853*. Weijia Xu, Batool Haider, and Saab Mansour. 2020. End-to-end slot alignment and recognition for crosslingual nlu. In *Proceedings of the 2020 Conference* on Empirical Methods in Natural Language Processing (EMNLP), pages 5052–5063. Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, and Xian Li. 2022. Efficient language modeling with sparse all-mlp. Sanqiang Zhao, Raghav Gupta, Yang Song, and Denny Zhou. 2021. Extremely small BERT models from mixed-vocabulary training. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2753–2759, Online. Association for Computational Linguistics.
fusco-antognini-2023-extracting
Extracting Text Representations for Terms and Phrases in Technical Domains
https://aclanthology.org/2023.acl-industry.7
Extracting dense representations for terms and phrases is a task of great importance for knowledge discovery platforms targeting highly-technical fields. Dense representations are used as features for downstream components and have multiple applications ranging from ranking results in search to summarization. Common approaches to create dense representations include training domain-specific embeddings with self-supervised setups or using sentence encoder models trained over similarity tasks. In contrast to static embeddings, sentence encoders do not suffer from the out-of-vocabulary (OOV) problem, but impose significant computational costs. In this paper, we propose a fully unsupervised approach to text encoding that consists of training small character-based models with the objective of reconstructing large pre-trained embedding matrices. Models trained with this approach can not only match the quality of sentence encoders in technical domains, but are 5 times smaller and up to 10 times faster, even on high-end GPUs.
# Extracting Text Representations For Terms And Phrases In Technical Domains Francesco Fusco∗ IBM Research ffu@zurich.ibm.com ## Abstract Extracting dense representations for terms and phrases is a task of great importance for knowledge discovery platforms targeting highlytechnical fields. Dense representations are used as features for downstream components and have multiple applications ranging from ranking results in search to summarization. Common approaches to create dense representations include training domain-specific embeddings with self-supervised setups or using sentence encoder models trained over similarity tasks. In contrast to static embeddings, sentence encoders do not suffer from the out-ofvocabulary (OOV) problem, but impose significant computational costs. In this paper, we propose a fully *unsupervised approach* to text encoding that consists of training small character-based models with the objective of *reconstructing* large pre-trained embedding matrices. Models trained with this approach can not only match the quality of sentence encoders in technical domains, but are *5 times* smaller and up to *10 times* faster, even on high-end GPUs. ## 1 Introduction Large pre-trained language models are extensively used in modern NLP systems. While the most typical application of language models is fine-tuning to specific downstream tasks, language models are often used as text encoders to create dense features consumed by downstream components. Among the many use cases of dense text representations there is search, question answering, and classification (Yang et al., 2020). Static embeddings, trained with algorithms such as Word2Vec (Mikolov et al., 2013), can exploit existing information extraction pipelines to create representations for entities, phrases, and terms present in text corpora. Static embedding matrices are trained with self-supervised approaches at regular intervals, either when additional data is avail- *Equal contribution. Diego Antognini∗ IBM Research Diego.Antognini@ibm.com able or to leverage improvements in information extraction models. Pre-trained embedding matrices can be considered as static feature stores, providing dense representations for entries belonging to a fixed vocabulary. Representations for entries outside of the vocabulary are not available, leading to the out-of-vocabulary (OOV) problem. In contrast, contextualized word embeddings leverage sentence encoders (Cer et al., 2018; Reimers and Gurevych, 2019) to dynamically create dense representations for any input text by performing a forward pass over a large language model. Specifically, a word embedding is computed at inference time based on its context, unlike static word embeddings that have a fixed (contextindependent) representation. In practice, sentence encoders solve the out-of-vocabulary (OOV) problem which affects static embeddings at the cost of high computational requirements and stronger dependencies on supervised datasets for similarity. Despite the popularity of sentence encoders, large pre-trained embedding matrices are still widely adopted in the industry to encode not only individual tokens but also multi-token entities extracted with in-house NLP pipelines. Once those embedding matrices are trained, the text representation for single- and multi-token entries encountered at *training time* can be looked up in constant time. 
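A toy sketch of what such a static feature store looks like in practice (keys, dimensions, and vectors are purely illustrative):

```python
import numpy as np

# Toy static feature store: vocabulary entries (including multi-token terms merged
# into single keys) mapped to pre-trained vectors.
feature_store = {
    "machine_learning": np.random.rand(200).astype(np.float32),
    "embedding": np.random.rand(200).astype(np.float32),
}

feature_store["machine_learning"]       # constant-time lookup for an in-vocabulary term
# feature_store["graph_neural_network"]   # KeyError: the out-of-vocabulary (OOV) problem
```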
In this paper, we describe an effective approach taken to provide high-quality textual representations for terms and phrases in a commercially available platform targeting highly-technical domains. Our contribution is a novel *unsupervised approach* to train text encoders that bridges the gap between large pre-trained embedding matrices and computationally expensive sentence encoders. In a nutshell, we exploit the vast knowledge encoded in large embedding matrices to train small character-based models with the objective of *reconstructing* them, i.e., we use large embedding matrices trained with self-supervision as large *training datasets* mapping 61 text to the embedding vectors. Our approach is extremely attractive for industrial setups as it leverages continuous improvements, testing, and inference costs of existing information extraction pipelines to create large datasets to train text encoders. This way, the return on investment for annotations, development, and infrastructure costs are maximized. In our evaluation, we highlight that by combining unsupervised term extraction annotators and static embeddings we can train lightweight character-based models that *match* the quality of supervised sentence encoders and provide *substantially better* representations than sentence encoders trained without supervision. Our models not only provide competitive representations, but are up to 5 times smaller and *10 times* faster than sentence encoders based on large language models. ## 2 Existing Approaches The mission of our industrial knowledge discovery platform is to extract knowledge from large corpora containing highly-technical documents, such as patents and papers, from diverse fields ranging from chemistry, to physics, to computer science. Information extraction is extremely challenging given the large variety of language nuances and the large cost of human annotations in such specialized domains. Therefore, it is of extreme importance to minimize the dependencies on annotated data and to use unsupervised approaches whenever possible. A recurring requirement from many internal components of our platform is the ability to extract high-quality dense representations for technical terms, entities, or phrases which can be encountered in many distinct technical fields. High-quality representations are extremely valuable to implement semantic search, to influence the ranking, or to be used directly as model features. In modern industrial systems, it is often the case that static and context-dependent embedding technologies coexist on the same platform to extract representations. While static embeddings are trained in a self-supervised fashion, sentence encoders are often built by fine-tuning pre-trained models on similarity tasks using annotated datasets. Having two separate approaches for text encoding is suboptimal as those systems are *completely independent* and embed terms into *distinct embedding spaces*. To reconcile those two worlds, we propose an approach where static embeddings and text encoders are mapping text into the *same embedding* space. Our intuition is that static embedding matrices storing embeddings for single tokens, but also multi-token terms, entities, or phrases, represent an invaluable source of information to train text encoders. While those matrices are built with selfsupervision, they can leverage existing annotators, supervised or not, which are commonly available in large text processing platforms. 
Our novel approach consists of using pre-trained embedding matrices as a training set, and training character-based models, called CharEmb, to predict the embedding vector for a text. This means that the character-based models will enable to project any sequence of characters in the same embedding space as the pre-trained embedding matrices. ## 2.1 Static Embeddings Algorithms to train static word embeddings, such as Word2Vec (Mikolov et al., 2013), have been introduced to efficiently compute the representations for words or entire phrases extracted from large text corpora. To create the representation of entire phrases, the original Word2Vec paper suggested to simply preprocess the text to make sure that phrases are treated as distinct words in the vocabulary. In a nutshell, the preprocessing step involves merging the tokens of which a phrase is composed into a unit which is not split by the tokenizer, e.g., ["machine", *"learning"*] becomes [*"machine_learning"*]. In a typical industrial setup, this approach can be naturally generalized to leverage the large set of annotators that are commonly available in largescale natural language processing platforms. This way one can create domain-specific embeddings not just for single tokens, but for entities, terms, phrases which are extracted in a natural language processing platform. Combining self-supervised word embedding algorithms together with existing sequence tagging models is extremely attractive. First, one can fully leverage the constant enhancements of in-house models to improve the quality of the embeddings for all the entities of interests. Second, since the sequence tagging models are built in-house and independently evaluated, using them to build embedding matrices means reducing the time spent in quality assurance (QA). Third, since the model inference is anyway computed over large amount of textual data while the natural language processing platform is operating, one can amortize ![2_image_0.png](2_image_0.png) that compute costs to accelerate another task, i.e., providing high-quality text representation. ## 2.2 Sentence Embeddings Static embedding matrices built with the previous approach can provide representations for several hundred millions entries when trained over large text corpora pre-processed with several name entity recognition (NER) models. Despite the large size, one can still experience the problem of outof-vocabulary (OOV), which means, downstream components might require text representations for text entries which are not present in the vocabulary. Text encoders have been introduced to solve the OOV problem. They provide ways to create embeddings that are not static but contextualized, which means that the embedding vector must be created on the fly via a model inference. Contextualized embeddings can be created using pre-trained models trained with self-supervised setups such as BERT (Devlin et al., 2019) or with text encoders which are still based on large pre-trained models, but fine-tuned with task similarity datasets. Sentence encoders trained with supervised setups using for example the NLI datasets (Bowman et al., 2015; Williams et al., 2018), such as S-BERT (Reimers and Gurevych, 2019) are well known to perform well in practice to create representations for entire sentences or features for finer grained text snippets. The limitation of supervised approaches for sentence encoding is that creating large annotated datasets for similarity is extremely expensive, especially in technical fields. 
Therefore, improving the sentence encoders themselves requires substantial investments in annotations. Unsupervised sentence encoder approaches (Gao et al., 2021; Wang et al., 2021), on the other hand, are well known to offer lower-quality representations than their supervised counterparts.

## 3 Our Model: CharEmb

Instead of having two completely disjoint systems to create text representations, we use character-based models trained over large static embedding matrices that project a sequence of text into the same embedding space as the embedding matrices used as training data. In practice, *we approach* text encoding as a compression problem, where character-based models are trained to reconstruct the pre-trained static embedding matrices, as shown in Figure 1. This training approach can rely on supervised sequence tagging models or can be implemented using fully unsupervised methods, such as the term extraction technologies described in the literature (Fusco et al., 2022). As we highlight in the evaluation section, a text encoder trained without any supervision can match the performance of supervised sentence encoders in creating representations for words or phrases in technical domains.

To train our models, we consider a static pre-trained embedding matrix as a gold dataset. An individual training sample associates a single- or multi-token text with an embedding vector. To leverage the dataset, we train a text encoder model to minimize the cosine distance between the produced vector and the original vector stored in the static embedding matrix. The models rely on character-level tokenization to generalize better on unseen inputs in technical domains.

![3_image_0.png](3_image_0.png)

Figure 2 highlights a simple yet effective LSTM-based model architecture. The pre-trained static embedding matrix (on the right) contains |V| embedding vectors of size k, where V is the vocabulary of the static embedding matrix. The model receives as input the text t and tokenizes it into characters, for which an embedding matrix is trained. Character-level embeddings are used as input for a Long Short-Term Memory sequence model. The last state of the LSTM layer is used to produce, via a Multi-Layer Perceptron, a vector of dimension k that represents the embedding of the text t. The network is trained to minimize the cosine distance between the predicted embedding and the original one stored in the embedding matrix. The number of distinct training samples is |V|, which means that embeddings with larger vocabularies correspond to bigger training datasets.

## 4 Evaluation

In this section, we compare our text representations using the *Patent Phrase Similarity Dataset* built by Google (Aslanyan and Wetherbee, 2022). Given two multiword technical concepts (called *anchor* and *target*), the task consists of predicting a similarity score between the two terms. Unlike general-purpose sentence datasets, such as STS-B (Cer et al., 2017) or SICK (Marelli et al., 2014), we focus on technical concepts in the patent and scientific domains. The dataset contains 38,771 unique concepts and 36,473, 2,843, and 9,232 concept pairs with human-annotated similarity scores for the train, validation, and test sets, respectively. In our case, we are only interested in the *zero-shot* scenario, and thus, we *only* consider the test set and ignore the train and validation sets.
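Before turning to the results, the following PyTorch sketch summarizes the CharEmb architecture and training objective of Section 3. The character vocabulary size, the hidden sizes, and the use of the last LSTM output position are illustrative simplifications, not the exact implementation.

```python
import torch
from torch import nn

class CharEmbSketch(nn.Module):
    """Character-level encoder mapping text to a k-dimensional vector (cf. Figure 2)."""
    def __init__(self, num_chars=256, char_dim=64, hidden=512, k=200):
        super().__init__()
        self.char_embedding = nn.Embedding(num_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, k))

    def forward(self, char_ids):                    # char_ids: (batch, num_characters)
        states, _ = self.lstm(self.char_embedding(char_ids))
        # Using the output at the last position is a simplification of "the last LSTM state".
        return self.mlp(states[:, -1, :])           # (batch, k)


def training_step(model, char_ids, target_vectors, optimizer):
    """One step on (text, vector) pairs taken from the static embedding matrix:
    minimize the cosine distance between the prediction and the stored vector."""
    predicted = model(char_ids)
    loss = (1.0 - nn.functional.cosine_similarity(predicted, target_vectors, dim=-1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```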
We evaluate the quality of the text representations using the same approach described in Aslanyan and Wetherbee (2022): we compute the Pearson and Spearman correlation between the cosine similarity of the two embeddings and the human-annotated scores.1

## 4.1 Static Pre-Trained Word Embeddings

First, we compare the performance of publicly-available pre-trained embedding matrices with embeddings trained on in-domain data. Following the approach described in Aslanyan and Wetherbee (2022), we compute the representation of concepts consisting of multiple tokens as the average of the embedding vectors of each unigram. To highlight the importance of in-domain data, we train static embedding matrices using a relatively small corpus consisting of 120 million sentences sampled from the ArXiv (Clement et al., 2019) and the HUPD (Suzgun et al., 2022) datasets. To train our embeddings, we pre-process the text after running term extraction using the unsupervised method described in Fusco et al. (2022). This way, our method can be considered fully unsupervised, and its evaluation does not depend on proprietary annotations and model architectures. The size of the text after pre-processing is 18 gigabytes, accounting for 1.9 billion tokens with term extraction enabled and 2.2 billion tokens without. We train the static embeddings using CBOW (Mikolov et al., 2013). Training for one epoch takes 25 minutes on a 16-core *AMD EPYC 7742*, which corresponds to less than *10 dollars* of compute costs with current cloud offerings. We do not consider the term extraction as part of the overall training costs since, in practice, the large amount of annotations that a large-scale NLP platform produces during its execution can be entirely reused.

Finally, we train three variants. The first contains unigrams and multiword expressions extracted with our term extractor, each represented with one token (i.e., "machine learning" → "machine_learning"). The second considers only unigrams (i.e., with the term-extraction component disabled). For the third we use FastText (Bojanowski et al., 2017) instead. We compare our embedding matrices with the official pre-trained models for GloVe (Pennington et al., 2014), Word2Vec (Mikolov et al., 2013), and FastText (Bojanowski et al., 2017). Those are trained on corpora that are substantially larger than our ArXiv-HUPD dataset (up to *300 times*).

| Pre-trained embedding | Voc. size | Size (MB) | Dim. | Pear. | Spear. |
|---|---|---|---|---|---|
| GloVe (6B) | 0.4M | 458 | 300 | 42.37 | 43.95 |
| GloVe (42B) | 1.9M | 2,194 | 300 | 40.30 | 45.83 |
| GloVe (840B) | 2.2M | 2,513 | 300 | 44.83 | 49.71 |
| FastText wiki-news (16B) | 1.0M | 1,144 | 300 | 39.01 | 46.03 |
| FastText crawl (600B) | 2.0M | 2,289 | 300 | 47.36 | 49.32 |
| Word2Vec news (100B) | 3.0M | 2,861 | 250 | 44.04 | 44.72 |
| ArXiv-HUPD (Ours) - uni (FastText) (2.2B) | 5.3M | 6,006 | 300 | 51.25 | 49.92 |
| ArXiv-HUPD (Ours) - uni (Word2Vec) (2.2B) | 1.8M | 1,403 | 200 | 50.82 | 52.97 |
| ArXiv-HUPD (Ours) - uni + terms (1.8B) | 5.2M | 3,984 | 200 | 51.62 | 53.91 |

Table 1: **Static** context-independent word embeddings (Pearson and Spearman correlation). Brackets denote the number of tokens of the training corpus. Training static embeddings on ArXiv-HUPD significantly improves the correlation with the human annotations.

Table 1 reports the Pearson and Spearman correlations when using the representations of the static word embedding matrices. Not surprisingly, the embeddings trained over the ArXiv-HUPD corpus, which contains text of the same domain, provide substantially better results than embeddings pre-trained over corpora that are out-of-domain, such as news and crawled websites. Our embeddings trained on only 2 billion tokens outperform embeddings trained over corpora that are up to two orders of magnitude larger. Further, we see that our static embedding matrices including terms provide only a marginal improvement, as the terms do not necessarily cover concepts present in the dataset.

After focusing on the raw embedding matrices, we evaluate the quality of our CharEmb models as compressors. In practice, we repeat the same experiments when the best ArXiv-HUPD word embedding matrix is *fully reconstructed* by projecting each vocabulary entry using a character-based model trained with our approach. We report correlations for models based on the Long Short-Term Memory (Hochreiter and Schmidhuber, 1997), because in our experiments it offered significantly better results than the Gated Recurrent Unit (Chung et al., 2014) and the Transformer (Vaswani et al., 2017).

| Models | Original | Reconstr. | ∆ | Compression |
|---|---|---|---|---|
| CharEmb Small (13MB) | 51.62 | 49.70 | −3.7% | 306x |
| CharEmb Base (38MB) | 51.62 | 54.33 | +5.3% | 236x |
| CharEmb Large (86MB) | 51.62 | 55.97 | +8.4% | 46x |

Table 2: **Reconstructed static** word embeddings (Pearson correlation). We report the correlation of the original ArXiv-HUPD embeddings (uni + terms) and the reconstructed ones inferred by our models. CharEmb Base achieves a compression by a factor of 236 and an improvement of +5.3%.

| Models | Size (MB) | Dim. | Pear. | Spear. |
|---|---|---|---|---|
| BERT (Unsup.) | 1,344 | 1,024 | 43.07 | 41.40 |
| Patent-BERT (Unsup.) | 1,344 | 1,024 | 54.00 | 54.47 |
| SimCSE (Unsup.) | 438 | 768 | 53.35 | 51.91 |
| Patent-SimCSE (Unsup.) | 438 | 768 | 50.51 | 48.33 |
| Sentence-BERT (Sup.) | 438 | 768 | 59.82 | 57.66 |
| SimCSE (Sup.) | 438 | 768 | 56.63 | 56.81 |
| *ArXiv-HUPD - unigrams (Word2Vec) (Ours)* | | | | |
| CharEmb Small (Unsup.) | 13 | 200 | 41.68 | 39.55 |
| CharEmb Base (Unsup.) | 38 | 200 | 47.57 | 46.16 |
| CharEmb Large (Unsup.) | 86 | 200 | 47.58 | 45.60 |
| *ArXiv-HUPD - unigrams + terms (Ours)* | | | | |
| CharEmb Small (Unsup.) | 13 | 200 | 55.53 | 56.73 |
| CharEmb Base (Unsup.) | 38 | 200 | 58.53 | 59.66 |
| CharEmb Large (Unsup.) | 86 | 200 | 59.84 | 60.52 |

Table 3: **Contextualized** embeddings produced by text encoders in a zero-shot setup (Pearson and Spearman correlation).
We report the performance for three model sizes: Small (13MB), *Base* (38MB), and *Large* (86MB). Table 2 shows that our base (38MB) and large (86MB) models compress the embedding matrix they are trained on *and improve its quality* according to the Pearson correlation (similar trend with Spearman). This means that a model of solely 38 MB not only can fully reconstruct the *3.98 GB* matrix it has been trained on, given only its vocabulary (236x space reduction), but also that the reconstructed matrix provides a correlation gain of +5.3% compared to the original one. ## 4.2 Sentence (Contextualized) Embeddings In Table 2 we use our models to reconstruct an original embedding matrix. The reconstructed matrix is used as a static pre-trained embedding matrix: given a phrase in the test of the patent dataset, we compute the representation as the average of the unigrams. Instead, in this section we use our models as *text encoders*, which means performing the inference with our CharEmb models to extract representations of the phrases present in the test set. Following Aslanyan and Wetherbee (2022), we compare our CharEmb variants with the following pre-trained models used as text encoders: BERT (Devlin et al., 2019), Patent-BERT, and the sentence encoder Sentence-BERT (Reimers and Gurevych, 2019) trained on the natural language inference datasets (Bowman et al., 2015; Williams et al., 2018). We augment the proposed baselines with a popular sentence-encoding method SimCSE (Gao et al., 2021), which can be trained with supervision (similarly to S-BERT) or in an unsupervised manner. For the latter, we include the publicly-available variant trained on Wikipedia (Unsupervised SimCSE) and train our own model over the small ArXiv-HUPD dataset (Patent-SimCSE).2 In Table 3, we report the Pearson and Spearman correlation when using text encoders to produce text representations via inferencing to a model. Thanks to our approach, lightweight LSTM-based models outperform larger BERT-based models in a *zero-shot* setup. Our large model is *5x smaller* than Sentence-BERT and SimCSE and yet provides better representations. Training CharEmb does not require any manual annotation, since embeddings are trained with self-supervision and the term extraction is fully unsupervised. *Our smallest model* outperforms all unsupervised approaches. Furthermore, we note that term extraction is a *fundamental component* for the creation of high-quality CharEmb models. When training over unigramonly embeddings, our models performance drops significantly to the levels of BERT. Finally, in Figure 3 we show that our models not only provide the best representations, but also offers substantially lower inference latencies on both high-end GPUs and a single-core CPU. Moreover, training CharEmb Large on an embedding matrix with a vocabulary of five million entries takes only three hours on a single NVIDIA Tesla A100, which is a negligible time compared to the 10 days required to train SimCSE on the same dataset. ## 5 Related Work Static embeddings trained with self-supervised setups became popular with word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). While those algorithms have been originally introduced to embed individual tokens, the approach can be generalized to entire phrases or multiple token entities by preprocessing training corpora such that multiple tokens are merged into one. 
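The preprocessing step mentioned above can be sketched as follows; the greedy longest-match strategy and the maximum span length are assumptions for illustration.

```python
def merge_terms(tokens, terms):
    """Merge multi-token terms into single vocabulary units before embedding training,
    e.g. ["machine", "learning"] -> ["machine_learning"] (greedy longest match)."""
    merged, i = [], 0
    while i < len(tokens):
        for span in range(min(len(tokens) - i, 5), 1, -1):   # try the longest span first
            candidate = " ".join(tokens[i:i + span])
            if candidate in terms:
                merged.append(candidate.replace(" ", "_"))
                i += span
                break
        else:                                                 # no multi-token term matched
            merged.append(tokens[i])
            i += 1
    return merged

merge_terms(["we", "use", "machine", "learning"], {"machine learning"})
# -> ['we', 'use', 'machine_learning']
```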
FastText (Bojanowski et al., 2017) can be seen as an extension to Word2Vec which relies on n-grams to extract representations of unseen text. Contextualized embeddings (e.g., Elmo (Peters et al., 2018)) are created by taking into account the contex of each token. Sentence encoders (Schuster et al., 2019; Cer et al., 2018) are a generalization of contextual embeddings. They can be trained on sentence-level tasks using supervised datasets, such as NLI, or with unsupervised methods (Gao et al., 2021; Wang et al., 2021). Our method to train text 2Additional results when training CharEmb on GloVe, FastText, and Word2Vec embeddings are shown in Appendix B. ![5_image_0.png](5_image_0.png) encoders is fully unsupervised and provides higherquality representations than supervised encoders. Embedding compression is a topic of great interest not only for natural language processing (Pansare et al., 2022; Liao et al., 2020), but also in recommender systems (Zhang et al., 2020; Kim et al., 2020; Shi et al., 2020). The primary goal of our work is not to reduce the size of a static embedding matrix, but rather to generalize the embeddings to entries not seen at training time. Work has been done to align embedding spaces coming from different models (Joulin et al., 2018; Schuster et al., 2019; Grave et al., 2019). Instead of aligning spaces coming from static embeddings and sentence encoders, we introduce text encoders trained to project text in the same space of a static embedding matrix used as a training dataset. ## 6 Conclusion Creating embeddings for terms and phrases is of paramount importance for complex natural language processing platforms targeting highlytechnical domains. While out-of-the-box pretrained sentence encoders are often considered as baselines, representations of *similar quality* can be obtained with substantially lighter and simpler character-based models which are *5 times smaller* in size and *10 times faster* at inference time, even on high-end GPUs. The key to obtaining such results is to realize that large static embedding matrices storing representations for tokens and terms constitute a very rich supervised dataset to train text encoders working at the character level. Since both term extraction and embedding training can be performed without any labeled data, we have proposed a method to train text encoders which does not require any label. Those models are trained with the objective of reconstructing the original embedding matrix and can not only be used as lighter alternatives to sentence encoders, but also as lossless compressors for large embedding matrices. ## References Grigor Aslanyan and Ian Wetherbee. 2022. Patents phrase to phrase semantic matching dataset. *arXiv* preprint arXiv:2208.01171. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the Association for Computational Linguistics*, 5:135–146. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. 
In *Proceedings* of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, December 2014. Colin B. Clement, Matthew Bierbaum, Kevin P. O'Keeffe, and Alexander A. Alemi. 2019. On the use of arxiv as a dataset. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Francesco Fusco, Peter Staar, and Diego Antognini. 2022. Unsupervised term extraction for highly technical domains. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), (Industry track). Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with wasserstein procrustes. In *Proceedings of the TwentySecond International Conference on Artificial Intelligence and Statistics*, volume 89 of *Proceedings* of Machine Learning Research, pages 1880–1890. PMLR. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Comput.*, 9(8):1735–1780. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2979–2984, Brussels, Belgium. Association for Computational Linguistics. Yeachan Kim, Kang-Min Kim, and SangKeun Lee. 2020. Adaptive compression of word embeddings. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3950– 3959, Online. Association for Computational Linguistics. Siyu Liao, Jie Chen, Yanzhi Wang, Qinru Qiu, and Bo Yuan. 2020. Embedding compression with isotropic iterative quantization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8336–8343. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA). 
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, *Advances in* Neural Information Processing Systems 26, pages 3111–3119. Niketan Pansare, Jay Katukuri, Aditya Arora, Frank Cipollone, Riyaaz Shaik, Noyan Tokgozoglu, and Chandru Venkataraman. 2022. Learning compressed embeddings for on-device inference. In *Proceedings* of Machine Learning and Systems 2022, MLSys 2022, Santa Clara, CA, USA, August 29 - September 1, 2022. mlsys.org. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In *Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599–1613, Minneapolis, Minnesota. Association for Computational Linguistics. Hao-Jun Michael Shi, Dheevatsa Mudigere, Maxim Naumov, and Jiyan Yang. 2020. Compositional embeddings using complementary partitions for memory-efficient recommendation systems. In *Proceedings of the 26th ACM SIGKDD International* Conference on Knowledge Discovery & Data Mining, KDD '20, page 165–175, New York, NY, USA. Association for Computing Machinery. Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, and Stuart M. Shieber. 2022. The harvard uspto patent dataset: A largescale, well-structured, and multi-purpose corpus of patent applications. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Kexin Wang, Nils Reimers, and Iryna Gurevych. 2021. TSDAE: Using transformer-based sequential denoising auto-encoderfor unsupervised sentence embedding learning. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 671–688, Punta Cana, Dominican Republic. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2020. Multilingual universal sentence encoder for semantic retrieval. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87–94, Online. Association for Computational Linguistics. Caojin Zhang, Yicun Liu, Yuanpu Xie, Sofia Ira Ktena, Alykhan Tejani, Akshay Gupta, Pranay Kumar Myana, Deepak Dilipkumar, Suvadip Paul, Ikuhiro Ihara, Prasang Upadhyaya, Ferenc Huszar, and Wenzhe Shi. 2020. Model size reduction using frequency based double hashing for recommender systems. In Proceedings of the 14th ACM Conference on Recommender Systems, RecSys '20, page 521–526, New York, NY, USA. Association for Computing Machinery. ## A Training Details To perform the experiments described in Table 3 we use pre-trained models publicly available in HuggingFace: ## - **Bert**: bert-large-uncased. ## - **Patent-Bert**: anferico/bert-for-patents. - **Sentence-BERT**: sentence-transformers/all-mpnet-base-v2. - **Supervised-SimCSE**: princeton-nlp/sup-simcse-bert-base-uncased. - **Unsupervised-SimCSE**: princeton-nlp/unsup-simcse-bert-base-uncased. Regarding hyperparameter tuning, the baselines do not need any tuning since all experiments are in a zero-shot fashion. For EmbChar, we only tune | Correlation | | | | | | | | | | |------------------------------------|-----------------------|----------|--------|-------|--------|-------|--------|-------|-------| | Original | Reconstr. | Context. | | | | | | | | | Pre-trained embedding | |Voc.| Size (MB) Dim. | Pear. | Spear. | Pear. | Spear. | Pear. | Spear. | | | | GloVe (6B) | 0.4M | 458 | 300 | 42.37 | 43.95 | 36.82 | 41.15 | 27.36 | 31.36 | | GloVe (42B) | 1.9M | 2,194 | 300 | 40.30 | 45.83 | 29.89 | 42.93 | 20.66 | 21.35 | | GloVe (840B) | 2.2M | 2,513 | 300 | 44.83 | 49.71 | 39.32 | 47.39 | 18.97 | 22.79 | | FastText wiki-news (16B) | 1.0M | 1,144 | 300 | 39.01 | 46.03 | 30.72 | 45.66 | 28.82 | 27.47 | | FastText crawl (600B) | 2.0M | 2,289 | 300 | 47.36 | 49.32 | 45.91 | 49.79 | 34.49 | 35.67 | | Word2Vec news (100B) | 3.0M | 2,861 | 250 | 44.04 | 44.72 | 40.77 | 45.28 | 45.54 | 46.09 | | ArXiv-HUPD uni (2.2B) | 1.8M | 1,403 | 200 | 50.82 | 52.97 | 54.28 | 55.21 | 47.58 | 45.6 | | ArXiv-HUPD uni + terms (1.8B) 5.2M | 3,984 | 200 | 51.62 | 53.91 | 55.97 | 57.27 | 59.84 | 60.52 | | Table 4: Additional results when training *CharEmb Large (86MB)* on standard pre-trained word embedding matrices. Without multiword expression, the reconstruction and contextualized via prediction performance are limited. the encoder by using LSTM, GRU, or Transformer on the validation set. More specifically, we split the word embedding matrix into train and validation sets with a ratio of 80-20. No other hyperparameters have been explored. We stop the training of EmbChar using early-stopping on the validation set when the average cosine similarity has not been improved since 10 epochs. Our hyperparameters are shown in Table 5. We train our word embedding with Word2Vec with CBOW and a window size of 8 and 25 epochs. For FastText, we kept the default parameters. 
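As a concrete illustration of the static-embedding training configuration just described, the sketch below uses gensim; the toolkit choice, the min_count setting and the toy corpus are assumptions, while CBOW, the window size of 8, the 25 epochs and the 200-dimensional vectors follow the text and Table 4.

```python
from gensim.models import FastText, Word2Vec  # gensim is an assumed toolkit choice

# Toy tokenized corpus standing in for the ArXiv/HUPD sentences used in the paper.
corpus = [
    ["unsupervised", "term", "extraction", "for", "technical", "domains"],
    ["compressing", "word", "embedding", "matrices", "with", "character", "models"],
]

# Word2Vec with CBOW (sg=0), window size 8 and 25 epochs, as stated above;
# vector_size=200 follows the dimensionality reported in Table 4.
w2v = Word2Vec(sentences=corpus, sg=0, window=8, epochs=25,
               vector_size=200, min_count=1)

# FastText trained with its default hyperparameters (only toy-corpus settings changed).
ft = FastText(sentences=corpus, vector_size=200, min_count=1)

print(w2v.wv["term"].shape, ft.wv["term"].shape)
```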
All experiments have been run on the following hardware:

- **CPU**: AMD EPYC 7763 64-core processor.
- **RAM**: 1.96 TB.
- **GPU**: NVIDIA Tesla A100.
- **OS**: Red Hat Enterprise Linux 8.6.
- **Software**: PyTorch 1.12.1, CUDA 11.6.

We emphasise that we train the word embedding matrices on a 16-core virtual machine hosted on AMD EPYC 7742. An epoch takes approximately 25 minutes. Training our embedding matrix *ArXiv-HUPD uni + terms* requires *less* than 10 dollars of compute budget in the cloud. Training our model EmbChar Large thereafter takes a few hours on a single NVIDIA Tesla A100, costing approximately $5 to $10³. In contrast, training SimCSE on the same dataset takes around 10 days.

³The hourly pricing for spot instances with one A100 GPU is in the range $1.25-$1.5 in public cloud offerings.

| Hyperparameter | EmbChar Small | EmbChar Base | EmbChar Large |
|------------------|-------|-------|--------|
| Hidden dimension | 512 | 512 | 768 |
| Number of layers | 1 | 2 | 2 |
| Bidirectional | True | True | True |
| Dropout | 0.2 | 0.2 | 0.2 |
| Learning rate | 0.001 | 0.001 | 0.0005 |
| Weight decay | 1e-8 | 1e-8 | 1e-8 |
| Batch size | 256 | 256 | 256 |

Table 5: The hyperparameters for all EmbChar variants.

## B Additional Results

In Table 4 we report the results for the experiments in Section 4 when training our CharEmb Large (86MB) on different word embedding matrices. For each of the embedding matrices considered, we measure the Pearson and Spearman correlation for three setups: i) the original embedding matrix (*Original*), ii) an embedding matrix reconstructed using the same vocabulary as the original one (*Reconstr.*), and iii) when the embeddings of given terms are contextual, i.e., predicted with a CharEmb model trained over the original matrix (*Context.*).

The table showcases multiple important findings. First, among the pre-trained models, the vocabulary size plays a significant role in achieving high correlation, with the Word2Vec model with a vocabulary of 3 million entries outperforming embedding matrices that have been trained over larger datasets (e.g., GloVe 840B or FastText crawl). Second, the domain of the corpus used to train the embeddings plays a significant role: by training with a corpus of only 2 billion in-domain tokens, an embedding matrix with a vocabulary of 1.8 million entries achieves correlation similar to that of much larger embedding matrices. Third, our CharEmb model achieves the best performance when trained with an embedding matrix containing embeddings for terms. Predicting the embeddings with our CharEmb model allows us to achieve significantly higher correlation than the original matrix containing terms.
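For reference, correlation scores of the kind reported in Table 4 are conventionally obtained by comparing cosine similarities of embedding pairs against gold similarity ratings with Pearson and Spearman correlation; the sketch below illustrates only that generic protocol (the concrete evaluation set and pairing are not specified in this excerpt) using random stand-in data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def similarity_correlations(emb_a, emb_b, gold_scores):
    """Pearson/Spearman between cosine similarities of embedding pairs and gold ratings."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cos = (a * b).sum(axis=1)                 # cosine similarity of each pair
    pear, _ = pearsonr(cos, gold_scores)
    spear, _ = spearmanr(cos, gold_scores)
    return pear, spear

rng = np.random.default_rng(0)
print(similarity_correlations(rng.normal(size=(50, 200)),
                              rng.normal(size=(50, 200)),
                              rng.uniform(0.0, 5.0, size=50)))
```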
wang-etal-2023-cocaclip
{C}oca{CLIP}: Exploring Distillation of Fully-Connected Knowledge Interaction Graph for Lightweight Text-Image Retrieval
https://aclanthology.org/2023.acl-industry.8
Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP) are typically adopted for various vision-language applications, including text-image retrieval. However, these models are still less practical on edge devices or for real-time situations, due to the substantial indexing and inference time and the large consumption of computational resources. Although knowledge distillation techniques have been widely utilized for uni-modal model compression, how to expand them to the situation when the numbers of modalities and teachers/students are doubled has been rarely studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Coca) technique for cross-modal pre-training distillation. Based on our findings, the resulting CocaCLIP achieves SOTA performances on the widely-used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industry application of our method on an e-commercial platform further demonstrates the significant effectiveness of CocaCLIP.
# Conaclip: Exploring Distillation Of Fully-Connected Knowledge Interaction Graph For Lightweight Text-Image Retrieval Jiapeng Wang1∗ Chengyu Wang2† Xiaodan Wang3 Jun Huang2 **Lianwen Jin**1† 1South China University of Technology, Guangzhou, China 2Alibaba Group, Hangzhou, China 3Fudan University, Shanghai, China 1eejpwang@mail.scut.edu.cn, eelwjin@scut.edu.cn 2{chengyu.wcy, huangjun.hj}@alibaba-inc.com 3xiaodanwang20@fudan.edu.cn ## Abstract Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP (Radford et al., 2021)) are typically adopted for various vision-language applications, including text-image retrieval. However, these models are still less practical on edge devices or for real-time situations, due to the substantial indexing and inference time and the large consumption of computational resources. Although knowledge distillation techniques have been widely utilized for uni-modal model compression, how to expand them to the situation when the numbers of modalities and teachers/students are doubled has been rarely studied. In this paper, we conduct comprehensive experiments on this topic and propose the fully-Connected knowledge interaction graph (Cona) technique for cross-modal pre-training distillation. Based on our findings, the resulting ConaCLIP achieves SOTA performances on the widely-used Flickr30K and MSCOCO benchmarks under the lightweight setting. An industry application of our method on an ecommercial platform further demonstrates the significant effectiveness of ConaCLIP.1 ## 1 Introduction Text-image retrieval (TIR) aims at retrieving a list of the most relevant images from a large image collection when a specific text query is given. With the rapid development of information interaction and social intercourse, it has been regarded as a crucial component of cross-modal applications and required by various real-world scenarios, such as e-commercial platforms (sites). Recently, inspired by the great success of pretrained language models (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020), research on largescale vision-language pre-training (Tan and Bansal, 2019; Li et al., 2020; Radford et al., 2021; Li et al., 2022; Wang et al., 2022b, 2023) has achieved remarkable progress on a variety of vision-language tasks, including text-image retrieval. These existing methods can be typically classified into two categories according to the model architecture: *crossencoder* and dual-encoder. *Cross-encoder* typically adds additional Transformer (Vaswani et al., 2017) layers to model the deep interaction between image and text representations. It can generally boost the retrieval performance, while resulting in an unbearably slow retrieval speed when applied to the entire image collection since the cross-modal costs are required for each image sample whenever a new text query is given. In contrast, *dual-encoder* encodes the visual and textual inputs in a wholly decoupled manner. The image representation is allowed to be pre-computed and re-used independent of the text queries. Such approaches can also utilize fast approximate nearest neighbor (ANN) search (Muja and Lowe, 2009; Jegou et al., 2010; Johnson et al., 2019) at runtime. Although dual-encoder is usually preferred for real-world applications, the existing related models such as CLIP (Radford et al., 2021) are still less practical on edge devices with limited computing resources, or for the dynamic indexing scenario, e.g., private photos/messages collections (sites). 
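To make the dual-encoder argument above concrete, the following is a small illustrative sketch (not taken from the paper) of the late-interaction retrieval pattern it describes: image embeddings are pre-computed once and indexed, and only the text query is encoded at search time. The use of faiss and the random stand-in vectors are assumptions for illustration; any ANN library would serve.

```python
import numpy as np
import faiss  # one possible ANN library (assumption); the text only requires some ANN index

d = 512                                        # embedding dimension (illustrative)
rng = np.random.default_rng(0)

# Stand-in for image embeddings pre-computed offline by the image encoder.
image_embs = rng.normal(size=(10_000, d)).astype("float32")
image_embs /= np.linalg.norm(image_embs, axis=1, keepdims=True)

index = faiss.IndexFlatIP(d)                   # inner product == cosine on unit vectors
index.add(image_embs)

# At query time only the text encoder runs; here a random stand-in query embedding.
query = rng.normal(size=(1, d)).astype("float32")
query /= np.linalg.norm(query, axis=1, keepdims=True)
scores, ids = index.search(query, 10)          # top-10 candidate images
```

Because the image index is built once and reused, query-time cost is dominated by the text encoder, which is exactly what the lightweight students discussed in this paper aim to shrink.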
To address this issue, we aim to start from the largescale pre-trained dual-encoder models and focus on the pre-training distillation to present a series of much smaller, faster, and effective counterparts. Knowledge distillation (KD) (Hinton et al., 2014) is proposed to transfer knowledge with soft targets from a teacher to a student in the same modality. MoTIS (Ren and Zhu, 2022) simply repeats intramodal InfoNCE-based (Oord et al., 2018) learning in both text and image domains for distillation. Nevertheless, when the number of modalities doubles for dual-encoder structure, which means text and image teachers as well as text and image students, these methods still only involve intra-modal teacher-student knowledge interaction learning. Instead, in this paper, we comprehensively explore the fully-Connected knowledge interaction graph (Cona) between every possible teacher-student or student-student pair. As shown in Fig. 1, each two-way arrow represents the knowledge interaction learning between the two models it points to. And the aforementioned KD and MoTIS belong to a single *blue* arrow and the two *blue* arrows, respectively. Moreover, in order to better explore the potential of Cona, we implement and investigate various supervision strategies to guide the model optimization, which finally makes each type of learning contribute to the overall improvement. We release various sizes of lightweight dualencoder models named ConaCLIP for different real-world scenarios. Compared with the previous SOTA method (Ren and Zhu, 2022), our ConaCLIP achieves 10.6/12.9/12.8 R@1 gains on Flickr30K/MSCOCO (1K)/MSCOCO (5K) benchmarks under the same model setting. We have also verified its effectiveness on an e-commerce platform. It can achieve 1.44×/1.92∼4.86× inference speed-up with competitive performances given image/text queries. The main contributions of this paper can be summarized as follows: - We propose a new pre-training distillation method with the fully-connected knowledge interaction graph (Cona) for lightweight dualencoder models. - We release a series of lightweight ConaCLIP models to the open-source community, which can significantly surpass previous SOTA models on the widely-used Flickr30K and MSCOCO benchmarks. - We provide a real-world application of this method in real industrial scenarios to further demonstrate its practical values. ## 2 Related Work Cross-encoder (Tan and Bansal, 2019; Li et al., 2019; Chen et al., 2020; Li et al., 2020; Chen et al., 2022) refers to multiple layers of dense crossmodal interactions, e.g., cross-attention (Vaswani et al., 2017), are typically employed to image and text representations for more fine-grained merge and alignment. Although it often achieves superior retrieval accuracy thanks to the patch/token-level integration, the high memory cost and computation inefficiency make it impractical under time-critical real-world settings. Oppositely, for dual-encoder (Zhang et al., 2020; Jia et al., 2021; Radford et al., 2021; Dou et al., 2022), image and text features are encoded into a joint embedding space separately, and the modality interaction is only handled by a simple cosine similarity of the final image and text feature vectors. Such approaches can be regarded as scalable and indexable: the specific choices of encoder architectures can be independent and dynamic, and the late-interaction scheme allows for efficient largescale searching. Pre-training distillation for lightweight dualencoder architecture has been rarely studied. 
Vanilla knowledge distillation (Hinton et al., 2014) can be referred to as the knowledge transfer from a teacher to a student in the same modality based on soft targets. However, it is a general procedure with no awareness of, or pertinence to, cross-modal learning. MoTIS (Ren and Zhu, 2022) separately compresses the text or image encoder with an intra-modal contrastive objective that aligns the output embeddings of the student and teacher of each modality, which can be seen as an alternative form of knowledge distillation. Nevertheless, these methods ignore or do not find an appropriate approach to leverage the cross-modal distillation process. Going beyond them, our method is dedicated to exploring the fully-connected knowledge interaction graph for dual-encoder distillation, which is a natural and effective extension.

## 3 Methodology

In this section, we first give the preliminary knowledge, then propose our pre-training distillation framework with Cona. Finally, we introduce various supervision strategies.

## 3.1 Preliminary

For the sake of explanation, we abbreviate *text*, *image*, *teacher* and *student* as $T$, $I$, $\mathrm{tch}$ and $\mathrm{stu}$ respectively. $F$ represents the L2-normalized feature vector outputted by the encoder architecture $E$. Before student learning, the teachers $E^T_{\mathrm{tch}}$ and $E^I_{\mathrm{tch}}$ are commonly first pre-trained using an objective that pushes the embeddings of matched text-image pairs closer while pushing those of non-matched ones apart, with large model capacity and massive data.

![2_image_0.png](2_image_0.png)

Figure 1: The fully-connected knowledge interaction graph between the text/image teachers and students (recovered labels: Image Teacher, Image Student, intra-modal tch-stu learning, intra-modal stu-stu learning).

Specifically, CLIP (Radford et al., 2021) takes the InfoNCE (Oord et al., 2018) loss as the supervision form. Without losing generality, given two outputted feature vectors $F^a$ and $F^b \in \mathbb{R}^{N\times d}$, we define that:

$$p_{i,j}(F^{a},F^{b})=\frac{\exp(F_{i}^{a}F_{j}^{b\top}/\tau)}{\sum_{k}\exp(F_{i}^{a}F_{k}^{b\top}/\tau)},\tag{1}$$

$${\mathcal{L}}_{F^{a}\to F^{b}}^{\mathrm{InfoNCE}}=-\frac{1}{N}\sum_{i=1}^{N}\log(p_{i,i}(F^{a},F^{b})),\tag{2}$$

where $N$ is the mini-batch size, $d$ is the channel size and $\tau$ is the temperature hyper-parameter. The final loss of CLIP can be formulated as:

$${\mathcal{L}}^{\mathrm{CLIP}}={\mathcal{L}}_{F_{\mathrm{tch}}^{T}\to F_{\mathrm{tch}}^{I}}^{\mathrm{InfoNCE}}+{\mathcal{L}}_{F_{\mathrm{tch}}^{I}\to F_{\mathrm{tch}}^{T}}^{\mathrm{InfoNCE}}.\tag{3}$$

Next, the pre-training distillation of the students $E^T_{\mathrm{stu}}$ and $E^I_{\mathrm{stu}}$ begins, with the parameters of the teachers $E^T_{\mathrm{tch}}$ and $E^I_{\mathrm{tch}}$ frozen. MoTIS (Ren and Zhu, 2022) also adopts the InfoNCE-based loss at this stage, and implements it in both the text and image domains separately:

$${\mathcal{L}}^{\mathrm{MoTIS}}={\mathcal{L}}_{F_{\mathrm{stu}}^{T}\to F_{\mathrm{tch}}^{T}}^{\mathrm{InfoNCE}}+{\mathcal{L}}_{F_{\mathrm{stu}}^{I}\to F_{\mathrm{tch}}^{I}}^{\mathrm{InfoNCE}}.\tag{4}$$

According to the subscripts in Eq. (4), it is easy to see that MoTIS only involves *intra-modal teacher-student* learning.
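To make the notation of this section concrete, here is a minimal PyTorch-style sketch (not the authors' released code) of the InfoNCE objective in Eq. (2) and of how Eqs. (3) and (4) combine it; the temperature value, batch size and feature dimension are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(feat_a, feat_b, tau=0.07):
    """L^{InfoNCE}_{F^a -> F^b} of Eq. (2); feat_a, feat_b are L2-normalized, shape (N, d)."""
    logits = feat_a @ feat_b.t() / tau                        # pairwise similarities
    targets = torch.arange(feat_a.size(0), device=feat_a.device)
    return F.cross_entropy(logits, targets)                   # = -(1/N) * sum_i log p_{i,i}

# Random stand-ins for the four encoders' outputs (N = 8 pairs, d = 256).
f_t_tch, f_i_tch, f_t_stu, f_i_stu = [
    F.normalize(torch.randn(8, 256), dim=-1) for _ in range(4)
]

# Eq. (3): CLIP pre-training of the teachers, in both retrieval directions.
l_clip = info_nce(f_t_tch, f_i_tch) + info_nce(f_i_tch, f_t_tch)

# Eq. (4): MoTIS-style intra-modal teacher-student distillation (teacher outputs frozen).
l_motis = info_nce(f_t_stu, f_t_tch.detach()) + info_nce(f_i_stu, f_i_tch.detach())
print(l_clip.item(), l_motis.item())
```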
## 3.2 Pre-Training Distillation With Cona

Unlike existing works, our method introduces the fully-connected knowledge interaction graph (Cona) for pre-training distillation. Apart from intra-modal teacher-student learning, our method also includes *intra-modal student-student* learning, *inter-modal teacher-student* learning and *inter-modal student-student* learning, as shown in Fig. 1. This fully-connected learning graph established for the students $E^T_{\mathrm{stu}}$ and $E^I_{\mathrm{stu}}$ serves as an integration of multi-view and multi-task learning schemes, which can strengthen the robustness and effectiveness (Caruana, 1997; Luong et al., 2016; Aghajanyan et al., 2021) required by pre-trained models. We suggest that each type of learning process in Cona should be concretely implemented in detailed supervision strategies. Therefore, we propose and investigate various supervision strategies in the next subsection.

## 3.3 Supervision Strategies

Here we continue to use $F^a$ and $F^b$ (prediction) along with $\overline{F^a}$ and $\overline{F^b}$ (target) as placeholders for illustration, and present the following effective supervision strategies:

**InfoNCE loss** is a type of contrastive loss function. It has been formulated in Eq. (2), and successfully applied for distillation by Eq. (4).

**Feature-wise distance (FD) loss** directly minimizes the distance between feature vectors. We utilize the squared L2-norm as the measure:

$${\mathcal{L}}_{F^{a}\Leftrightarrow F^{b}}^{\mathrm{FD}}=\frac{1}{2}\frac{1}{Nd}\sum_{i=1}^{N}\sum_{j=1}^{d}(F_{i,j}^{a}-F_{i,j}^{b})^{2}.\tag{5}$$

**Similarity-wise distance (SD) loss** minimizes the distance criterion between similarity matrices:

$${\mathcal{L}}_{F^{a}\to F^{b}\Leftrightarrow\overline{F^{a}}\to\overline{F^{b}}}^{\mathrm{SD}}=\frac{1}{2}\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(F_{i}^{a}F_{j}^{b\top}-\overline{F_{i}^{a}}\,\overline{F_{j}^{b}}^{\top}\right)^{2}.\tag{6}$$

Since $F^a$, $F^b$, $\overline{F^a}$ and $\overline{F^b}$ have been L2-normalized, the values of the cosine-similarities $F_{i}^{a}F_{j}^{b\top}$ and $\overline{F_{i}^{a}}\,\overline{F_{j}^{b}}^{\top}$ are in the range $[-1, 1]$. The distance between the prediction $F_{i}^{a}F_{j}^{b\top}$ and the target $\overline{F_{i}^{a}}\,\overline{F_{j}^{b}}^{\top}$ needs to be shortened. Hence, the squared L2-norm is also adopted here.

**KL-Div loss** uses the Kullback–Leibler divergence to measure the difference between the predicted and the target probability distributions. Given $p_{i,j}$ acquired by the softmax operation shown in Eq. (1), it minimizes the following optimization objective:
$${\mathcal{L}}_{F^{a}\to F^{b}\,\|\,\overline{F^{a}}\to\overline{F^{b}}}^{\text{KL-Div}}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}p_{i,j}(F^{a},F^{b})\log\frac{p_{i,j}(F^{a},F^{b})}{p_{i,j}(\overline{F^{a}},\overline{F^{b}})}.\tag{7}$$

It is worth noting that, when performing the learning process indicated by an arrow shown in Fig. 1, the common practice is to use the teachers' outputs $F^T_{\mathrm{tch}}$ and $F^I_{\mathrm{tch}}$ as the target in Eqs. (6) and (7) that students learn from. While in our case, with two modalities available, we propose to use the paired arrow as the target, and we call this the **symmetric version** (for SD loss and KL-Div loss).
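As a concrete reference for the SD and KL-Div strategies of Eqs. (6) and (7), and for the standard versus symmetric choice of target used in the example that follows, here is a minimal PyTorch-style sketch (not the authors' implementation); the temperature value is an illustrative assumption, and whether a target branch is detached is left to the caller, since frozen teacher outputs carry no gradient anyway.

```python
import torch
import torch.nn.functional as F

def sd_loss(fa, fb, fa_t, fb_t):
    """Eq. (6): match the N x N similarity matrix of the prediction pair (fa, fb)
    to that of the target pair (fa_t, fb_t); all inputs are L2-normalized, shape (N, d)."""
    pred = fa @ fb.t()
    tgt = fa_t @ fb_t.t()
    return 0.5 * F.mse_loss(pred, tgt)        # (1/2)(1/N^2) * sum of squared differences

def kl_div_loss(fa, fb, fa_t, fb_t, tau=0.07):
    """Eq. (7): row-wise softmax of the prediction similarities vs. that of the target pair."""
    log_p = F.log_softmax(fa @ fb.t() / tau, dim=-1)      # prediction distribution (log)
    log_q = F.log_softmax(fa_t @ fb_t.t() / tau, dim=-1)  # target distribution (log)
    # KL(p || q) = sum_j p * (log p - log q), averaged over the N rows ("batchmean").
    return F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")

# Standard inter-modal teacher-student KL-Div (cf. Eq. 8): the teachers' arrow is the target.
#   kl_div_loss(f_t_stu, f_i_tch, f_t_tch, f_i_tch) + kl_div_loss(f_i_stu, f_t_tch, f_i_tch, f_t_tch)
# Symmetric version (cf. Eq. 9): the paired student-to-teacher arrow serves as the target.
#   kl_div_loss(f_t_stu, f_i_tch, f_i_stu, f_t_tch) + kl_div_loss(f_i_stu, f_t_tch, f_t_stu, f_i_tch)
```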
For example, inter-modal teacher-student learning implemented with the KL-Div loss can be formulated as

$${\mathcal{L}}_{F_{\mathrm{stu}}^{T}\to F_{\mathrm{tch}}^{I}\,\|\,F_{\mathrm{tch}}^{T}\to F_{\mathrm{tch}}^{I}}^{\text{KL-Div}}+{\mathcal{L}}_{F_{\mathrm{stu}}^{I}\to F_{\mathrm{tch}}^{T}\,\|\,F_{\mathrm{tch}}^{I}\to F_{\mathrm{tch}}^{T}}^{\text{KL-Div}},\tag{8}$$

while its symmetric version is

$${\mathcal{L}}_{F_{\mathrm{stu}}^{T}\to F_{\mathrm{tch}}^{I}\,\|\,F_{\mathrm{stu}}^{I}\to F_{\mathrm{tch}}^{T}}^{\text{KL-Div}}+{\mathcal{L}}_{F_{\mathrm{stu}}^{I}\to F_{\mathrm{tch}}^{T}\,\|\,F_{\mathrm{stu}}^{T}\to F_{\mathrm{tch}}^{I}}^{\text{KL-Div}}.\tag{9}$$

This modification deepens the interaction between the four encoders during optimization. So far, any one of the learning types can be concretely implemented by any one of the supervision strategies, except for a few meaningless combinations. Detailed loss functions are listed in Tab. 1.

Table 1: The concrete loss function used for each combination of learning type (intra-/inter-modal student-student and teacher-student) and supervision strategy (InfoNCE, FD, SD, KL-Div, Sym-SD, Sym-KL-Div).

## 4 Experiments

## 4.1 Setup

We use Conceptual Captions (CC3M) (Sharma et al., 2018) and Conceptual 12M (CC12M) (Changpinyo et al., 2021) for pre-training distillation, which consist of 3M and 12M noisy text-image pairs respectively. During fine-tuning, we use MSCOCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015) as benchmarks. MSCOCO has 113,287 images for training, 5K images for validation, and both 5K and 1K for testing. Flickr30K has 28,783 images for training, 1K images for validation, and 1K for testing. Following previous works, we use recall R@k (k=1,5,10) as the main metric.

We use the open-source CLIP (Radford et al., 2021) with ViT-B/32 (Dosovitskiy et al., 2020) as the teacher model. Its image encoder is a 12-layer ViT with a hidden size of 768 and 12 attention heads. Its text encoder is a 12-layer Transformer with a hidden size of 512 and 8 attention heads. For the student model, we use ViT-S/16 with a hidden size of 384 as the image encoder, and initialize it from the weights pre-trained on ImageNet-21K (Ridnik et al., 2021). For the text encoder, we experiment with 2-, 4- and 6-layer Transformers, of which the weights are initialized from the first corresponding layers of the teacher's text encoder. The details of the model settings are shown in Tab. 6.

In pre-training distillation, we train the student models for 4 epochs using AdamW (Loshchilov and Hutter, 2018) with a batch size of 1024 for both images and texts, a learning rate of 3e-4, and a weight decay of 0.1. We employ a cosine learning rate scheduler with 10,000 warm-up steps. In fine-tuning, we use the same optimization setting as in MoTIS (Ren and Zhu, 2022). Experiments are conducted on 4 NVIDIA TESLA V100 32G GPUs.

## 4.2 Ablation Study

Considering that our complete pre-training distillation takes a relatively long time, we follow the setup of (Ren and Zhu, 2022) and train ConaCLIP on CC3M for 1 epoch with batch size 84 to conduct the ablation study. Taking Eq.
(4) as the naive baseline, we aim to find out which of the proposed | Learning Type | Supervision Strategies | | | | | | |------------------------------|--------------------------|----------------|----------------|----------------|----------------|----------------| | InfoNCE | FD | SD | KL-Div | Sym-SD | Sym-KL-Div | | | intra-modal stu-stu learning | \ | \ | 58.8/83.7/90.1 | 57.1/82.7/88.8 | 57.1/82.0/89.2 | 56.8/81.6/88.6 | | inter-modal stu-stu learning | 34.7/58.7/69.9 | 56.6/82.1/88.8 | 58.6/83.6/90.0 | 56.5/82.4/88.9 | \ | \ | | intra-modal tch-stu learning | 57.6/82.4/89.0† | 57.6/82.0/88.4 | 58.5/83.2/89.6 | 55.1/80.0/87.4 | 58.7/83.4/89.9 | 56.3/81.5/88.3 | | inter-modal tch-stu learning | 51.4/76.3/83.8 | 50.0/80.7/88.4 | 57.6/82.5/88.6 | 56.9/81.8/88.7 | 56.9/81.8/88.7 | 59.1/83.4/89.8 | Table 2: Ablation study of text-image retrieval R@1/5/10 on Flickr30K. †Baseline. **Bold** denotes all R@ks have obvious improvements. All five losses **in bold** will be added to the baseline loss to finally serve as our framework. | Model | Text | Image | Flickr30K | MSCOCO (1K) | MSCOCO (5K) | | | | | | |---------------------------------------------------------------|------------------|---------|-------------|---------------|---------------|------|------|------|------|------| | Encoder | Encoder | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | (a) Fair Comparisons InfoNCE-based | 38.4 | 68.0 | 78.0 | 53.3 | 85.3 | 93.5 | 31.5 | 60.3 | 73.3 | | | Cross-modal KD | 41.1 | 70.6 | 80.0 | 54.9 | 86.0 | 93.6 | 33.4 | 61.9 | 74.4 | | | CLIP's[512/6] | ViT-S/16[384/12] | | | | | | | | | | | MoTIS | 57.0 | 82.1 | 88.8 | 62.7 | 88.2 | 94.5 | 42.6 | 69.6 | 79.4 | | | ConaCLIP (Ours) | 60.6 | 85.2 | 91.2 | 68.6 | 92.4 | 96.7 | 47.3 | 76.1 | 85.2 | | | (b) Model Zoo and Benchmarks ConaCLIP-6L (Ours) CLIP's[512/6] | 67.6 | 89.6 | 94.4 | 75.6 | 94.6 | 97.4 | 55.4 | 83.5 | 89.9 | | | ConaCLIP-4L (Ours) | CLIP's[512/4] | 67.0 | 89.3 | 94.2 | 75.4 | 94.6 | 97.4 | 55.3 | 83.1 | 89.9 | | ViT-S/16[384/12] | | | | | | | | | | | | ConaCLIP-2L (Ours) | CLIP's[512/2] | 65.6 | 89.2 | 93.9 | 74.7 | 94.3 | 97.3 | 54.1 | 82.2 | 89.4 | Table 3: (a) Fair comparisons of text-image retrieval results on Flickr30K and MSCOCO (1K and 5K). (b) Our model zoo and the corresponding benchmarks. **Bold** indicates the best performance. "[m/n]" represents n layers with the hidden size to be m. combinations of learning types and supervision strategies can bring further improvements. The fine-tuned results on Flickr30K is shown in Tab. 2. We can make some observations that: 1) With an appropriate choice of detailed supervision strategies, each type of learning can further bring obvious improvements on the basis of the baseline. 2) The effect of each learning type is greatly affected by the implemented loss function. It also indicates that the pre-training distillation process should be carefully explored regarding the supervision strategy. 3) Our proposed symmetric version losses (Sym-SD and Sym-KL-Div) can generally achieve superior performances to the standard ones for (intra/inter-modal) teacher-student learning. We can also attain several findings that: 1) For (intra/inter-modal) student-student learning where students first make knowledge interaction and then learn together from teachers, SD loss performs the best. Because the actual retrieval application uses this cosine similarity to rank candidates, it can help students acquire goal-oriented knowledge more directly. 
It also relaxes the learning task of students from teachers' feature space to the similarity space. 2) For (intra/inter-modal) teacher-student learning, our proposed symmetric version losses are more suitable. Compared with the standard losses, they make the knowledge interaction between teachers and students closer during optimization. In this regard, student encoders can cooperate more intimately in downstream tasks. 3) Although the naive intra-modal teacher-student learning with InfoNCE loss can serve as a competent baseline, the addition of SD and Sym-SD losses of the same learning type can complement its effectiveness. On the other hand, the other three different learning types with proper loss choices can also benefit the effect of pre-training distillation. More findings on distilling intermediate layers are shown in A.2. Our method has been established with the further integration of the highlight (in bold) combinations in Tab. 2 based on the baseline. The effect after full integration is shown in Tab. 3(a). ## 4.3 Performance Fair Comparisons. In order to better verify the effectiveness of ConaCLIP, besides the previous SOTA, we also experiment with two strong baseline methods. As shown in Tab. 3(a), *InfoNCE-based* indicates the naive cross-modal contrastive learning procedure. *Cross-modal KD* represents distilling the cross-modal in-batch probability distribution of teachers into students. All these experiments | Model | Text-Image Retrieval | Image-Text Retrieval | Disk Space (MB) | QPSt | QPSi | | | | | |----------------|------------------------|------------------------|-------------------|--------|--------|------|-----|-------|-------| | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | | | | CLIP | 16.5 | 48.0 | 61.3 | 18.0 | 49.7 | 62.2 | 578 | 1.00× | 1.00× | | EC-CLIP | 25.0 | 63.5 | 76.0 | 25.9 | 64.1 | 75.7 | 578 | 1.00× | 1.00× | | EC-ConaCLIP-6L | 24.3 | 62.4 | 75.6 | 24.8 | 62.4 | 73.9 | 254 | 1.92× | 1.44× | | EC-ConaCLIP-4L | 23.6 | 61.1 | 74.3 | 22.0 | 59.7 | 72.2 | 230 | 2.71× | 1.44× | | EC-ConaCLIP-2L | 23.0 | 60.7 | 73.5 | 21.8 | 59.3 | 72.0 | 206 | 4.86× | 1.44× | are conducted under the pre-training setup of (Ren and Zhu, 2022) for fair comparisons. As can be observed, 1) Cross-modal KD which introduces the knowledge distillation process obviously outperforms the standard InfoNCE-based approach. 2) MoTIS greatly surpasses InfoNCE-based and Cross-modal KD. This reveals the superiority of intra-modal teacher-student learning over intermodal student-student learning in the case of dualencoder distillation. 3) Our ConaCLIP shows significant improvements compared with competitors on all evaluation metrics: 3.6/3.1/2.4 R@1/5/10 gains on Flickr30K, 5.9/4.2/2.2 R@1/5/10 gains on MSCOCO (1K) and 4.7/6.5/5.8 R@1/5/10 gains on MSCOCO (5K). This fully demonstrates the effectiveness of our distillation framework with Cona. Model Zoo and Benchmarks. In order to better promote the development of cross-modal textimage research, we release a series of lightweight dual-encoder models. Their benchmark results are shown in Tab. 3(b). In this case, the power of ConaCLIP is further unlocked and brings further improvements. Specifically, even ConaCLIP-2L can achieve 8.6/7.1/5.1 R@1/5/10 gains on Flickr30K, 12.0/6.1/2.8 R@1/5/10 gains on MSCOCO (1K) and 11.5/12.6/10.0 R@1/5/10 gains on MSCOCO (5K) compared with the previous SOTA. We have also found that the capacity of the text encoder may have limited effects on these performances. 
For example, ConaCLIP-4L can achieve competitive results with ConaCLIP-6L, and ConaCLIP-2L has only minor drops. ## 5 Industry Application We apply the proposed technique to end-to-end cross-modal retrieval in an e-commerce platform, where we vectorize the search queries and the products and then perform product retrieval and ranking with nearest-neighbor search (Muja and Lowe, 2009; Jegou et al., 2010; Johnson et al., 2019), as shown in Fig. 2. We first collect massive data of text-image pairs from e-commerce products in our platform, where the titles of products can act as text information. We utilize most of the data to pre-train an e-commerce version of the CLIP model (denoted as EC-CLIP) with ViT-B/32 as the image encoder, which is overly large for online deployment. For the remaining data, we utilize 3M pairs for distilling the lightweight EC-ConaCLIP. To evaluate its effectiveness, we hold out a separate set of 100K pairs for fine-tuning and 5K/5K pairs used in validating/testing. In this set of experiments, we train EC-ConaCLIP for 20 epochs in pretraining distillation, and fine-tune both EC-CLIP and EC-ConaCLIP for 5 epochs. The remaining settings are the same as in Section 4.1. In apart to the R@k metric, we also report the disk space (MB) and the acceleration rate of Query Per Second (QPSi for image and QPSt for text) to evaluate model's memory footprints and inference speed. In Tab. 4, we report the averaged results where the inference speed is tested on an NVIDIA TESLA V100 (16G) GPU. As seen, the compressed EC-ConaCLIP-6L only takes 44% disk space (254MB) of EC-CLIP meanwhile being 1.44×/1.92× faster with image/text queries. It also performs on par with EC-CLIP. Our ECConaCLIP-2L can further achieve up to 4.86× inference speed-up with text queries, and 64% size reduction (from 578MB to 206MB). We provide some case studies in A.4. ## 6 Conclusion In this paper, we propose Cona for pre-training distillation with dual-encoder architecture. It gathers every type of knowledge interaction learning with appropriate supervision choice to benefit the cross-modal distillation. The resulting ConaCLIP achieves superior performances on both general benchmarks and industry applications. For future work, we will explore more variants of visual encoders, and continue to tap the potential of dual-encoder distillation. ## Acknowledgements This research is supported in part by Alibaba Innovative Research Foundation (No. D8200510), NSFC (Grant No. 61936003), Zhuhai Industry Core and Key Technology Research Project (No. 2220004002350). This work is also supported by Alibaba Cloud Group, through Research Talent Program with South China University of Technology. ## References Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. In *EMNLP*, pages 5799–5811. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *NeurIPS*, 33:1877–1901. Rich Caruana. 1997. Multitask learning. *Machine* Learning, 28(1):41–75. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12M: Pushing webscale image-text pre-training to recognize long-tail visual concepts. In *CVPR*. Feilong Chen, Xiuyi Chen, Shuang Xu, and Bo Xu. 2022. Improving cross-modal understanding in visual dialog via contrastive learning. 
In *ICASSP*, pages 7937–7941. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: Universal image-text representation learning. In *ECCV*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional Transformers for language understanding. In *NAACL-HLT*, pages 4171–4186. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*. Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, et al. 2022. An empirical study of training end-to-end vision-and-language transformers. In *CVPR*, pages 18166–18176. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2014. Distilling the knowledge in a neural network. NeurIPS Deep Learning Workshop. Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. *IEEE TPAMI*, 33(1):117–128. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In *ICML*, pages 4904–4916. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In *ICML*, pages 12888–12900. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. VisualBERT: A simple and performant baseline for vision and language. *arXiv preprint arXiv:1908.03557*. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In *ECCV*, pages 121–137. Xuewei Li, Songyuan Li, Bourahla Omar, Fei Wu, and Xi Li. 2021. ResKD: Residual-guided knowledge distillation. *IEEE TIP*, 30:4735–4746. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In *ECCV*, pages 740– 755. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. *arXiv preprint arXiv:1907.11692*. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In *ICLR*. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. *ICLR*. Marius Muja and David G Lowe. 2009. Fast approximate nearest neighbors with automatic algorithm configuration. *VISAPP*, 2(331-340):2. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In *ICCV*, pages 2641–2649. 
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *ICML*, pages 8748– 8763. Siyu Ren and Kenny Zhu. 2022. Leaner and faster: Twostage model compression for lightweight text-image retrieval. In *NAACL-HLT*, pages 4085–4090. Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. 2021. ImageNet-21K pretraining for the masses. In *NeurIPS Datasets and Benchmarks* Track. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, pages 2556–2565. Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from Transformers. In *EMNLP*, pages 5100–5111. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *NeurIPS*, 30. Chengyu Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, and Wei Lin. 2022a. Easynlp: A comprehensive and easy-to-use toolkit for natural language processing. In *EMNLP*, pages 22–29. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022b. OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In *ICML*, pages 23318–23340. Xiaodan Wang, Lei Li, Zhixu Li, Xuwu Wang, Xiangru Zhu, Chengyu Wang, Jun Huang, and Yanghua Xiao. 2023. AGREE: aligning cross-modal entities for image-text retrieval upon vision-language pre-trained models. In *WSDM*, pages 456–464. Quanzheng Xu, Liyu Liu, and Bing Ji. 2022. Knowledge distillation guided by multiple homogeneous teachers. *Information Sciences*, 607:230–243. Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, and Fei Wu. 2020. DeVLBert: Learning deconfounded visio-linguistic representations. In *ACMMM*, pages 4373–4382. Yichen Zhu and Yi Wang. 2021. Student customized knowledge distillation: Bridging the gap between student and teacher. In *ICCV*, pages 5057–5066. ## A Appendix A.1 Model Settings We give detailed parameters on the settings of our ConaCLIP models in Tab. 6, such as the number of parameters, layers, heads, etc. | Model Setting | ConaCLIP-6L ConaCLIP-4L ConaCLIP-2L | | | |---------------------------|---------------------------------------|-------|-------| | Number of Parameters | 66M | 60M | 53M | | Text Encoder Layers | 6 | 4 | 2 | | Text Encoder Heads | 8 | 8 | 8 | | Text Encoder Hidden Size | 512 | 512 | 512 | | Vocabulary Size | 49408 | 49408 | 49408 | | Text Length | 77 | 77 | 77 | | Image Encoder Layers | 12 | 12 | 12 | | Image Encoder Heads | 6 | 6 | 6 | | Image Encoder Hidden Size | 384 | 384 | 384 | | Image Patch Size | 16 | 16 | 16 | | Image Size | 224 | 224 | 224 | ## A.2 Negative Results On Distilling Intermediate Layers We also present an exploratory study on distilling the knowledge of intermediate layers from teacher encoders. We first evenly divide the encoder of each student/teacher into six parts along the number of layers, and then perform our distillation technique on the feature representations of each part. The experiment results are shown in Tab. 5. 
We can observe that additional distillation with features of the intermediate layers does not bring about positive improvement. This suggests that we should mainly focus on the representation matching ability of the output of the last layer for the cross-modal retrieval task. Due to the difference in capabilities between models of different sizes, they can choose different paths to learn the goal-oriented features in the same task during distillation (Li et al., 2021; Zhu and Wang, 2021; Xu et al., 2022). In our application, we suggest that it can be inappropriate to force small models to learn the same path as the large ones.

| Applied Parts | R@1 | R@5 | R@10 |
|---|---|---|---|
| 6th (Baseline) | **60.6** | **85.2** | **91.2** |
| 5-6 | 59.5 | 84.2 | 90.7 |
| 4-6 | 59.5 | 84.3 | 90.9 |
| 3-6 | 57.9 | 83.5 | 90.2 |
| 2-6 | 58.6 | 84.1 | 90.9 |
| 1-6 | 59.2 | 84.5 | 90.8 |

Table 5: Text-image retrieval R@1/5/10 on Flickr30K when our distillation is additionally applied to intermediate layers.

## A.3 Application In E-Commerce Product Retrieval

We apply the proposed distillation technique to end-to-end cross-modal retrieval in an e-commerce platform, where we vectorize the search queries and the products and then perform product retrieval and ranking with nearest-neighbor search. The whole framework is shown in Fig. 2.

![8_image_0.png](8_image_0.png)

Figure 2: The end-to-end retrieval framework (recovered labels: Index of Vectorized Queries; Index of Vectorized Products; Nearest-Neighbor Retrieval; Ranking List; example queries such as "Custom leisure handbag", "Fashion luxury watch", "Plaid down jacket").

## A.4 Case Study

![8_image_1.png](8_image_1.png)

Tab. 7 shows the case studies in our e-commerce retrieval scenario. For the same text query, we show the top-1 image retrieval results of the open-source CLIP model and our EC-ConaCLIP model respectively. From these cases, we can find that our model can better capture conceptual and fine-grained fashion information during cross-modal text-image retrieval, and maintain the cross-modal alignment effect of text-image samples after the lightweight distillation. For example, in Case 1, our model more accurately captures the cartoon subject in the target commodity as "unicorn". In Case 2, our model pays more attention to the fine-grained information "2 layers double handle", while maintaining the correct perception of other information such as "Stainless steel", "steamers pot" and "with lid". In Case 3, our EC-ConaCLIP better captures the color clue of "Hot red". Although the retrieval result of CLIP also conforms to the information of "sports tights high waist yoga pants", its color is more like "dark red". Based on our distillation technique, the resulting model can sufficiently learn the perception ability of the teacher model about commodity fashion concepts and reduce matching errors.
jia-etal-2023-kg
{KG}-{FLIP}: Knowledge-guided Fashion-domain Language-Image Pre-training for {E}-commerce
https://aclanthology.org/2023.acl-industry.9
Various Vision-Language Pre-training (VLP) models (e.g., CLIP, BLIP) have sprung up and dramatically advanced the benchmarks for public general-domain datasets (e.g., COCO, Flickr30k). Such models usually learn the cross-modal alignment from large-scale well-aligned image-text datasets without leveraging external knowledge. Adapting these models to downstream applications in specific domains like fashion requires fine-grained in-domain image-text corpus, which are usually less semantically aligned and in small scale that requires efficient pre-training strategies. In this paper, we propose a knowledge-guided fashion-domain language-image pre-training (FLIP) framework that focuses on learning fine-grained representations in e-commerce domain and utilizes external knowledge (i.e., product attribute schema), to improve the pre-training efficiency. Experiments demonstrate that FLIP outperforms previous state-of-the-art VLP models on Amazon data and on the Fashion-Gen dataset by large margins. FLIP has been successfully deployed in the Amazon catalog system to backfill missing attributes and improve the customer shopping experience.
# Kg-Flip: Knowledge-Guided Fashion-Domain Language-Image Pre-Training For E-Commerce Qinjin Jia†1∗ Yang Liu†B2 Shaoyuan Xu2 **Huidong Liu**2 Daoping Wu3∗ Jinmiao Fu2 Roland Vollgraf2 **Bryan Wang**2 1 North Carolina State University 2 Amazon.com, Inc. 3Iowa State University †Equal contribution B Corresponding author: {yliuu@amazon.com} ## Abstract Various Vision-Language Pre-training (VLP) models (e.g., CLIP, BLIP) have sprung up and dramatically improved the benchmarks of public general-domain datasets (e.g., COCO, Flickr30k). Such models typically learn the cross-modal alignment from large-scale wellaligned image-text datasets. Adapting these models to downstream applications in specific domains, such as fashion, requires fine-grained in-domain image-text datasets. However, such datasets are usually less semantically aligned and smaller in scale, which requires more efficient pre-training strategies. In this paper, we propose a knowledge-guided fashion-domain language-image pre-training (KG-FLIP) framework that focuses on learning fine-grained representations in the e-commerce domain and utilizes external knowledge (i.e., product attribute schema) to improve the pre-training efficiency. Experimental results demonstrate that KG-FLIP outperforms previous state-of-the-art VLP models on Amazon data and the FashionGen dataset by large margins. KG-FLIP has been successfully deployed in the Amazon catalog system to backfill missing attributes and improve the customer shopping experience. ## 1 Introduction Modern e-commerce websites exhibit products with multi-modal information (e.g., product images, product titles, and product bullet points) to inform customers' purchase decisions. The effective exploitation of such multi-modal product information is crucial for product understanding and downstream vision-language (VL) applications, such as product categorization, search, and recommendation. Meanwhile, recent large-scale visionlanguage pre-training (VLP) models have led to impressive performance improvements on many general-domain VL tasks (Radford et al., 2021; Yu et al., 2022). As a result, there has been a surge of *Work done during internship at Amazon. interest in adapting such VLP models to facilitate various applications in e-commerce scenarios. Unlike the well-aligned coarse-grained languageimage datasets in the general domain, the paired data in the e-commerce domain have two characteristics. First, both of the product titles/descriptions and the images contain richly detailed (i.e., finegrained) product information compared to datasets in the general domain. Second, the product textual information and images usually share only partial information while containing complementary information (i.e., not well-aligned). Thus, an effective pre-training method needs to align the common portion and fuse the distinct facts from each modality in a fine-grained manner. Rather than aligning the entire image and text pair at a global level using contrastive loss as CLIP (Radford et al., 2021) does, we designed our pre-training tasks to focus on a finer level of text tokens and image patches. In addition, previous VLP methods relied solely on the inductive bias of the model to align crossmodality representations through vast amounts of paired data. Such an approach is data-hungry, inefficient, and disregards the availability of structured product knowledge. Thus, we propose to leverage existing knowledge in the e-commerce catalog to facilitate such alignment. 
Specifically, for each type of product (e.g., dress), the catalog stores its applicable attributes (e.g., neckline style) and enumerated attribute values (e.g., v-neck, crewneck). Such attribute knowledge can serve as anchor points to help VLP models efficiently acquire salient semantic relations between modalities. To address the above challenges, we propose KG-FLIP: a knowledge-guided Fashion-domain Language-Image Pre-training to improve the VLP models for e-commerce data. The design of KGFLIP is inspired by the state-of-the-art generalpurpose VLP model BLIP [6]. We adapt its design for our use case by 1) replacing the widelyused image-text contrastive (ITC) objective with the masked language-image modeling (MLIM) pretraining objective, to facilitate multi-model fusion at the token level instead of cross-model alignment; 2) leveraging the structured knowledge of product attribute schema information to guide the pretraining process, and facilitate the VLP model to learn more fine-grained product representations. These enhancements can be generalized to other real-world applications, where image-text pairs are not well-aligned in semantics and external knowledge can be leveraged to guide the pre-training. ## 2 Related Work 2.1 Vision-Language Pre-Training The emergence of large-scale pre-training models (e.g., BERT (Devlin et al., 2018), ViT (Kim et al., 2021)) has significantly advanced the state of the art across various uni-modal domains, such as natural language processing (NLP), computer vision (CV), and speech recognition (SR). Recently, researchers have introduced the pre-training and-then finetuning paradigm into the vision-language (VL) domain for solving multi-modal tasks, which requires models to comprehend both the input image and text contents (Dou et al., 2022). Existing visionlanguage pre-training (VLP) models (e.g., CLIP (Radford et al., 2021), ALIGN (Jia et al., 2021), Flamingo (Alayrac et al., 2022)) have proven to be highly effective on various downstream VL tasks, such as image retrieval (IR), text retrieval (TR), and visual question answering (VQA). Consequently, VLP has become the de facto practice to tackle multi-modal problems because of its superior performance (Dou et al., 2022; Chen et al., 2023). Existing VLP models can be divided into two categories: object-detector (OD)-based VLP models (e.g., LXMERT (Tan and Bansal, 2019), UNITER (Chen et al., 2020), OSCAR (Li et al., 2020)) and end-to-end VLP models (e.g., ALIGN (Jia et al., 2021), ALBEF (Li et al., 2021), METER (Dou et al., 2022)). OD-based VLP models rely on pretrained object detectors to extract region-based image features, and then utilize a multi-modal encoder to fuse the image features with text tokens. While OD-based VLP models have brought impressive performance, crafting the pre-trained object detectors for them is both annotation-expensive and computation-expensive, because it requires bounding box annotations for pre-training and highresolution images during inference (Li et al., 2021). On the other hand, end-to-end VLP models directly feed image patch features into a pre-trained ViT model, which eliminates the need for costly annotations and significantly improves inference speed, and have been adopted by the more recent work (Chen et al., 2021; Kim et al., 2021). Thus, we focus on end-to-end VLP models in this work. 
## 2.2 Knowledge-Enhanced Vision-Language Pre-Training Recently, there has been a surge of interest in utilizing domain knowledge (e.g., knowledge graph, keywords) to guide VLP in order to reach better performance and improve the pre-training efficiency. For example, Chen et al. (2021) proposed to incorporate knowledge graph (KG) embeddings into VLP models to enhance the learning of semantically aligned and knowledge-aware representations. Although their experimental results demonstrated that KG could benefit VLP, it requires object tags in each image to construct domain-specific KGs. Zhu et al. (2021) presented a knowledgeperceived multi-modal pre-training model in ecommerce that uses product attribute information as the third modality in addition to the visual and linguist modalities. However, this approach requires complete and low-noise product attribute information, and its downstream tasks also require such quality product attribute information to be available as input. This implies increased annotation costs and reduces the scope of the VLP model for use on downstream tasks or data. Considering that product attribute information is usually incomplete and noisy in the real world, we think existing knowledge-enhancement approaches are not optimal, because they either require additional labeling efforts or introduce additional noise to VLP models. Thus, we propose to use attribute information to improve the pre-training efficiency of VLP. ## 3 Method This section delineates KG-FLIP. Section 3.1 introduces the architecture of KG-FLIP. Then, Section 3.2 presents pre-training objectives of the model. Finally, Section 3.3 explains how we inject attribute knowledge into KG-FLIP. ## 3.1 Flip Architecture We use the BLIP (Li et al., 2022) architecture as our backbone model, which is now one of the state-ofthe-art general-purpose VLP models. We choose BLIP for the following reasons: 1) instead of using a pre-trained object detector as the image encoder, BLIP uses ViT (Dosovitskiy et al., 2020), which is more computing-friendly and eliminates the need for bounding box annotations; 2) BLIP has a specially added text decoder - thus can be utilized for both VL understanding (e.g., multi-modal attribute classification) and VL generation (e.g., image captioning) downstream tasks in e-commerce; 3) training a VLP model from scratch is time-consuming and expensive. Reusing a pre-trained checkpoint, which has been empirically demonstrated to be very effective, can conspicuously reduce the R&D time and expenses of our proposed KG-FLIP model. ![2_image_0.png](2_image_0.png) As illustrated in Figure 1, KG-FLIP contains an image encoder, an image-grounded text encoder, a multi-modal text encoder, and an image-grounded text decoder. Similar to BLIP, we craft the image encoder using the visual transformer (ViTB/16) (Dosovitskiy et al., 2020). The text encoders are built upon BERT (Devlin et al., 2018), but we insert an additional cross-attention layer, which helps to fuse visual and linguistic information, between the self-attention layer and the feed-forward layer of each block. The text decoder is similar to the text encoder, except that we replace the self-attention layers with the causal self-attention layers to autoregressively predict next tokens. In the following, we describe each of the components mentioned. Image encoder: encodes input images and maps them into visual information representations. 
Concretely, each input image is first segmented into patches, and then ViT takes these patches as input and encodes them into a sequence of embeddings. The embeddings carry all visual information perceived from the image, and are finally mapped into the key matrix (K) and value matrix (V) for computing the cross-attention scores with the text. Image-grounded text encoder: fuses the visual and linguistic information through the crossattention layer of each transformer block. Specifically, the output of the self-attention layer in each block, which carries linguistic information obtained from the text input, is mapped into the query matrix (Q). Then, in each cross-attention layer, we use the query matrix (Q) together with the key matrix (K) and the value matrix (V), both coming from the ViT, to produce the output. Multi-modal text encoder: has the same structure as the image-grounded text encoder. A special token is prepended to the beginning of the input text, and its output is used as the global representation of the fused visual and linguistic information. Image-grounded text decoder: which is employed for performing VL generation downstream tasks (e.g., captioning). The causal self-attention layers enable the decoder to generate text in an auto-regressive manner. Specifically, a special token [DEC] is used as the start signal, and then the module iteratively generates the next token based on generated or supervised tokens in previous steps, until it reaches the end-of-sequence token. We follow BLIP's design of parameter sharing between three branches to reduce model size with demonstrated performance gain. (Li et al., 2022) ## 3.2 Pre-Training Objectives KG-FLIP jointly optimizes three pre-training objectives: knowledge-guided masked language-image modeling (KG-MLIM), knowledge-guided imagetext matching (KG-ITM), and language modeling (LM). Similar to BLIP, we use two understandingbased pre-training objectives and one generationbased pre-training objective. These three objectives activate different functionalities while contributing to each other through hard-parameter sharing. We first describe the three pre-training objectives of the model without the knowledge guidance (KG): Masked Language-Image Modeling Loss (MLIM): is similar to MLM in pre-training language models (e.g., BERT), but it utilizes both the image and the contextual text to predict the masked tokens (Chen et al., 2022), which helps the model to learn cross-modal alignment at the token level instead of instance level as in ITC. Formally, the MLIM loss can be represented by, $${\mathcal{L}}_{m l i m}=-\operatorname*{\mathbb{E}}_{(I,T)\in{\mathcal{D}}}\mathbb{E}_{{\mathcal{M}}\subset T}\left[\sum_{t_{i}\in{\mathcal{M}}}\log p(t_{i}|I,{\hat{T}})\right],$$ where one uniformly samples an image I and its corresponding text T from the dataset D, masks a random token subset M from T, and predicts it given the image and the masked text Tˆ. Image-Text Matching Loss (ITM): aims to learn joint VL embeddings that effectively fuse the information from input image-text pairs. ITM facilitates the model to produce more effective and fine-grained VL representations by using these representations to judge whether image-text pairs are matched (positive pairs) or not matched (negative pairs). 
The ITM loss can be expressed as: $${\mathcal{L}}_{i t m}=-\operatorname*{\mathbb{E}}_{(I,T)\sim p_{s a m p}(I,T|{\mathcal{D}})}[\log p(y_{I,T}|I,T)]\;,$$ where p*samp* is a distribution that samples positive and negative training examples, yI,T ∈ {0, 1} represents whether the image I and the text T are matched, and log p(yi,T |*I, T*) is the output of the [ENC] token in multi-modal text encoder followed by a classification layer. Language Modeling Loss (LM): aims to autoregressively generate desired textual information given an image (e.g., for captioning) or an imagetext pair (e.g., for Visual Question Answering). It optimizes the loss, $${\mathcal{L}}_{l m}=-\operatorname*{\mathbb{E}}_{(I,T)\in{\mathcal{D}}}\left[\sum_{t_{i}\in T}\log p(t_{i}|I,T_{<i})\right],$$ where each token tiis predicted given the image I and all text tokens in T before position i. ## 3.3 Knowledge Guidance To facilitate KG-FLIP to fuse two modalities more effectively, we utilize attribute knowledge to guide MLIM and ITM objectives, as described below: Knowledge-guided MLIM (KG-MLIM): utilizes attribute information to guide MLIM by ameliorating the masking policy, as illustrated in Figure 2. The original policy of BERT (Devlin et al., 2018) uniformly chooses 15% of input tokens, of which 80% are replaced with a special masked token [MASK], 10% are replaced with a random textual token, and 10% remain unchanged. Rather ![3_image_0.png](3_image_0.png) than treating all tokens the same, masking product attribute words allows the VLP model to focus on learning salient product information and provide anchor points to align both modalities, thus producing more effective VL representations than the original 15% random masking policy. To this end, we propose to use knowledge (i.e., product attribute schema) to guide MLIM to mask significant attribute tokens rather than randomselected tokens. Concretely, we use the enumerated attribute values (e.g., "v-neck", "sleeveless") from the catalog system to identify significant words in the text that match our attribute value names. After that, we maintain an overall masking ratio of 15%, and if the number of detected significant attribute words exceeds 15%, we randomly select a subset of them to be masked. Otherwise, we randomly mask other tokens to fill up to 15%. In this way, we implement KG-MLIM, which enables VLP to focus on noteworthy attribute words. Knowledge-guided ITM (KG-ITM): leverages attribute knowledge to synthesize "harder" negative image-text pairs, letting KG-FLIP determine whether the image-text pairs are matched or not matched. Specifically, in the standard ITM objective, p*samp* typically utilizes the input image-text pairs in each batch as positive samples (Chen et al., 2022), and creates negative ones by replacing the image or text in each paired sample with randomly selected from other samples. The next step is to predict whether each image-text pair is matched. However, since images or text of different products are typically disparate, the negative samples are usually too facile to train the model effectively. Hence, we propose to leverage attribute knowledge to synthesize "harder" negative image-text pairs for the ITM loss. Similar to KG-MLIM, we use attribute values to search for salient attribute words in the text. If any attribute word in the text is detected, we synthesize a negative text string by replacing each identified word with another random attribute word from the same attribute class (e.g., "blue" → "red", "v-neck" → "crew neck"). 
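To make the knowledge guidance concrete, the sketch below shows one plausible implementation of the attribute-guided masking and the attribute-swap negative sampling. The attribute schema, the whitespace tokenization, and all function names here are illustrative assumptions rather than the exact production logic; the random-text fallback described just below appears as the final return.

```python
import random

# Hypothetical attribute schema: attribute class -> enumerated values (assumption).
ATTRIBUTE_SCHEMA = {
    "neckline": ["v-neck", "crew neck", "scoop neck"],
    "color": ["blue", "red", "black"],
    "sleeve": ["sleeveless", "long sleeve"],
}
ATTRIBUTE_VALUES = {v: cls for cls, vals in ATTRIBUTE_SCHEMA.items() for v in vals}

MASK_TOKEN = "[MASK]"
MASK_RATIO = 0.15


def kg_mlim_mask(tokens):
    """Knowledge-guided masking: prefer attribute-value tokens within a 15% budget."""
    budget = max(1, int(len(tokens) * MASK_RATIO))
    attr_pos = [i for i, t in enumerate(tokens) if t.lower() in ATTRIBUTE_VALUES]
    # Mask attribute tokens first; if there are too many, sample a subset.
    chosen = random.sample(attr_pos, min(len(attr_pos), budget))
    # If attribute tokens alone do not fill the budget, mask random other tokens.
    rest = [i for i in range(len(tokens)) if i not in chosen]
    chosen += random.sample(rest, budget - len(chosen))
    masked = [MASK_TOKEN if i in chosen else t for i, t in enumerate(tokens)]
    return masked, sorted(chosen)          # masked input and positions to predict


def kg_itm_negative(text, corpus):
    """Knowledge-guided hard negative: swap each attribute word within its class."""
    tokens, swapped = text.split(), False
    for i, t in enumerate(tokens):
        cls = ATTRIBUTE_VALUES.get(t.lower())
        if cls is not None:
            candidates = [v for v in ATTRIBUTE_SCHEMA[cls] if v != t.lower()]
            if candidates:
                tokens[i], swapped = random.choice(candidates), True
    if swapped:
        return " ".join(tokens)
    # Fallback: no attribute word detected, use a random text from other samples.
    return random.choice(corpus)
```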
Otherwise, if we do not spot any attribute word, we select a random text to construct the negative sample. Thus, these "more difficult" synthesized negative samples force KG-FLIP to produce more effective VL representations that capture subtle (i.e., finegrained) distinctions between samples. ## 4 Experiments 4.1 Experimental Setup We initialized all parameters with a BLIP checkpoint (Li et al., 2022), and then pre-trained KGFLIP using a dataset of 1.9M pairs of Amazon product images and product texts (title and bullet points) in the fashion domain (viz., dresses and shoes). To investigate the potential promise of KGFLIP, we tested KG-FLIP on two most common VL downstream tasks in e-commerce: we perform product attribute extraction on the Amazon product attribute dataset and product categorization on the Fashion-Gen dataset (Rostamzadeh et al., 2018), which we describe in detail below. The Amazon product attribute dataset: contains a sample of products in our pre-training datasets that also have corresponding attribute values in the catalog. We further annotated another 600 image-title pairs as the validation and test set, which are used for hyper-parameter tuning and performance evaluation, respectively. The Fashion-Gen dataset*: incorporates 293,008 fashion data pairs. The dataset contains 48 main categories (e.g., "Dresses", "Jeans") and 121 sub-categories (e.g., "Short Dresses", "Leather Jackets"). We tested KG-FLIP by performing the sub-category classification based on visual and linguistic modalities. In our experiments, we use the same training and testing data as used in KaleidoBERT (Zhuge et al., 2021) and CMA-CLIP (Liu et al., 2021). The numbers of training and testing *Note that the Fashion-Gen dataset was only used to benchmark and illustrate the advance we made. It was not involved in building or optimizing our deployed model. ## 4.2 Results Product attribute extraction: The attributeextraction task aims to automatically infer product attribute information (e.g., color, neck style) from product images and textual information such as title and description. Following (Liu et al., 2021), we formulate this problem as a multi-task classification task. We add a multi-layer perception (MLP) head for each attribute in Table 1 on top of the [ENC] output embedding from the multi-modal text encoder and fine-tune them simultaneously. We compare the results with CMA-CLIP, BLIP, and an unguided version of KG-FLIP, which was pre-trained with standard MLIM and ITM without knowledge guidance. All models are pre-trained and fine-tuned on the same datasets. Table 1 below shows the recall at 90% precision (R@90P) on the test set. Table 1: Recall at 90% precision on the Amazon product attribute dataset. 
(attribute names are anonymized for compliance reasons)

| Attribute | CMA-CLIP | BLIP | unguided KG-FLIP | KG-FLIP |
|-------------------|-------|--------|--------------------|-------|
| dress attribute 1 | 29.1 | 53.1 | 52.1 | 57.3 |
| dress attribute 2 | 42.3 | 41.0 | 52.6 | 48.7 |
| dress attribute 3 | 57.3 | 61.1 | 65.9 | 67.9 |
| dress attribute 4 | 33.0 | 36.7 | 44.1 | 42.1 |
| dress attribute 5 | 71.5 | 65.1 | 71.8 | 74.1 |
| shoe attribute 1 | 89.2 | 94.6 | 92.7 | 94.1 |
| shoe attribute 2 | 90.0 | 92.0 | 91.0 | 92.0 |
| shoe attribute 3 | 78.5 | 85.2 | 82.8 | 85.6 |
| shoe attribute 4 | 98.7 | 99.0 | 98.7 | 99.0 |
| Average | 65.51 | 69.75 | 72.32 | 73.42 |

Product categorization: The task of product categorization is to automatically determine the sub-category for each product given its image-text pair. Similarly, we also formulate this problem as a classification task and stack an MLP head on top. Each VLP model in Table 2 was fine-tuned on the Fashion-Gen training set, and we then reported the accuracy of the categorization on the test set.

| Method | Accuracy |
|----------------------------------|------------|
| ImageBERT (Qi et al., 2020) | 80.11 |
| FashionBERT (Gao et al., 2020) | 85.27 |
| OSCAR (Li et al., 2020) | 84.23 |
| KaleidoBERT (Zhuge et al., 2021) | 88.07 |
| CMA-CLIP (Liu et al., 2021) | 93.60 |
| FashionViL (Han et al., 2022) | 92.23 |
| BLIP (Li et al., 2022) | 93.96 |
| unguided KG-FLIP | 94.15 |
| KG-FLIP | 94.32 |

Table 2: Accuracy (%) on the Fashion-Gen dataset.

Overall, the results in Table 1 and Table 2 show that KG-FLIP outperforms all other VLP models on both datasets. For the Amazon product attribute dataset, KG-FLIP and unguided KG-FLIP offer performance gains of 3.67% and 2.57% in terms of R@90P, respectively, over BLIP. For the Fashion-Gen dataset, KG-FLIP can outperform the current benchmark (i.e., FashionViL) and BLIP in terms of accuracy by 2.1% and 0.36%, respectively. In summary, KG-FLIP has demonstrated its eminent performance, which makes it a compelling VLP solution for partially semantically aligned real-world VL data in e-commerce scenarios.

## 5 Model Deployment

Currently, we have successfully deployed our KG-FLIP model in a real-world application to backfill missing product attributes in the e-commerce catalog. E-commerce websites curate their product information in their catalog system. In addition to unstructured information (e.g., product titles and descriptions), structured product attributes (e.g., color and size) play an essential role in various downstream applications, including search and recommendation. For example, customers can filter search results by product attribute values and quickly identify their desired products. However, missing product attribute values are common, given the large number of products offered on e-commerce websites. Improving the coverage of product attributes with high accuracy is critical to improving the customer experience and maintaining customer trust. In addition, complete and accurate product attribute information can also improve the performance of various downstream applications (e.g., alternative product recommendations). Compared to previous image-only and text-only models, KG-FLIP can infer product attributes from both modalities and increase precision and recall by large margins. Another advantage is that it can predict thousands of product attributes in a single model, which implies that model development and maintenance efforts are significantly reduced compared to single attribute models.
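As a rough sketch of how a single model can expose one classification head per attribute (the multi-task fine-tuning setup of Section 4.2), the snippet below stacks per-attribute MLP heads on top of the [ENC] output embedding. The hidden size, attribute names, and class counts are placeholders, and `enc` merely stands in for the multi-modal text encoder output rather than a released API.

```python
import torch
import torch.nn as nn


class MultiTaskAttributeHeads(nn.Module):
    """Per-attribute MLP classifiers on the multi-modal [ENC] embedding (sketch)."""

    def __init__(self, enc_dim, num_classes_per_attr):
        super().__init__()
        self.heads = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(enc_dim, enc_dim), nn.ReLU(),
                                nn.Linear(enc_dim, n_cls))
            for name, n_cls in num_classes_per_attr.items()
        })

    def forward(self, enc_embedding):               # (batch, enc_dim)
        return {name: head(enc_embedding) for name, head in self.heads.items()}


# Illustrative attribute label spaces (assumed, not the real catalog).
heads = MultiTaskAttributeHeads(768, {"neckline": 5, "sleeve_type": 4, "color": 12})
enc = torch.randn(8, 768)                           # stands in for the [ENC] output
logits = heads(enc)
# All heads are fine-tuned simultaneously, one cross-entropy term per attribute.
losses = {n: nn.functional.cross_entropy(l, torch.randint(0, l.size(-1), (8,)))
          for n, l in logits.items()}
total_loss = sum(losses.values())
```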
However, training thousands of attributes in one model makes single-machine training infeasible because of the massive size of the training data. To overcome this challenge, we have developed our own distributed training infrastructure to support large-scale model training. Our infrastructure leverages the power of AWS Batch† Multi-Node Parallel and the DeepSpeed framework, which allows us to automatically launch, configure, and manage a cluster of GPU instances, and train our model on 100 million image-text pairs for 10 epochs within a week with twenty p3.16xlarge instances. We also automated the process of launching a distributed job with just one command, which enables any individual to conduct distributed training tasks on their own and accelerates the experiment speed by reducing 90% of manual efforts. The model deployment is through AWS SageMaker‡. We leveraged AWS Batch to perform large-scale batch mode inference to backfill billions of product-attribute pairs with high accuracy since mid-2022.

## 6 Conclusion

In this paper, we introduced a knowledge-guided fashion-domain language-image pre-training framework for e-commerce, dubbed KG-FLIP. By utilizing the product attribute knowledge to guide the MLIM and ITM pre-training objectives, our KG-FLIP model facilitates vision-language pre-training and enhances product representation learning for e-commerce data that are partially aligned while also containing complementary information. The evaluation results have demonstrated its prominent performance against other state-of-the-art benchmarks on both the Amazon and Fashion-Gen datasets. The KG-FLIP model has been deployed in a real-world application and improved the customer shopping experience.

## 7 Limitations

There are two main limitations to this study. First, because of the lack of downstream datasets, we did not evaluate KG-FLIP on other downstream VL tasks in e-commerce (e.g., substitute recommendation). Therefore, the robustness of the KG-FLIP model on other downstream tasks requires further investigation. Second, the experimental results empirically show that the proposed knowledge-guided pre-training objectives are more effective in producing VL representations that capture subtle distinctions between samples than the standard objectives. However, a theoretical analysis of the effectiveness of our knowledge-guidance strategies is lacking.

## 8 Ethics Statement

We discuss ethical issues from the following aspects:

Intended Use. If the technology is functioning as intended, both sellers and customers of e-commerce platforms could benefit from the KG-FLIP model. KG-FLIP could help customers to quickly identify their desired products (e.g., by filtering search results by product attribute values). It could also help sellers by reducing their manual efforts when listing new products (e.g., the platforms can automatically recommend the attribute values).

Failure modes. In case of failure, KG-FLIP might output inaccurate product attribute information. Such non-factual information may harm customers' shopping experience. For example, the substitute recommendation system, which may use the incorrect product information provided by KG-FLIP, may recommend a non-desired product to our customers and hurt their shopping experience.

## References

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022.
Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*. Fei-Long Chen, Du-Zhen Zhang, Ming-Lun Han, XiuYi Chen, Jing Shi, Shuang Xu, and Bo Xu. 2023. Vlp: A survey on vision-language pre-training. Machine Intelligence Research, 20(1):38–56. Feilong Chen, Duzhen Zhang, Minglun Han, Xiuyi Chen, Jing Shi, Shuang Xu, and Bo Xu. 2022. Vlp: A survey on vision-language pre-training. arXiv preprint arXiv:2202.09061. Kezhen Chen, Qiuyuan Huang, Yonatan Bisk, Daniel McDuff, and Jianfeng Gao. 2021. Kb-vlp: Knowledge based vision and language pretraining. In *Proceedings of the 38 th International Conference on* Machine Learning, PMLR 139, 2021. ICML, workshop, 2021. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX, pages 104– 120. Springer. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, et al. 2022. An empirical study of training end-to-end vision-and-language transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18166–18176. Dehong Gao, Linbo Jin, Ben Chen, Minghui Qiu, Peng Li, Yi Wei, Yi Hu, and Hao Wang. 2020. Fashionbert: Text and image matching with adaptive loss for cross-modal retrieval. In *Proceedings of the 43rd* International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2251–2260. Xiao Han, Licheng Yu, Xiatian Zhu, Li Zhang, Yi-Zhe Song, and Tao Xiang. 2022. Fashionvil: Fashionfocused vision-and-language representation learning. arXiv preprint arXiv:2207.08150. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR. Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In *International Conference on Machine Learning*, pages 5583–5594. PMLR. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*. Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694–9705. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In *European Conference on Computer Vision*, pages 121–137. Springer. 
Huidong Liu, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Ning Xie, Chien-Chih Wang, Bryan Wang, and Yi Sun. 2021. Cma-clip: Cross-modality attention clip for image-text classification. *arXiv preprint* arXiv:2112.03562. Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. 2020. Imagebert: Cross-modal pre-training with large-scale weak-supervised imagetext data. *arXiv preprint arXiv:2001.07966*. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR. Negar Rostamzadeh, Seyedarian Hosseini, Thomas Boquet, Wojciech Stokowiec, Ying Zhang, Christian Jauvin, and Chris Pal. 2018. Fashion-gen: The generative fashion dataset and challenge. *arXiv preprint* arXiv:1806.08317. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. *arXiv preprint arXiv:1908.07490*. Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. Coca: Contrastive captioners are image-text foundation models. *arXiv preprint arXiv:2205.01917*. Yushan Zhu, Huaixiao Zhao, Wen Zhang, Ganqiang Ye, Hui Chen, Ningyu Zhang, and Huajun Chen. 2021. Knowledge perceived multi-modal pretraining in ecommerce. In *Proceedings of the 29th ACM International Conference on Multimedia*, pages 2744–2752. Mingchen Zhuge, Dehong Gao, Deng-Ping Fan, Linbo Jin, Ben Chen, Haoming Zhou, Minghui Qiu, and Ling Shao. 2021. Kaleido-bert: Vision-language pre-training on fashion domain. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12647–12657.
kulkarni-etal-2023-domain
Domain-specific transformer models for query translation
https://aclanthology.org/2023.acl-industry.10
Due to the democratization of e-commerce, many product companies are listing their goods for online shopping. For periodic buying within a domain such as Grocery, consumers are generally inclined to buy certain brands of products. Due to a large non-English speaking population in India, we observe a significant percentage of code-mix Hinglish search queries e.g., sasta atta. An intuitive approach to dealing with code-mix queries is to train an encoder-decoder model to translate the query to English to perform the search. However, the problem becomes non-trivial when the brand names themselves have Hinglish names and possibly have a literal English translation. In such queries, only the context (non-brand name) Hinglish words needs to be translated. In this paper, we propose a simple yet effective modification to the transformer training to preserve/correct Grocery brand names in the output while selectively translating the context words. To achieve this, we use an additional dataset of popular Grocery brand names. Brand names are added as tokens to the model vocabulary, and the token embeddings are randomly initialized. Further, we introduce a Brand loss in training the translation model. Brand loss is a cross entropy loss computed using a denoising auto-encoder objective with brand name data. We warm-start the training from a public pre-trained checkpoint (such as BART/T5) and further adapt it for query translation using the domain data. The proposed model is generic and can be used with English as well as code-mix Hinglish queries alleviating the need for language detection. To reduce the latency of the model for the production deployment, we use knowledge distillation and quantization. Experimental evaluation indicates that the proposed approach improves translation results by preserving/correcting English/Hinglish brand names. After positive results with A/B testing, the model is currently deployed in production.
# Domain-Specific Transformer Models For Query Translation Mandar Kulkarni, Nikesh Garera, Anusua Trivedi Flipkart Data Science (mandar.kulkarni, nikesh.garera, anusua.trivedi)@flipkart.com ## Abstract Due to the democratization of e-commerce, many product companies are listing their goods for online shopping. For periodic buying within a domain such as Grocery, consumers are generally inclined to buy certain brands of products. Due to a large non-English speaking population in India, we observe a significant percentage of code-mix Hinglish search queries e.g., sasta atta. An intuitive approach to dealing with code-mix queries is to train an encoder-decoder model to translate the query to English to perform the search. However, the problem becomes non-trivial when the brand names themselves have Hinglish names and possibly have a literal English translation. In such queries, only the context (non-brand name) Hinglish words needs to be translated. In this paper, we propose a simple yet effective modification to the transformer training to preserve/correct Grocery brand names in the output while selectively translating the context words. To achieve this, we use an additional dataset of popular Grocery brand names. Brand names are added as tokens to the model vocabulary, and the token embeddings are randomly initialized. Further, we introduce a Brand loss in training the translation model. Brand loss is a cross entropy loss computed using a denoising auto-encoder objective with brand name data. We warmstart the training from a public pre-trained checkpoint (such as BART/T5) and further adapt it for query translation using the domain data. The proposed model is generic and can be used with English as well as code-mix Hinglish queries alleviating the need for language detection. To reduce the latency of the model for the production deployment, we use knowledge distillation and quantization. Experimental evaluation indicates that the proposed approach improves translation results by preserving/correcting English/Hinglish brand names. After positive results with A/B testing, the model is currently deployed in production. ## 1 Introduction Due to the democratization of e-commerce, online shopping has evolved in recent times, where most customers choose to shop online. As an effect, the majority of product companies are keen on making their products available for online shopping. When it comes to domains such as Grocery, where users have to shop periodically, they typically have a preference for buying products of certain brands. Hence, for Grocery, it was observed that a significant portion of search queries contain brand names. Due to a large non-Englishspeaking population in India, we observe a significant percentage of code-mix Hinglish search queries. A Hinglish query is where one or more Hindi words are written in English, e.g., sasta atta. Since there are no standard spellings, we observe a large variation in the Hinglish words. We also observe many queries where brand names are misspelled. An intuitive approach to deal with code-mix queries is to train an encoder-decoder model to translate the query to English and use an English search API to retrieve the products (Kulkarni et al., 2022). However, the problem becomes more challenging when the brand names themselves are Hinglish words and possibly have a valid English translation. We observe that in the Grocery domain, many brand names have Hinglish names, e.g. aashirvaad, gowardhan, veer, navratna etc. 
In such queries, only the context (non-brand name) Hinglish words need to be translated, and brand names (though Hinglish) must not be altered in the translation. E.g. for the query, 89 'sasta dabur lal tel', a literal translation would be 'cheap dabur red oil'. However, the expected translation is 'cheap dabur lal oil' since 'dabur lal' is a brand name. Although most of the words in the query are Hinglish, only the first and last words need to be translated. If a brand name gets altered during the translation, it will lead to non-ideal search results. In some cases, the query does not need a translation even though it contains a Hinglish brand name, e.g., veer brand oil. If an English/Hinglish brand name is misspelled, it needs to be corrected in the translation. In general, the seq2seq model should be able to handle the following scenarios. - the query has only English words with no spell errors: the model should output the query as it is - the query has only English words with spell errors in either brand names or context words: the model should only correct the spell errors - the query contains Hinglish words without brand names: the model should translate all Hinglish words to English - the query contains Hinglish words with brand names: the model should selectively translate the Hinglish words without altering brand names. It should correct the brand names if it is misspelled. To ensure such behavior, one would need large manually labeled data inclusive of many brand names. In this paper, we propose a simple yet effective modification to the transformer training to preserve/correct brand names in the output while selectively translating the context words. To achieve this, we use an additional dataset of high-demand Grocery brand names provided by the product team. First, to output brand names as a whole, we add them as tokens to the model vocabulary and randomly initialize the corresponding token embeddings. Further, we introduce a brand loss for training the translation model. Brand loss is a cross entropy loss computed using a denoising auto-encoder objective with brand name data. We warm-start the training from a generic pre-trained checkpoint (such as BART/T5) and further adapt it for query translation using the domain data. Results indicate that introducing brand loss significantly improves accuracy by preserving/correcting brand names in the translation. We also verify that introducing brand information as the loss is more effective than introducing it as the training data. The model is generic and can be used with English as well as code-mix Hinglish queries, alleviating the need for language detection. Further, to reduce the latency of the model for the production use-case, we use knowledge distillation and quantization. Using a large model as the teacher, we obtain pseudo-labels for a large set of unlabeled queries. We then train a small student opennmt (Klein et al., 2017) model on this dataset. We are able to achieve more than 28x reduction in the latency with a slight drop in accuracy. Experimental results demonstrate the efficacy of the proposed approach. ## 2 Related Works Transformers (Vaswani et al., 2017) is the current state-of-the-art model for translation. Large-scale self-supervised pre-training of encoder-decoder models followed by domainspecific fine-tuning can significantly improve the translation quality with a limited labeled set (Lewis et al., 2019) (Raffel et al., 2020). Search query translation is essential for Cross-Lingual Information Retrieval (CLIR). 
Bhattacharya et al. (Bhattacharya et al., 2016) use word vector embedding and clustering to find groups of words representing the same concept from different languages. These multilingual word clusters are then used to perform query translation for CLIR between English, Hindi and Bengali. Kulkarni et al. (Kulkarni and Garera, 2022) propose an approach to perform vernacular query translation without using any parallel corpus. The authors only utilize unlabeled query corpora from the two languages and a pre-trained multilingual translation model, which they train with cross-language training to translate vernacular search queries to English. For code-mix query translation, multilingual and English pre-trained encoder-decoder models have been explored (Jawahar et al., 2021) (Kulkarni et al., 2022). Kumar et al. (Kumar et al., 2020) explored statistical and neural machine translation models for generating natural language questions from a given keyword-based query.

| Query | Ground truth | Without Brand loss | With Brand loss |
|-------------------------|-------------------------|----------------------|-------------------------|
| asribad ata | aashirvaad atta | ashirwad atta | aashirvaad atta |
| dabber lal tel | dabur lal oil | dabber lal oil | dabur lal oil |
| emni rice brand oil | emami rice bran oil | emni rice bran oil | emami rice bran oil |
| daadis peanut khakra | daadi's peanut khakhra | grapes peanut seeds | daadi's peanut khakra |
| goverdhan desi ghee | gowardhan desi ghee | goverdhan desi ghee | gowardhan desi ghee |
| detol original | dettol original | original detol | dettol original |
| farnely all product | farmley all product | free all product | farmley all product |
| veer brand oil | veer brand oil | mustard oil | veer brand oil |
| all out mosquito refill | all out mosquito refill | eri mosquito refill | all out mosquito refill |
| ice cream kwality walls | ice cream kwality walls | ice cream kwality | ice cream kwality walls |

Table 1: Effect of brand loss on accuracy. With the brand loss, the model preserves/corrects brand names and provides translations better aligned with the ground truth. Brand names are highlighted in boldface.

Few techniques have been explored to preserve some of the input tokens as they are in the output. CopyNet (Gu et al., 2016) enables selective use of a generate and a copy mode. In the copy mode, an RNN-based model can choose sub-sequences from the input sequence to put them at appropriate places in the output sequence, while in generate mode, the model can generate new tokens. On similar lines, See et al. (See et al., 2017) proposed a hybrid pointer-generator network-based approach with an ability to copy words from the input to the output while retaining the ability to produce novel words through the generator. In contrast to these approaches, we enforce the model to copy brand names using an additional loss component computed on the brand name data. The model still has a default generate ability, which helps in correcting misspelled brand names.

## 3 Proposed Approach

In the following sections, we provide details of the dataset and training methods.

## 3.1 Dataset

We use a manually tagged dataset for training the model. We have a total of ~116k manually tagged queries, which contain Hinglish as well as English queries. To make use of previously tagged queries, the dataset consists of queries from Grocery and other domains such as fashion, mobile, footwear, etc. From this, we use randomly chosen 5k samples as the validation set and ~111k for the training.
We use a list of 2226 high-demand Grocery brand names to compute the brand loss. The list was provided by the product team. As the test dataset, we use 10715 manually tagged queries from the Grocery domain. ## 3.2 Training Details For training the translation model, we make two modifications as follows. First, we add a list of high-demand brand names as tokens in the model vocabulary and randomly initialize the corresponding token embeddings. Brand names are converted to lowercase before adding to vocab. This ensures that when a brand name is outputted in the translation, it would be outputted as a single entity, avoiding incorrect brand name variations. We introduce a brand-specific loss in the model training. The translation model is trained with a combination of three loss components as follows. $$L=l_{S u p e r v i s e d}+l_{D a t a A u g}+\lambda\ l_{B r a n d}\qquad(1)$$ where λ indicates the weighting factor for the brand loss. l*Supervised* indicates the standard cross entropy loss with parallel corpus. l*DataAug* indicates the loss calculated with spell and auto-encoder data augmentations as described in section 3.3. For calculating l*Brand*, we use cross-entropy loss with denoising autoencoder objective with brand name data using simple CharDrop data augmentation. Since non-English speakers attempt to spell the words based on the phoneme sound of it, we noticed that typically the first and last character of the brand is spelled correctly while the spelling mistakes are present in the middle of the word. To emulate this, we randomly drop a character from 30-50% of the brand name words and use original brand names as the target. Following are some of the brand name training examples. | Noisy Brand name | Target | |--------------------|--------------| | asirwaad | ashirwaad | | milky freh | milky fresh | | dabur vaika | dabur vatika | Table 2: Brand name augmentations l*Brand* is computed with the teacher forcing technique. We set λ to 1 for all experiments. We also experimented by increasing and decreasing the value of λ, however, it did not lead to any significant change in the accuracy. We use a pre-trained BART-base model to warm-start the training and fine-tune it further on the manually tagged data. The model is fine-tuned using AdamW optimizer with a learning rate of 1e-5 and batch size of 16. The model is trained till the validation loss does not improve for three consecutive epochs. We use label smoothing (Vaswani et al., 2017) during the training, where we set the label smoothing parameter to 0.1 for all the experiments. We use beam search decoding during the inference, where the beam size is set to 3. The model has ~141M trainable parameters post adding the brand tokens. ## 3.3 Data Augmentations We experimented with Autoencoder and spell augmentation to compute data augmentation loss (l*DataAug*). For Autoencoder, we use target English text as the input and train the model to reconstruct it. Though simple, it has shown to be effective in query translation since it provides an advantage similar to a language model regularizer (Kulkarni et al., 2022). For the batch of labeled queries, we add spell augmentations to the source (Ma, 2019) and train the model with the same target. For each batch of queries, data augmentation is chosen randomly. ![3_image_0.png](3_image_0.png) Table 3: BLEU score comparison result ## 4 Results Table 3 shows the BLEU score comparison of different model settings on the test set. 
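To summarize the training recipe of Sections 3.2–3.3, the sketch below shows one way the brand-token vocabulary extension, the CharDrop augmentation, and the combined loss of Eq. (1) could be wired together with a Hugging Face BART-base model. The brand list, the sample query pair, and the single-step loop are illustrative simplifications (and pad positions in the labels would normally be masked out), not the deployed training code.

```python
import random
from transformers import BartForConditionalGeneration, BartTokenizer


def char_drop(brand, drop_frac=(0.3, 0.5)):
    """Drop one middle character from a random 30-50% of the words in a brand name."""
    words = brand.split()
    n_noisy = max(1, round(len(words) * random.uniform(*drop_frac)))
    for i in random.sample(range(len(words)), n_noisy):
        w = words[i]
        if len(w) > 2:                       # keep first and last characters intact
            j = random.randrange(1, len(w) - 1)
            words[i] = w[:j] + w[j + 1:]
    return " ".join(words)


tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Add (lower-cased) brand names as whole tokens; new embeddings are randomly initialized.
brand_names = ["aashirvaad", "dabur lal", "navratna"]        # illustrative subset of the 2226
tokenizer.add_tokens([b.lower() for b in brand_names])
model.resize_token_embeddings(len(tokenizer))


def seq2seq_ce(src_texts, tgt_texts):
    """Cross-entropy loss for a batch of source/target query pairs."""
    batch = tokenizer(src_texts, return_tensors="pt", padding=True)
    labels = tokenizer(tgt_texts, return_tensors="pt", padding=True).input_ids
    return model(**batch, labels=labels).loss


# One training step combining the three loss components of Eq. (1), with lambda = 1.
src, tgt = ["sasta dabur lal tel"], ["cheap dabur lal oil"]   # example pair from the paper
noisy_brands = [char_drop(b) for b in brand_names]
loss = (seq2seq_ce(src, tgt)                                  # l_Supervised
        + seq2seq_ce(["sasta dabur lal tell"], tgt)           # l_DataAug (spell augmentation)
        + 1.0 * seq2seq_ce(noisy_brands, brand_names))        # lambda * l_Brand
loss.backward()
```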
In the first experiment, we verify the effectiveness of additional brand loss during the training. We train the model with and without brand loss. From the BLEU score comparison, it can be seen that brand loss training provides good improvements in test accuracy. In table 1, we show the comparison of query translation results with and without brand loss. With the brand loss, the model corrects the brand names whenever it is entered wrongly (first 7 examples). It also preserves brand names better when it's entered correctly (last 3 examples). Overall, the model provides translations better aligned with the ground truth. ## 4.1 Using Brand Names As Data Intuitively, it's possible to input the brand name information as the parallel corpus, where we can add CharDrop augmentation to the brand names, and the original brand name can be used as the target. Hence, we wanted to verify the effectiveness of introducing brand information through the loss compared to inputting it through the training data. We created additional training data from the brand names with CharDrop augmentations and appended it to the original training set. We use 50 augmentations for each brand name. Table 5 shows the BLEU score comparison result. We notice that adding brand info as a loss is more effective than adding it as training data. This could be because, with the brand as loss, the model is able to translate context words more effectively. Table 4 shows the query translation comparison result. Note that brand as loss is better at correcting misspelled brand names while providing better translations of context words. | Query | Ground truth | Brand as data | Brand as loss | |-------------------------|-------------------------|--------------------------|-------------------------| | nabrtan tel | navratna oil | olive oil | navratna oil | | cubes spice masala | cubes spice masala | cake spice masala | cubes spice masala | | fitme ka face pauder | fit me face powder | face powder offitme | fit me face powder | | dabur gulab jal 1 litre | dabur gulab jal 1 litre | dabur rose water 1 litre | dabur gulab jal 1 litre | | colgeat charcol offer | colgate charcoal offer | coffee charcol offer | colgate charcoal offer | | boork bond taja tea | brooke bond taaza tea | boork bond tea | brooke bond taaza tea | | fiama soap all mox | fiama soap all mix | fiama soap all mox | fiama soap all mix | | kesar ka sabudana | kesar sabudana | saffron seeds | kesar sabudana | Table 4: Comparison result for inputting brand information as loss vs inputting through training data. Note that with the brand as a loss, context words are better translated. | Setting | BLEU | |--------------------|--------| | Brand info as data | 69.6 | | Brand info as loss | 70.9 | Table 5: BLEU score comparison for brand as loss vs brand as data ## 4.2 Comparison With T5 We compared the results of BART-base with T5-base and T5-small models under similar training settings, i.e., adding brand tokens to the vocab and training with brand loss. Table 6 shows the comparison result. We noticed that BART works significantly better as compared to T5. This could be because denoising training objectives such as brand loss and data augmentation are more aligned with the BART pre-training than T5. Hence, BART can provide good results with a limited labeled set, especially when brand token embeddings need to be learned from scratch. 
| Setting | BLEU | |-----------|--------| | T5-base | 59.8 | | T5-small | 57.2 | | BART-base | 70.9 | ![4_image_0.png](4_image_0.png) Table 6: Comparison with T5 model ## 4.3 Pre-Training On Large Query Since the search model would be witnessing large traffic and a variety of queries, we pretrain BART-base model on a large query parallel corpus to make it suitable for production use case. We collected a large Hindi (Devanagari) unlabeled query corpus from the internal database. Since our Hindi search model currently supports different verticals such as fashion, mobile, footwear, etc., we suspect only a small percentage of Grocery related queries in the dataset. The Hindi queries are detected using a simple script-based detection. If any of the characters in the query are from Devanagari unicode range, the query is termed Hindi. We then use an in-house Hindi to English query translation model to create a parallel corpus from the unlabeled set. Further, we use an in-house *transliteration* model to convert a Hindi query to a Hinglish query. This way, we obtained a ~38M Hinglish to English query parallel corpus for training. The model is trained using AdamW optimizer with a learning rate of 5e-6. We pre-trained the BART-base model on this large set and then finetuned on the manually tagged set in the same manner described in section 3.2. Table 7 shows the result of the experiment. Pre-training on the large set gives a significant boost to accuracy. To verify if brand loss based finetuning still complements the advantage provided by the pre-training, we finetuned the query pretrained model without the brand loss. It can be seen that training with brand loss boosts accuracy in addition to the pre-training. | Query-pretraining | Brand loss | BLEU | |---------------------|--------------|--------| | Included | Included | 73.1 | | Not included | Included | 70.9 | | Included | Not included | 71.5 | Table 7: Effect of large scale query pre-training ## 5 **Knowledge Distillation For Improved** Latency The search query translation models are userfacing and need to have low latency to support high throughput. Though the BART-base model with query pre-training and fine-tuning provided good accuracy on the test set, it was not sufficient for production deployment due to the latency constraints. We observed that the p95 latency of the BART-base model with PyTorch implementation was ~200 ms, which is not acceptable for the production use-case. To reduce the latency of the model, we use knowledge distillation with open-nmt (Klein et al., 2017) framework. Open-nmt provides a Ctranslate wrapper for faster inference, making it a good choice for low latency use-cases. Our approach is to train a small open-nmt student model using Grocery BART-base model as the teacher model. Since the student model resides in another programming framework, we use a pseudo-labeling approach to transfer knowledge from the teacher to the student. To create a parallel corpus for open-nmt model training, we obtain translation labels on ~38M query set using the teacher model. We then train the open-nmt model on this large parallel corpus and the manually tagged set. We use a single layer open-nmt model with a vocab size of 18k and a hidden dimension of 384. The model has ~23M trainable parameters. For open-nmt model as well, we add the brand name tokens to the vocab. We use weight quantization during model inference. Table 8 shows the BLEU score comparison result with the open-nmt student model. 
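A simplified sketch of the pseudo-labeling step behind this distillation is given below. The checkpoint name and file handling are placeholders; only the teacher-side labeling is shown, since the single-layer student is then trained with standard OpenNMT tooling on the resulting parallel file and served through the CTranslate wrapper with weight quantization.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Fine-tuned Grocery teacher (path is a placeholder, not a released checkpoint).
tokenizer = BartTokenizer.from_pretrained("grocery-bart-teacher")
teacher = BartForConditionalGeneration.from_pretrained("grocery-bart-teacher").eval()


@torch.no_grad()
def pseudo_label(queries, batch_size=64):
    """Teacher translations for unlabeled queries, used as student training targets."""
    outputs = []
    for i in range(0, len(queries), batch_size):
        batch = tokenizer(queries[i:i + batch_size], return_tensors="pt", padding=True)
        ids = teacher.generate(**batch, num_beams=3, max_length=32)
        outputs.extend(tokenizer.batch_decode(ids, skip_special_tokens=True))
    return outputs


unlabeled = ["sasta dabur lal tel", "nabrtan tel"]            # ~38M queries in practice
with open("student_parallel.tsv", "w") as f:
    for src, tgt in zip(unlabeled, pseudo_label(unlabeled)):
        f.write(f"{src}\t{tgt}\n")
# The open-nmt student (1 layer, 18k vocab, hidden size 384) is trained on this
# synthetic parallel corpus together with the manually tagged set.
```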
The student model provides more than 28x speed up for the inference with just a 0.2 drop in the BLEU score. The reason a single layer student model could be providing comparable results to the teacher model can be two-fold. First, search queries rarely have grammar and hence may not a deeper network for translation. Second, the teacher through pseudo labeling is providing cleaner and consistent labels for the student to learn from. We performed A/B testing of the open-nmt student model w.r.t. an earlier model which does not use brand loss. We observed 10 basis points (bps) improvement in search ClickThrough-Rate (CTR) and improved search conversion. The model is currently deployed in production and serves a large volume of queries. ![5_image_0.png](5_image_0.png) Table 8: Knowledge distillation with open-nmt ## 6 Conclusion In this paper, we proposed a simple yet effective approach for domain-specific query translation. For the grocery domain, it was noticed that a significant percentage of queries contained brand names due to user preferences and periodic buying. We also observed a significant percentage of code-mix Hinglish queries and queries with grammatical errors. Since some grocery brand names are themselves Hinglish words, we wanted a brand-aware query translation model. To better preserve brand names in translation, we added brand name tokens to the model vocab and introduced an additional brand loss in transformer training. The modification improved translation accuracy by depicting desired brand name preserving effect. To reduce the latency of the model for the production deployment, we used knowledge distillation with the open-nmt student. Using a large model as a teacher and with pseudo labeling, we trained a single layer open-nmt student model. We could obtain more than a 28x reduction in latency with a slight drop in accuracy. After positive results with A/B testing, the model was deployed in production. ## References Paheli Bhattacharya, Pawan Goyal, and Sudeshna Sarkar. 2016. Query translation for crosslanguage information retrieval using multilingual word clusters. In *Proceedings of the 6th* Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016), pages 152– 162, Osaka, Japan. The COLING 2016 Organizing Committee. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. Ganesh Jawahar, El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, and Laks V. S. Lakshmanan. 2021. Exploring text-to-text transformers for english to hinglish machine translation with synthetic code-mixing. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In *Proceedings of ACL 2017, System* Demonstrations, pages 67–72, Vancouver, Canada. Association for Computational Linguistics. Mandar Kulkarni, Soumya Chennabasavaraj, and Nikesh Garera. 2022. Study of encoder-decoder architectures for code-mix search query translation. Mandar Kulkarni and Nikesh Garera. 2022. Vernacular search query translation with unsupervised domain adaptation. Adarsh Kumar, Sandipan Dandapat, and Sushil Chordia. 2020. Translating web search queries into natural language questions. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. Edward Ma. 2019. 
Nlp augmentation. https://github.com/makcedward/nlpaug. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
kulkarni-etal-2023-label
Label efficient semi-supervised conversational intent classification
https://aclanthology.org/2023.acl-industry.11
To provide a convenient shopping experience and to answer user queries at scale, conversational platforms are essential for e-commerce. The user queries can be pre-purchase questions, such as product specifications and delivery time related, or post-purchase queries, such as exchange and return. A chatbot should be able to understand and answer a variety of such queries to help users with relevant information. One of the important modules in the chatbot is automated intent identification, i.e., understanding the user{'}s intention from the query text. Due to non-English speaking users interacting with the chatbot, we often get a significant percentage of code mix queries and queries with grammatical errors, which makes the problem more challenging. This paper proposes a simple yet competent Semi-Supervised Learning (SSL) approach for label-efficient intent classification. We use a small labeled corpus and relatively larger unlabeled query data to train a transformer model. For training the model with labeled data, we explore supervised MixUp data augmentation. To train with unlabeled data, we explore label consistency with dropout noise. We experiment with different pre-trained transformer architectures, such as BERT and sentence-BERT. Experimental results demonstrate that the proposed approach significantly improves over the supervised baseline, even with a limited labeled set. A variant of the model is currently deployed in production.
# Label Efficient Semi-Supervised Conversational Intent Classification Mandar Kulkarni, Kyung Kim, Nikesh Garera, Anusua Trivedi Flipkart Data Science (mandar.kulkarni, kyung.kim, nikesh.garera, anusua.trivedi)@flipkart.com ## Abstract To provide a convenient shopping experience and to answer user queries at scale, conversational platforms are essential for e-commerce. The user queries can be prepurchase questions, such as product specifications and delivery time related, or postpurchase queries, such as exchange and return. A chatbot should be able to understand and answer a variety of such queries to help users with relevant information. One of the important modules in the chatbot is automated intent identification, i.e., understanding the user's intention from the query text. Due to non-English speaking users interacting with the chatbot, we often get a significant percentage of code mix queries and queries with grammatical errors, which makes the problem more challenging. This paper proposes a simple yet competent Semi-Supervised Learning (SSL) approach for label-efficient intent classification. We use a small labeled corpus and relatively larger unlabeled query data to train a transformer model. For training the model with labeled data, we explore supervised MixUp data augmentation. To train with unlabeled data, we explore label consistency with dropout noise. We experiment with different pre-trained transformer architectures, such as BERT and sentence-BERT. Experimental results demonstrate that the proposed approach significantly improves over the supervised baseline, even with a limited labeled set. A variant of the model is currently deployed in production. ## 1 Introduction An automated conversational chatbot is essential to provide a seamless shopping experience and answer product-related questions at scale. An effective chatbot can assist and answer pre-purchase queries such as product specifications, offers, discounts, delivery time, and stock availability, as well as post-purchase queries such as exchange and return. Due to users from diverse backgrounds interacting with the chatbot and minimizing a human agent transfer, a chatbot should be able to understand and handle a variety of user queries. One of the important ML components in the chatbot is automated intent identification, i.e., understanding the user's intention from the query text. Post the correct intent identification, an appropriate dialog-flow can be initiated. An incorrect intent prediction negatively affects the dialog-flow and, hence the overall user experience. Further, due to nonEnglish speakers interacting with the chatbot, we observe a significant percentage of codemix Hinglish queries ( 30%) and queries with grammatical errors, making intent detection even more challenging. Training a supervised intent classification model under such a scenario would require a large amount of manually tagged data. However, due to internetscale operations, we have unlabeled query data available in a relatively large volume. This paper proposes a simple yet competent Semi-Supervised Learning (SSL) approach for label-efficient intent classification. SSL has been proven effective in leveraging unlabeled data when only a small labeled set is available. Specifically, we train a transformer BERT model on a small labeled corpus along with a larger unlabeled query data. Starting with limited labeled queries, we explore supervised as well as unsupervised data augmentation techniques. 
For the supervised data augmentation, we explore MixUp (Zhang and Vaidya, 2021) and simple label preserving NLP augmentations (Ma, 2019). For training with unlabeled data, typically, SSL algorithms rely on an extra smoothness constraint which enforces the 96 model to make consistent predictions on an unlabeled sample and its slightly perturbed version. Moreover, it is observed that the type of noise/perturbation plays an important role and a trivial noise may not provide desired improvements (Xie et al., 2020). Recently, a simple noise such as dropout has shown promising results for contrastive learning (Gao et al., 2021). We explore label consistency loss with dropout noise to train the BERT model with unlabeled data. The model is trained with the linear combination of supervised and unsupervised loss components. One of the challenges with a limited labeled set is how to halt the training when the validation set is not available; otherwise, it may result in over-fitting. In our experiments, we perform the model updates till the *training loss* is converged. Interestingly, training with dropout label consistency loss is less prone to over-fitting even with no validation set. We also noticed that the choice of label consistency loss has a prominent effect on the accuracy. For warm starting the training, we experiment with pre-trained BERT and sentence-BERT architectures. Experimental results demonstrate that, over the supervised baseline, the intent classification accuracy can be boosted significantly with the proposed semi-supervised approach. ## 2 Related Works SSL approaches have been extensively studied in the literature. Instead of providing an extensive list of references, we only cite a few relevant prior works in this section. An extensive survey can be found in (Yang et al., 2021). Unsupervised Data Augmentation (UDA) (Xie et al., 2020) has shown promising results for learning with unlabeled data along with a small labeled corpus. The idea is to enforce label consistency between two augmentations of the unlabeled sample. The authors also point out that the type of augmentation used significantly affects the accuracy of the model, and a trivial augmentation (such as adding Gaussian noise) may not lead to desired improvements. Recently, a contrastive learning approach that uses dropout noise has been shown to work well for self-supervised learning with textual data (Gao et al., 2021). Since dropout is inherently present in pre-trained transformer models, this provides a simple yet efficient method for data augmentation. Interpolation Consistency Training (ICT) (Verma et al., 2022) is a computationally efficient approach to train the model with SSL. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. For classification problems, ICT moves the decision boundary to low density regions of the data distribution. For the supervised classification, MixUp has been found to be an effective data augmentation technique (Jindal et al., 2020). MixUp is performed in the representation space for the text classification with transformers and is known to provide better regularization, and model calibration (Sun et al., 2020). ## 3 Proposed Approach In this section, we describe details of the dataset, loss functions experimented with, and model training. ## 3.1 Dataset Our intent classification dataset consists of queries from the pre-defined set of 28 intents. 
The queries consist of pre-purchase as well as post-purchase user questions. For each intent, we have 250 manually labeled samples; hence, the train set comprises 7k labeled examples. As the test set, we use a manually tagged dataset of 7569 samples. Table 1 shows examples of the queries from the test set and corresponding ground truth intents. Note that the test set consists of code-mix Hinglish queries and queries with grammatical errors. For the unlabeled data, we use a query corpus of size ~925k obtained from the internal database. For all the queries (labeled and unlabeled), we convert them to lowercase and remove punctuation (if any). We do not apply any further pre-processing. ## 3.2 Loss Functions Experimented We experiment with the following loss functions and their linear combination to train the model. 3.2.1 Supervised cross-entropy loss (ls) For a small set of labeled data, we use the standard supervised cross entropy loss for the | Samples | intent class | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------| | when will it be delivered if i order today this product satrday give me sir mujhe ye phone kab tak mile ga | delivery_time | | when will we get discount | | | it was 11000 near about 12000 at a time when it was offer phone ka price kab kem hoga | offers_and_discounts | | is there debit card emi available | | | emi process not full details show it option sorry sir card payment kaise karna hai | payment_options | | is this boot washable | | | sir this phone is good or but sir this phone prosser display kaise h ise mobile ki | product_spec | | how much amount i will get into exchange of my mobile high what if the mobile i am replesing can be switched on mobile ka screen touch kharab hai exchange ho jaega | product_exchange | | how to return my order | | | my parking sensor not yet delvered | | | humko black colour mila hai grey ke jagah | post_purchase | Table 1: Example queries and intent labels from the test dataset. Note that the test data contains code-mix Hinglish queries and queries with grammatical errors. training. We use label smoothing while training where the smoothing parameter is set to 0.1. This loss function is included in all the experiments. ## 3.2.2 Supervised Grammar Loss (Lsg) For the batch of labeled data, we add grammar augmentations to the input queries, such as spell errors and word swaps, to create additional train data (Ma, 2019). We use cross entropy loss and label smoothing for this. ## 3.2.3 Supervised Mixup Loss (Lsm) The idea behind supervised MixUp is to create an additional labeled train set through linear interpolating of the features and corresponding one-hot labels. For the transformer models, MixUp is performed on the feature representations of the queries in the following manner. $$\begin{array}{l}{{\tilde{x}=\lambda\;x_{i}+\left(1-\lambda\right)\,x_{j}}}\\ {{\tilde{y}=\lambda\;y_{i}+\left(1-\lambda\right)\,y_{j}}}\end{array}\qquad\qquad(1)$$ Here, λ ∼ U(0, 1). xi and xjindicates the features from last hidden layer. We use cross entropy loss for this. ## 3.2.4 Unsupervised Dropout Loss (Lud) We use dropout noise for enforcing prediction label consistency to train the transformer model on unlabeled data. We sample a batch of queries from the unlabeled query corpus and make two independent forward passes through the transformer to obtain two label predictions. 
The label consistency loss is then calculated to minimize the distance measure D between these predictions. $$l_{ud}=\mathbb{E}_{u\sim U(x)}\;\;D(p_{\theta}(y_{1}|u),p_{\theta}(y_{2}|u))\qquad\qquad(2)$$ Here, y1 and y2 indicate predicted labels for an unlabeled batch u. For D, we experimented with Cross-Entropy (CE) and Mean-Square-Error (MSE) loss. For text classification, UDA uses round-trip back-translation as the data augmentation (Xie et al., 2020). They keep one copy of the network weights fixed while updating another copy. For the dropout noise, in contrast, label predictions are calculated with the current network parameters, and the same parameters are updated during training. ## 3.3 Training Details For the pre-trained BERT model, we use *bert-base-uncased*, while for the pre-trained sentence-BERT model, we use *paraphrase-mpnet-v2*. Both *bert-base-uncased* and *paraphrase-mpnet-v2* are 12-layer models with ~109M trainable parameters. For the BERT model, we use the feature corresponding to the *[CLS]* token from the last hidden layer (without tanh activation) as the query representation. For the sentence-BERT model, we use a mean-pooled representation of the token embeddings from the last hidden layer. The mean pooling uses the attention mask to avoid averaging representations from the padding tokens. For the supervised losses (ls, lsg, lsm), we use a batch size of 32, while for the unsupervised loss (lud), we use a batch size of 96. We use the AdamW optimizer with a constant learning rate of 1e-5. One major challenge with limited labeled sets is to halt the training without a validation set. In our experiments, we stop the training when the absolute difference in the train loss between consecutive epochs remains below a threshold (ϵ) for a certain number of epochs (patience). In all our experiments, we use an ϵ of 0.1 and a patience of 5. The models are trained under three different settings. - Only with labeled loss, LS = ls - With labeled loss (LS) and supervised data augmentation loss, LSD = lsg + lsm - With labeled loss (LS), supervised data augmentation loss (LSD), and unsupervised dropout label consistency loss, LUD = lud. We use log probabilities along with MSE loss for LUD and a weight factor α of 10 (to match the scales). ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) Figure 1: (a) BERT results; (b) sentence-BERT results. Figure 1 shows the comparison results for the BERT and sentence-BERT models for a varying number of labeled samples. We make a few observations from these results. Sentence-BERT works better than BERT, especially with a low number of labeled samples. Our findings align with recent work demonstrating the effectiveness of sentence-BERT for few-shot learning (Tunstall et al., 2022). Supervised data augmentations (grammar + MixUp) provide only a slight advantage over the purely supervised baseline (Figure 1 (b)). We suspect this is due to over-fitting, because of the small labeled corpus and the lack of a validation set to stop the training. We validate this hypothesis with an additional experiment, using some validation data to halt the training. Results are provided in the ablation study, Section 5.1. Unsupervised label consistency with dropout noise and MSE loss provides a significant advantage over the supervised baseline. Interestingly, even though the models are updated until the train loss converges, training with this loss provides better regularization and is less prone to over-fitting. We also observe that the choice of unsupervised loss has a prominent effect on the accuracy.
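To make the training objective concrete, the following PyTorch-style sketch combines the supervised cross-entropy, the feature-space MixUp of Eq. (1), and the dropout label-consistency loss of Eq. (2). It is a minimal illustration consistent with the description above, not the exact production code: the variable names, the optimizer construction, and the omission of the grammar-augmentation loss lsg are assumptions made for brevity.

```python
# Minimal sketch (illustrative, not the exact production code): supervised CE with
# label smoothing + MixUp on [CLS] features + dropout label-consistency (Eq. 2).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

NUM_INTENTS = 28
ALPHA = 10.0  # weight on the unsupervised loss (to match the MSE scale)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(encoder.config.hidden_size, NUM_INTENTS)
ce = torch.nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-5)

def encode(queries):
    batch = tokenizer(queries, padding=True, truncation=True, return_tensors="pt")
    # [CLS] representation from the last hidden layer (no pooler / tanh).
    return encoder(**batch).last_hidden_state[:, 0]

def supervised_losses(queries, labels):
    feats = encode(queries)                              # (B, H)
    l_s = ce(classifier(feats), labels)                  # standard supervised loss
    # MixUp in representation space: interpolate features and one-hot labels (Eq. 1).
    lam = torch.rand(1).item()
    perm = torch.randperm(feats.size(0))
    mixed_x = lam * feats + (1 - lam) * feats[perm]
    one_hot = F.one_hot(labels, NUM_INTENTS).float()
    mixed_y = lam * one_hot + (1 - lam) * one_hot[perm]
    l_sm = -(mixed_y * F.log_softmax(classifier(mixed_x), dim=-1)).sum(-1).mean()
    return l_s + l_sm

def dropout_consistency_loss(unlabeled_queries):
    # Two independent forward passes in train mode: dropout yields two different
    # predictions for the same queries; penalise their disagreement (MSE on log-probs).
    logp1 = F.log_softmax(classifier(encode(unlabeled_queries)), dim=-1)
    logp2 = F.log_softmax(classifier(encode(unlabeled_queries)), dim=-1)
    return F.mse_loss(logp1, logp2)

def train_step(labeled_queries, labels, unlabeled_queries):
    encoder.train(); classifier.train()
    loss = supervised_losses(labeled_queries, labels) \
           + ALPHA * dropout_consistency_loss(unlabeled_queries)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

In this sketch, the labeled and unlabeled batches would be drawn with the batch sizes reported above (32 and 96, respectively), and training would stop once the train loss change falls below ϵ for the patience window.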
Section 5.2 in the ablation study shows the comparison results with different loss functions for lud. Since Hinglish constitutes a significant percentage (~30%) of queries, we specifically compared the performance of the BERT and sentence-BERT models for Hinglish query classification. First, we detect Hinglish queries from the test set using the approach proposed in (Kulkarni et al., 2022) and calculate the F1-score on these queries with the semi-supervised approach. ![4_image_0.png](4_image_0.png) Figure 2 demonstrates the result. We observe that sentence-BERT inherently provides better accuracies for Hinglish queries. We also compare the Expected Calibration Error (ECE) on the test set for the BERT and sentence-BERT models. For this, we use the prediction results for the model trained on all the labeled samples. Table 2 shows the result: sentence-BERT achieves better calibration as compared to the BERT model. ![4_image_2.png](4_image_2.png) Table 2: Comparison of Expected Calibration Error (ECE) ## 4 Comparison With Unsupervised Mixup Approach We compare the dropout label consistency approach with another SSL method: Unsupervised MixUp. Verma et al. (2022) proposed a MixUp approach for training with unlabeled data. Feature MixUp is performed on the transformer representations for two batches of unlabeled samples. For the labels, MixUp on the model predictions for the same unlabeled batches is used. We randomly sample two batches (u1, u2) from the unlabeled queries and calculate their feature representations (x1, x2). The Unsupervised MixUp loss (lum) is then calculated as follows. $$l_{um}=\mathbb{E}_{u_{1},u_{2}\sim U(x)}\,D(f_{\theta}(Mix_{\lambda}(x_{1},x_{2})),\;Mix_{\lambda}(f_{\theta^{\prime}}(x_{1}),f_{\theta^{\prime}}(x_{2})))\qquad\qquad(3)$$ As suggested in (Xie et al., 2020), for calculating the second term in the equation, we use a fixed copy (θ′) of the network, and the update is applied to the current copy of the weights (θ). At the end of each epoch, the fixed copy is replaced with the current weights. The model is trained with the supervised losses and the Unsupervised MixUp loss. We use the MSE loss and an α of 10. ![4_image_1.png](4_image_1.png) Figure 3 indicates the comparison result. Despite being simple, dropout label consistency performs better than Unsupervised MixUp. This could be because, at the start of the training, the predictions from the models may not be accurate. Hence, the updates to the model with the Unsupervised MixUp loss are computed against noisy labels. On the contrary, the dropout consistency loss only enforces the smoothing constraint on the label predictions. ## 5 Ablation Study In this section, we report ablation study results with different experimental settings. ## 5.1 Comparison With And Without Validation Loss Monitoring Since supervised MixUp provided only a slight improvement over the purely supervised baseline with sentence-BERT, we suspect that this is due to over-fitting, since we do not have a validation-loss-based stopping criterion during training. To confirm this, we conducted an additional experiment using a validation set (of size 8318) and halted the training when the validation loss did not improve for five consecutive epochs. ![5_image_0.png](5_image_0.png) Figure 4 shows the F1-score comparison with and without validation monitoring. The plot indicates that supervised MixUp, when trained with a low number of labeled samples and without validation monitoring, is prone to over-fitting.
Hence, it alone might not lead to good improvements in the limited labeled scenario. ## 5.2 Choice Of Label Consistency Loss We observed that the choice of loss used for dropout label consistency has a prominent effect on the model accuracy. ![5_image_2.png](5_image_2.png) Figure 5 shows the comparison of the CE and MSE losses. For the CE loss, we use an α of 1, while for the MSE loss, α is set to 10 (to match the scales). It can be seen that the MSE loss consistently outperforms the CE loss. ## 5.3 Effect Of Varying Dropout Probability To understand whether the model dropout probability affects the accuracy, we performed an experiment where we trained a sentence-BERT model with varied dropout probability. Sentence-BERT has a default dropout probability of 0.1. In this experiment, we set the dropout value to a lower (0.05) and a higher (0.2) value and trained the model with the supervised and dropout label consistency losses. ![5_image_1.png](5_image_1.png) Figure 6 shows the resulting plot. We observe that increasing or decreasing the dropout probability does not significantly affect the model accuracy. ## 6 Conclusion This paper proposes a simple yet competent semi-supervised learning approach for label-efficient conversational intent classification. We trained different transformer models with labeled as well as unlabeled data. We explored supervised MixUp data augmentation for training with labeled samples, while for training with unlabeled samples, we experimented with a label consistency loss with dropout. The results demonstrated that classification accuracy can be improved significantly over the supervised baseline with the proposed semi-supervised approach. Specifically, sentence-BERT was observed to perform better with a small number of labeled samples and even with code-mix Hinglish queries. Even without validation loss monitoring, training with dropout label consistency was noticed to be less prone to over-fitting. Through the ablation study, we studied the effect of the choice of label consistency loss and the dropout probability on the accuracy. Experimental results demonstrated the efficacy of the proposed approach. A variant of the model is currently deployed in production. ## References Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. Amit Jindal, Dwaraknath Gnaneshwar, Ramit Sawhney, and Rajiv Ratn Shah. 2020. Leveraging BERT with MixUp for sentence classification (student abstract). In *AAAI*. Mandar Kulkarni, Soumya Chennabasavaraj, and Nikesh Garera. 2022. Study of encoder-decoder architectures for code-mix search query translation. Edward Ma. 2019. NLP augmentation. https://github.com/makcedward/nlpaug. Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip S. Yu, and Lifang He. 2020. Mixup-Transformer: Dynamic data augmentation for NLP tasks. Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, and Oren Pereg. 2022. Efficient few-shot learning without prompts. Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, and David Lopez-Paz. 2022. Interpolation consistency training for semi-supervised learning. *Neural Netw.*, 145(C):90–106. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2020. Unsupervised data augmentation for consistency training. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA. Curran Associates Inc.
Xiangli Yang, Zixing Song, Irwin King, and Zenglin Xu. 2021. A survey on deep semisupervised learning. Wancong Zhang and Ieshan Vaidya. 2021. Mixup training leads to reduced overfitting and improved calibration for the transformer architecture.
shen-etal-2023-xpqa
x{PQA}: Cross-Lingual Product Question Answering in 12 Languages
https://aclanthology.org/2023.acl-industry.12
Product Question Answering (PQA) systems are key in e-commerce applications as they provide responses to customers{'} questions as they shop for products. While existing work on PQA focuses mainly on English, in practice there is need to support multiple customer languages while leveraging product information available in English. To study this practical industrial task, we present xPQA, a large-scale annotated cross-lingual PQA dataset in 12 languages, and report results in (1) candidate ranking, to select the best English candidate containing the information to answer a non-English question; and (2) answer generation, to generate a natural-sounding non-English answer based on the selected English candidate. We evaluate various approaches involving machine translation at runtime or offline, leveraging multilingual pre-trained LMs, and including or excluding xPQA training data. We find that in-domain data is essential as cross-lingual rankers trained on other domains perform poorly on the PQA task, and that translation-based approaches are most effective for candidate ranking while multilingual finetuning works best for answer generation. Still, there remains a significant performance gap between the English and the cross-lingual test sets.
# Xpqa: Cross-Lingual Product Question Answering Across 12 Languages Xiaoyu Shen1, Akari Asai2, Bill Byrne1,3 **and Adrià de Gispert**1 1Amazon Alexa AI, 2Univeristy of Washington, 3University of Cambridge {gyouu, willbyrn, agispert}@amazon.com ## Abstract Product Question Answering (PQA) systems are key in e-commerce applications to provide responses to customers' questions as they shop for products. While existing work on PQA focuses mainly on English, in practice there is need to support multiple customer languages while leveraging product information available in English. To study this practical industrial task, we present xPQA, a large-scale annotated cross-lingual PQA dataset in 12 languages across 9 branches, and report results in (1) candidate ranking, to select the best English candidate containing the information to answer a non-English question; and (2) answer generation, to generate a natural-sounding nonEnglish answer based on the selected English candidate. We evaluate various approaches involving machine translation at runtime or offline, leveraging multilingual pre-trained LMs, and including or excluding xPQA training data. We find that (1) In-domain data is essential as cross-lingual rankers trained on other domains perform poorly on the PQA task; (2) Candidate ranking often prefers runtime-translation approaches while answer generation prefers multilingual approaches; (3) Translating offline to augment multilingual models helps candidate ranking mainly on languages with non-Latin scripts; and helps answer generation mainly on languages with Latin scripts. Still, there remains a significant performance gap between the English and the cross-lingual test sets.1 ## 1 Introduction Product question answering (PQA) is a key technology in e-commerce applications. Given a question about a product, a PQA system searches the product webpage and provides an instant answer, so that customers do not need to traverse the page by themselves or seek help from humans (Li et al., 1The xPQA dataset is released under https://github. com/amazon-science/contextual-product-qa/ for research purposes. ![0_image_0.png](0_image_0.png) Figure 1: Cross-lingual PQA: The user asks questions about a product in their language (such as Hindi), then the system searches for product information in English and generates an answer in the same language as the question. 2017; Carmel et al., 2018). In our globalized world, it is essential to enable this technology for customers from different backgrounds. However, existing research focuses predominantly on English and leaves aside other language users. One of the biggest obstacles is the lack of datasets, which prevents us from training, evaluating and developing non-English PQA systems. Despite the growing number of multilingual QA datasets, their main focus is on general domains such as Wikipedia, which generalize poorly when applied to the PQA task, as we show in our experiments. To address this, we present xPQA, the first largescale dataset for cross-lingual PQA enabling nonEnglish questions to be answered from English content. Most comprehensive product information is usually available in a majority language such as English. Therefore, searching for relevant information in English often has a better chance of finding an answer.2 This paper explores how to effectively train systems that retrieve information from English and generate answers in the question language to allow users to ask questions in any language. Fig 1 shows an example. 
Most existing multilingual QA datasets are created by translating English questions, introducing translation artifacts and discrepancies from native speakers' real information-seeking behaviors (Clark et al., 2020a). Instead, we collect questions from the original marketplaces as written by native speakers, hire bilingual annotators to check the relevant product information and write the final answers in their target languages. This eliminates the need for translations and ensures that the information-seeking behaviors of native speakers are accurately represented. Based on the collected dataset, we report baseline results on two subtasks: (a) candidate ranking, which selects the best English candidate that contains the information to answer the non-English question; (b) answer generation, which generates a natural-sounding non-English answer to present to the user based on the selected English candidate. We find that applying a cross-lingual ranker trained on a Wikipedia-based QA dataset generalizes poorly to the product domain. The performance is even worse than training a multilingual ranker on the English in-domain data, suggesting that domain transferability is even more crucial than language transferability. The translation-based approach is the most effective for candidate ranking, while multilingual fine-tuning works best for answer generation. Nonetheless, on both tasks, there is a substantial gap between the English-based and cross-lingual performances. In the following, we first elaborate on the problem formulation for the cross-lingual PQA task (§2), then explain the xPQA data collection process (§3), and present experiment results (§5.2) and conclusions (§6). 2Prior work (Asai et al., 2021b) also shows the effectiveness of using English data as a knowledge source for cross-lingual QA. It is nevertheless helpful to also support searching in all languages, which we leave for future work. ## 2 Problem Formulation Task There are two important tasks for a cross-lingual PQA system: *candidate ranking* and *answer generation*. In candidate ranking, given a question in a target language and a list of candidates in English, the ranker predicts a relevance score for every candidate and selects the top one. Candidate ranking is necessary because a given product webpage may contain hundreds of information pieces about the product, so as a practical matter we select the top candidate to use in generation. After getting the top candidate, an answer generator takes it as input together with the question and produces an answer in the question language. This step is crucial in order to deploy a user-friendly PQA system since the candidate is neither in the user language nor written specifically to answer the question. | Language | Branch | Script | Market | |-----------------|--------------|------------|----------| | German (DE) | Germanic | Latin | Germany | | Italian (IT) | Romance | Latin | Italy | | French (FR) | Romance | Latin | France | | Spanish (ES) | Romance | Latin | Spain | | Portuguese (PT) | Romance | Latin | Brazil | | Polish (PL) | Balto-Slavic | Latin | Poland | | Arabic (AR) | Semitic | Arabic | SA | | Hindi (HI) | Indo-Aryan | Devanagari | India | | Tamil (TA) | Dravidian | Tamil | India | | Chinese (ZH) | Sinitic | Chinese | China | | Japanese (JA) | Japonic | Kanji;Kana | Japan | | Korean (KO) | Han | Hangul | US | Scenario We consider two scenarios for both tasks: *zero-shot* and *fine-tuned*.
*Zero-shot* assumes that we do not have any labeled data and must rely on transfer learning from the English-based PQA dataset 3. *Fine-tuned* assumes that we can further finetune models on a limited number of crosslingual PQA annotations. Both are realistic scenarios as annotations are usually more abundant in English than in other languages (Shen et al., 2023). In our experiments, we use ePQA as the Englishbased PQA dataset, which is an extension of the dataset in Shen et al. (2022a) with coverage and quality improvements. Details are in Appendix A. ## 3 Xpqa Dataset Collection To train and evaluate our two tasks, the xPQA dataset contains annotations for (1) questioncandidate relevance to label whether every candidate is relevant to the question or not, and (2) answers where a natural-sounding answer is manually written if the candidate contains enough information to address the question. The collection process follows the steps below: 1. Question Collection For our question set, we crawl publicly-available community questions from Amazon.com product pages in 11 markets, obtaining questions in 12 different languages. For each language, we choose the corresponding market, then sample 2,500 unique questions. From these sampled questions, we select 1500 questions for each language that are manually verified by our annotators as being in the target language, information seeking, and containing no offensive content. 2. Candidate Collection For every valid question, we link its corresponding product page in the 3Another option is transfer learning from cross-lingual datasets in other domains, as we evaluate later. US market (except for Hindi and Tamil which directly use the India market) and extract all English candidates from product information sources (details in Appendix B.2). Then, we translate every question into English with AWS translate,4feed the translated question into an English-based ranker 5 and obtain top-5 candidates from its candidate set. 3. Relevance Annotation The top-5 English candidates and the non-English original questions are passed to annotators to judge their relevance. Each candidate is marked with one of three labels: "fully answering" (contains enough information to address the question), "partially answering" (contains useful information to partially address the question), and "irrelevant" (does not provide any helpful information). Guidelines are available in Appendix B.3. 4. Answer Search To increase the answer coverage, questions for which none of the top-5 candidates are marked as "fully answering" are given to annotators who are asked to actively search for the answer on the Amazon product page. If they find candidates fully answering the question, these are included with the label "fully answering". 5. Answer Generation For candidates marked as "fully answering", annotators are then asked to write natural, direct answers based on them. All annotators are bilingual, hired through the centific platform 6. The constructed xPQA dataset is split into 1000/400/100 questions as the test/train/dev sets for each language. Table 1 shows all languages included in the xPQA dataset. The detailed annotation process, payment, and statistics are explained in Appendix B. ## 4 Approaches For each task, we experiment with three types of baseline approaches: translate-test, **translatetrain**, and **multilingual** (Hu et al., 2020). Fig 2 provides a summary of these approaches. 
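As a schematic illustration of the runtime differences summarized in Fig 2 and detailed below, the following sketch contrasts the three inference pipelines. The `mt`, `ranker`, and `generator` objects and their method names are hypothetical placeholders for illustration, not the actual system components.

```python
# Schematic sketch of the three inference pipelines (all names are hypothetical).

def translate_test(question, target_lang, candidates_en, mt, ranker_en, generator_en):
    q_en = mt.translate(question, target="en")                # runtime pre-translation
    best = max(candidates_en, key=lambda c: ranker_en.score(q_en, c))
    answer_en = generator_en.generate(q_en, best)             # English answer
    return mt.translate(answer_en, target=target_lang)        # runtime post-translation

def translate_train(question, candidates_en, ranker_xx, generator_xx):
    # Multilingual models fine-tuned offline on machine-translated data (ePQA_MT);
    # no translation step is needed at runtime.
    best = max(candidates_en, key=lambda c: ranker_xx.score(question, c))
    return generator_xx.generate(question, best)

def multilingual(question, target_lang, candidates_en, ranker_xx, generator_xx, mt=None):
    # Same runtime flow as translate-train; in the zero-shot setting the generator
    # still produces English, so a post-translation step may be required.
    best = max(candidates_en, key=lambda c: ranker_xx.score(question, c))
    answer = generator_xx.generate(question, best)
    return mt.translate(answer, target=target_lang) if mt else answer
```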
Translate-test The essential idea here is to rely exclusively on English-centric models and datasets. In the zero-shot scenario, models are trained on the ePQA dataset. In the fine-tuned scenario, we must translate questions and answers in the xPQA 4https://aws.amazon.com/translate/ 5An ELECTRA (Clark et al., 2020c)-based binary classifier model pretrained on large amounts of pseudo labels plus human annotations optimized for the English ranking task. 6https://www.centific.com/ dataset into English as this is an English-centric model. This translated dataset, termed xPQA_MT is used to further fine-tune the zero-shot models. At runtime, we use an external machine translation model to translate the question into English and apply the ranker to select the best candidate. Afterwards, an English-based generator produces an answer in English, which is then post-translated to the target language. **Translate-test** is a common approach in industry as it uses well-trained English-based models and off-the-shelf translation tools without further modifications. However, such a pipelined process introduces runtime latency and can lead to error propagation if translation quality is not perfect. Translate-train In contrast to the above, here we apply all translation processes in training, or offline, so that no additional latency is added at runtime. In the zero-shot scenario, we machine-translate all questions and answers in the ePQA dataset into each of the 12 languages we consider. The resulting dataset, termed ePQA_MT, is used to train a multilingual model. In the fine-tuned scenario, we further finetune the model on the xPQA dataset. As the model is defined to be multilingual, it can directly take input questions in their original languages and output answers in the target languages without any translation process. Multilingual Finally, this approach is similar to the translate-train one in that both use multilingual models rather than an English-only model, but the difference is that the multilingual approach requires no translations at training time. In the zeroshot scenario, it trains a multilingual pretrained model directly on the English-only ePQA dataset and relies only on its own pretrained multilingual knowledge to adapt to other languages. In the finetuned scenario, we further fine-tune the model on the xPQA dataset. Note that this approach still requires runtime post-translation of the generated English answer into the target language. This is because we find that multilingual models can only generate English answers when trained only on English datasets. Although special decoding constraints could be use to restrict output vocabulary to that of the target language, zero-shot multilingual adaptation in generation tasks is still an open challenge (Chen et al., 2022; Zhang et al., 2022). It is worth mentioning that the three types of approaches can be combined. For example, we ![3_image_0.png](3_image_0.png) could follow the **translate-train** approach to train the candidate ranker and follow the **multilingual** approach to train the answer generator. Details of the model implementation are in Appendix C. ## 5 Experiment 5.1 Evaluation Although many QA works report end-to-end performances, we chose not to report them because (1) Most product questions, as well as the information sources such as reviews and customer answers, are subjective. 
The correctness of answers depends on the specific candidates for which there is no universal ground truth (McAuley and Yang, 2016); (2) Only providing answers grounded on references is a critical requirement for an online PQA deployment. When candidate ranking fails to provide suitable candidates in the first stage, even if the answer generator manages to make a good guess,7it is still considered a failure. Therefore, end-to-end evaluations are not suitable and the evaluation of answer generation has to be candidate-dependent. We evaluate the ranker with Precision of the top1 candidate, P@1, as the generated answer is based on the top-1 candidate. To remove the effects of language-specific answering ratios, we report P@1 scores only on the answerable questions where at least one candidate is marked as "fully answering". The generator is evaluated with the sacreBLEU score 8. The generations are produced and evaluated only from candidates marked as "fully answering" since otherwise, the ground truth is undefined. ## 5.2 Main Results Task 1: Candidate Ranking Table 2 shows P@1 of different candidate ranking approaches and their average scores. **Translate-test** performs the best, and its advantage is particularly prominent in the zero-shot scenario. In the fine-tuned scenario, however, the other two approaches can also perform similarly. The **translate-train** approach outperforms the **multilingual** approach mainly for languages that do not use Latin scripts. Even for lowresource languages, such as Tamil whose translation quality is far from satisfactory, translating the training corpus still helps the multilingual model adapt to the target language. This implies existing pre-trained multilingual models are already good at adapting to new languages with Latin scripts. Translating the training corpus is mainly helpful to adapt the model into new scripts (Lauscher et al., 2020). Fine-tuning an English BERT on the ePQA training set leads to a P@1 of 70.7% on the monolingual English test set, which is significantly higher than all other languages except Polish, suggesting scope for substantial improvement.9 Task 2: Answer Generation Table 3 shows the BLEU score of different answer generation approaches and their average scores. In the zero-shot scenario, the **translate-test** approach often performs the best on languages with non-Latin scripts and the **translate-train** approach performs the best 9Note that this does not mean the system works better for Polish than English. As the question set for each language is distinct and not comparable, it could be simply that the sampled Polish questions are easier than English ones. Model DE IT FR ES PT PL AR HI TA ZH JA KO AVG Zero-shot Scenario Translate-test **48.7 48.6 59.7 63.8** 56.9 **63.6 49.2 60.2 44.6 56.1 50.7** 48.9 **54.2** Multilingual 48.4 46.2 59.1 59.8 55.5 60.0 45.1 42.7 40.4 53.0 45.0 45.4 50.1 Translate-train 47.7 47.8 57.4 60.8 **57.0** 58.7 48.7 50.9 44.1 55.8 47.8 **49.8** 52.2 Fine-tuned Scenario Translate-test 51.7 **55.1 64.8 66.8 64.0** 68.0 **57.3 68.4** 50.0 61.9 **57.9** 60.2 **60.5** Multilingual **52.7** 53.5 **64.8** 65.7 63.5 70.6 54.7 67.6 49.0 60.3 51.6 57.8 59.3 Translate-train 52.1 54.0 63.4 67.1 62.1 **71.6** 55.1 67.4 **51.3 64.2** 54.7 **60.6** 60.3 Table 2: P@1 of candidate ranking for each language and the averaged score (AVG) on answerable questions in xPQA testset. 
Model DE IT FR ES PT PL AR HI TA ZH JA KO AVG Zero-shot Scenario Translate-test 7.0 17.1 14.3 11.5 19.4 11.7 **18.5** 8.9 **5.1 19.8 12.9 8.5** 12.9 Multilingual 6.0 14.2 11.6 10.1 18.3 9.9 16.3 7.0 4.8 17.8 11.7 5.9 11.1 Translate-train **16.9 17.1 20.5 14.1 19.5 18.8** 15.9 **15.8** 4.4 16.6 12.8 7.4 **15.0** Fine-tuned Scenario Translate-test 8.9 25.3 15.4 14.2 21.0 16.6 **17.3** 16.3 7.1 21.7 12.3 8.8 15.4 Multilingual 27.2 27.1 22.5 31.3 20.0 32.3 13.4 26.0 16.7 26.2 31.6 44.0 26.5 Translate-train **32.9 31.6 26.6 36.6 24.4 40.1** 16.0 28.5 18.5 30.3 33.7 51.6 **30.9** ![4_image_0.png](4_image_0.png) on languages with Latin scripts. The **translatetrain** approach outperforms the **multilingual** approach with a few exceptions. Interestingly, all the exceptions happen in languages using non-Latin scripts, which contradicts the findings in candidate ranking. We hypothesize that the used pre-trained multilingual model is better at understanding nonLatin scripts than actually generating them because generating the text requires more advanced knowledge of grammar, which cannot be easily distilled from imperfect machine translators (Adelani et al., 2022). Fine-tuning models on the xPQA training data leads to big improvements across approaches, especially for **multilingual** and **translate-train** which do not rely on machine translators at runtime. The **translate-test** approach, due to the error propagation from two machine translation steps, significantly underperforms the other two. Finetuning an English T5 model on the ePQA training set leads to a BLEU score of 49.7%; although BLEU scores are related to language-specific tokenizers and questions, we believe this consistent gap implies large opportunities for improvement. ## 5.3 Analysis Domain vs Language Transferability There are cross-lingual QA datasets in other domains. When building a system for xPQA, is it better to use an English-only in-domain QA dataset or a crosslingual out-of-domain QA dataset? To answer this question, we train a new multilingual ranker on the XOR-TyDi dataset (Asai et al., 2021a), which is a representative cross-lingual QA dataset with real questions from the Wikipedia domain. We treat the gold passage containing the correct answer span as positive and randomly sample 5 other passages as negative. The comparison with our existing **multilingual** approach trained on the ePQA dataset is shown in Figure 3. We can see that fine-tuning models on the ePQA dataset leads to significantly better performance on all languages with few exceptions, suggesting *domain differences are even more crucial than language differences for the candidate* ![5_image_0.png](5_image_0.png) Language P@1 AUPC MAP MRR DE (MT) 48.7 61.8 64.4 66.4 DE (HT) 48.8 (↑0.1) 61.9 64.5 66.5 TA (MT) 44.6 59.7 60.0 62.3 TA (HT) 54.2 (↑9.6) 69.8 66.3 69.1 ranking task in xPQA. It is necessary to collect in-domain annotations for good performance. Answerability Prediction As the amount of information differs among products, it is very likely that many questions are not answerable with existing candidates and the model should not attempt to answer given the available information. A common practice is to use the model score as a predictor for the answerability confidence. To see how effective this is, we visualize the change of precision and recall with varying model score thresholds in Figure 4. 
We can see that in the zero-shot scenario, there is a larger performance variance across languages, especially for the **multilingual** approach which solely relies on the knowledge from the pretrained model. The **multilingual** approach is also more sensitive to the threshold and its recall drops much faster than the other two approaches. Finetuning the xQA training data reduces the gaps between the three approaches. The English model, as expected, consistently performs better, especially in the low-confidence region. Effects of Translation Quality To investigate the effects of the translation quality in the **translatetest** approach, we select German and Tamil as two | Pre-Translate | Rank | Generate | Post-Translate | |-----------------|--------|------------|------------------| | 74.1ms | 21.3ms | 532.4ms | 91.3ms | languages with very different translation qualities and obtain manual translations of their questions. Comparisons to machine-translated questions are shown in Table 4. Apart from P@1, we also show AUPC (Area Under Perturbation Curve), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) scores. We can see that the improvement from using human translations is negligible in German but substantial in Tamil. Even with human translations, we can still see a big gap between performances on English monolingual (70.7%) and xPQA test sets (48.8% and 54.2%), suggesting that question-shape shifts can be even a bigger challenge than language shifts for the candidate ranking task. The problem of language shifts might be crucial only for low-resource languages without decent MT systems such as Tamil. Runtime Latency Table 5 shows the runtime latency of every component tested in one AWS P3.16 instance. We feed questions in all languages one by one to simulate an online environment. As seen, the candidate ranker is fast and the computation over multiple candidates can be easily parallelized. The pre/post-translate costs more time, but the main bottleneck is the answer generation step, which is 25× slower than the ranking. This is clearly more than the latency budget of most online applications and can be the focus of future research. Potential improvements could be in non-autoregressive decoding, efficient attention, or distillation into a smaller model (Tang et al., 2021, 2022; Li et al., 2022). ## 6 Conclusion This paper presents xPQA, a dataset for crosslingual PQA supporting non-English questions to be answered from English product content. We report baseline results and findings for three approaches: translate-test, multilingual, and translatetrain. Experiments show that the translate-test approach performs the best for the candidate ranking task while the translate-train approach performs the best for the answer generation task. However, there remains significant room for improvement relative to an English-based monolingual PQA system. We hope that future research can benefit from our work to improve cross-lingual PQA systems. ## Limitations While the xPQA dataset is created to be as close to the real-world scenario as possible, it has two major drawbacks. Firstly, the candidate set in the dataset does not include the full candidates for a given product because annotating all candidates is prohibitively expensive. The subjectivity of product questions and candidates also makes it hard to get ground-truth short-span answers, which prevents a straightforward end-to-end evaluation over the full candidate set. 
A potential fix is to run human evaluations on the top-1 candidate over the full candidate set from each model, but it'd be costly to do so. A more realistic solution is to have an online evaluation for the best model only, which we leave for future work. Secondly, the answer annotation is based only on a single candidate because handling information from multiple candidates requires careful instructions on conflicting information and summarization skills. This might limit the model in answering complex questions that require inference over multiple candidates. However, we find this case to be very rare in real customer questions. Furthermore, as we do not summarize multiple candidates, the returned answer can be biased toward the opinion of a single customer. Our evaluation also has potential limitations in that (1) We did not extensively evaluate the quality of generated answers with manual annotation. It is known that BLEU scores might not correlate well with human evaluations on generation tasks, and they can be misleading in certain cases; (2) We only compared major types of baseline algorithms and did not explore the effects of leveraging existing larger, more powerful pre-trained language models such as mT0 (Muennighoff et al., 2022) and FlanT5 (Chung et al., 2022). Conclusions might change if we hire annotators to perform more human evaluations or change the model architecture. ## Ethics Statement E-commerce has been increasingly popular these years. Nonetheless, a big amount of people cannot benefit much from it because most E-commerce websites only support a few major languages. Deploying an xPQA system can have a broad impact across a wide range of non-English speakers to assist them in their shopping experience. With a welldeveloped xPQA system, we only need to maintain comprehensive product information in one majority language, but allow non-English speakers easily get access to the product information. This can significantly reduce the maintenance cost and benefit the democratization of AI. Nevertheless, there are two major caveats before deploying a safe, reliable xPQA system: (1) The answer generator needs to be fairly evaluated by humans in terms of faithfulness. While answer generation can greatly improve user-friendliness, it also brings potential risks of providing false information; (2) The users should be well noticed that the provided answer is drawn from the opinion of a single customer or other sources. It cannot reflect the opinion of the vendor, or seller nor imply any trend from the public. ## References David Adelani, Jesujoba Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, et al. 2022. A few thousand translations go a long way! leveraging pre-trained models for african news translation. In *Proceedings of the 2022 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3053–3070. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics. Akari Asai, Jungo Kasai, Jonathan Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021a. XOR QA: Cross-lingual open-retrieval question answering. 
In *Proceedings of the 2021 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 547–564, Online. Association for Computational Linguistics. Akari Asai, Jungo Kasai, Jonathan H Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021b. Xor qa: Cross-lingual open-retrieval question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 547–564. Gianni Barlacchi, Ivano Lauriola, Alessandro Moschitti, Marco Del Tredici, Xiaoyu Shen, Thuy Vu, Bill Byrne, and Adrià de Gispert. 2022. FocusQA: Opendomain question answering with a context in focus. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5195–5208, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. David Carmel, Liane Lewin-Eytan, and Yoelle Maarek. 2018. Product question answering using customer generated content-research challenges. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1349– 1350. Shiqian Chen, Chenliang Li, Feng Ji, Wei Zhou, and Haiqing Chen. 2019. Driven answer generation for product-related questions in e-commerce. In *Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining*, pages 411– 419. Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze Chen, Hao Zhou, and Lei Li. 2022. Mtg: A benchmark suite for multilingual text generation. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2508– 2527. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020a. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the Association for Computational Linguistics*, 8:454–470. Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020b. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the* Association for Computational Linguistics, 8:454– 470. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020c. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555. Yang Deng, Yaliang Li, Wenxuan Zhang, Bolin Ding, and Wai Lam. 2022. Toward personalized answer generation in e-commerce via multi-perspective preference modeling. *ACM Transactions on Information* Systems (TOIS), 40(4):1–28. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Shen Gao, Xiuying Chen, Zhaochun Ren, Dongyan Zhao, and Rui Yan. 2021. Meaningful answer generation of e-commerce question-answering. ACM Transactions on Information Systems (TOIS), 39(2):1– 26. 
Momchil Hardalov, Todor Mihaylov, Dimitrina Zlatkova, Yoan Dinkov, Ivan Koychev, and Preslav Nakov. 2020. EXAMS: A multi-subject high school examinations dataset for cross-lingual and multilingual question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5427–5444, Online. Association for Computational Linguistics. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In *International Conference on Machine Learning*, pages 4411–4421. PMLR. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980. Tuan Lai, Trung Bui, Sheng Li, and Nedim Lipka. 2018. A simple end-to-end question answering model for product information. In Proceedings of the First Workshop on Economics and Natural Language Processing, pages 38–43. Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and ´ Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In *Proceedings of the 2020* Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483–4499, Online. Association for Computational Linguistics. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315– 7330, Online. Association for Computational Linguistics. Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, et al. 2017. Alime assist: An intelligent assistant for creating an innovative e-commerce experience. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2495–2498. Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, and Dan Roth. 2022. Dq-bart: Efficient sequence-tosequence model via joint distillation and quantization. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 2: Short Papers), pages 203–211. Jiahua Liu, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2019. XQA: A cross-lingual open-domain question answering dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2358–2368, Florence, Italy. Association for Computational Linguistics. Shayne Longpre, Yi Lu, and Joachim Daiber. 2021. MKQA: A linguistically diverse benchmark for multilingual open domain question answering. *Transactions of the Association for Computational Linguistics*, 9:1389–1406. Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In *Proceedings of the 25th International Conference on World Wide Web*, pages 625–635. Samaneh Moghaddam and Martin Ester. 2011. Aqa: aspect-based opinion question answering. In *2011* IEEE 11th International Conference on Data Mining Workshops, pages 89–96. IEEE. Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. *arXiv preprint* arXiv:2211.01786. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *The Journal of Machine Learning Research*, 21(1):5485–5551. Xiaoyu Shen, Gianni Barlacchi, Marco Del Tredici, Weiwei Cheng, Bill Byrne, and Adrià de Gispert. 2022a. Product answer generation from heterogeneous sources: A new benchmark and best practices. In *Proceedings of The Fifth Workshop on eCommerce and NLP (ECNLP 5)*, pages 99–110. Xiaoyu Shen, Gianni Barlacchi, Marco Del Tredici, Weiwei Cheng, and Adrià de Gispert. 2022b. semipqa: A study on product question answering over semistructured data. In *Proceedings of The Fifth Workshop on e-Commerce and NLP (ECNLP 5)*, pages 111–120. Xiaoyu Shen, Svitlana Vakulenko, Marco Del Tredici, Gianni Barlacchi, Bill Byrne, and Adrià de Gispert. 2022c. Low-resource dense retrieval for opendomain question answering: A comprehensive survey. arXiv preprint arXiv:2208.03197. Xiaoyu Shen, Svitlana Vakulenko, Marco Del Tredici, Gianni Barlacchi, Bill Byrne, and Adrià de Gispert. 2023. Neural ranking with weak supervision for open-domain question answering: A survey. In *Findings of the Association for Computational Linguistics:* EACL 2023, pages 1691–1705. Ze Tang, Chuanyi Li, Jidong Ge, Xiaoyu Shen, Zheling Zhu, and Bin Luo. 2021. Ast-transformer: Encoding abstract syntax trees efficiently for code summarization. In *2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE)*, pages 1193–1195. IEEE. Ze Tang, Xiaoyu Shen, Chuanyi Li, Jidong Ge, Liguo Huang, Zhelin Zhu, and Bin Luo. 2022. Ast-trans: code summarization with efficient tree-structured attention. In Proceedings of the 44th International Conference on Software Engineering, pages 150– 162. Binxia Xu, Siyuan Qiu, Jie Zhang, Yafang Wang, Xiaoyu Shen, and Gerard de Melo. 2020. Data augmentation for multiclass utterance classification–a systematic study. In *Proceedings of the 28th international conference on computational linguistics*, pages 5494–5506. Hu Xu, Bing Liu, Lei Shu, and Philip S Yu. 2019. Review conversational reading comprehension. arXiv preprint arXiv:1902.00821. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498. Jianxing Yu, Zheng-Jun Zha, and Tat-Seng Chua. 2012. Answering opinion questions on products by exploiting hierarchical organization of consumer reviews. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 391– 401. Qingyu Zhang, Xiaoyu Shen, Ernie Chang, Jidong Ge, and Pengke Chen. 2022. Mdia: A benchmark for multilingual dialogue generation in 46 languages. arXiv preprint arXiv:2208.13078. ## A Difference With Previous Datasets Product Question Answering Product question answering (PQA) differs from general-knowledge QAs in that questions often seek subjective opinions on specific products, so earlier research usually treated it as an opinion mining problem (Moghaddam and Ester, 2011; Yu et al., 2012). Recent advances in neural networks propagated the use of dense retrieval and generation models to provide direct answers. 
Many relevant datasets are curated to facilitate this study (Chen et al., 2019; Xu et al., 2020; Gao et al., 2021; Deng et al., 2022; Shen et al., 2022b,c). However, they are either based on simulated questions, or community questionanswers where the answers are noisy and have no direct connection with product information candidates (Lai et al., 2018; Xu et al., 2019; Barlacchi et al., 2022). The only exception is Shen et al. (2022a) where exact annotations are provided for both candidate relevance and answer generation, but it focuses only on one product category and the annotation quality is not good enough. Specifically, we sample about 2000 question-candidate pairs then perform an in-house annotation and find around 20% of the annotations are incorrect. As a result, we construct the ePQA dataset with the following main differences from the dataset in Shen et al. (2022a): (1) It has higher annotation quality with rounds of verifications. In our in-house annotation, the error rate is less than 5%; (2) It does not restrict the product categories, while the original dataset focuses only on the toys and games products; (3) It defines finer-grained 3-class labels for each candidate, while the original dataset contains only binary labels; (4) Every candidate is checked with its context (surrounding sentences) to make sure the label is correct. To the best of our knowledge, all existing PQA datasets are monolingual and questions are usually in high-resource languages such as English or Chinese, which leads to our motivation of building a cross-lingual PQA dataset. Cross-Lingual Question Answering Recently, many non-English question answering (QA) datasets in the general Wikipedia domain have been proposed (Lewis et al., 2020; Artetxe et al., 2020; Clark et al., 2020b; Hardalov et al., 2020). Several datasets focus on the *open-retrieval (open-domain)* setting, where a gold document or paragraph is not pre-given and a system needs to search documents to answer questions (Liu et al., 2019; Asai et al., 2021a; Longpre et al., 2021). Importantly, all of those prior datasets are created based on Wikipedia or school exams, and there is no prior work on cross-lingual product QA. Notably, *ePQA* contains 131,52/1,000/2,000 questions in the train/dev/test sets respectively, which is significantly larger than xPQA (as in realistic scenarios). It can be used to analyze the performance gap between mono-lingual PQA and cross-lingual PQA. ## B Dataset Collection B.1 Question Collection In the question collection phase, questions are kept if they fulfill the following criteria: (1) It is identified as the target language through Amazon Comprehend 10; (2) It contains no URL links; (3) It contains at most one question mark so as to avoid multiple questions; (4) It contains at least 3 words and less than 20 words; (5) Its corresponding product is also available in the US market. 11 ## B.2 Candidate Processing Our candidates come from 6 information sources: (1) product title, (2) semi-structured attributes, (3) product bullet points, (4) product description, (5) community answers (excluding the answer that directly replies to the question); (6) user reviews. Every product title and attribute is treated as a single candidate. For the other product information, we split them into sentences and treat each sentence as the candidate. For candidates from community answers, We further concatenate them with the corresponding community questions to provide more context. 
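The sketch below illustrates the candidate construction just described. The product record fields and the sentence splitter are assumptions for illustration only, not the exact extraction pipeline.

```python
# Illustrative sketch of candidate construction from the 6 information sources
# (field names and the splitter are assumptions, not the exact pipeline).
import re

def split_sentences(text):
    # simple rule-based splitter used for illustration only
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def build_candidates(product):
    candidates = [product["title"]]
    candidates += [f"{k}: {v}" for k, v in product.get("attributes", {}).items()]
    for field in ("bullet_points", "description"):
        for text in product.get(field, []):
            candidates += split_sentences(text)
    for qa in product.get("community_qa", []):
        # concatenate the community question with each answer to provide context
        candidates += [f"{qa['question']} {ans}" for ans in qa["answers"]]
    for review in product.get("reviews", []):
        candidates += split_sentences(review)
    return candidates
```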
All candidates are lowercased and emojis are removed. Numbers from the semi-structured attributes are further normalized to keep at most 2 decimals.

## B.3 Relevance Annotation

Each candidate is marked with one of three labels: "fully answering" (it contains enough information to address the question), "partially answering" (it contains useful information to partially address the question), and "irrelevant" (it is not useful in answering the question at all). To make sure the candidate is properly understood, we also provide its context (surrounding sentences) to the annotators. The exact definitions for the three labels and the guidelines used are:

- Fully answering. The response contains clear information that indicates the answer. It can take some inference steps to get the answer, but it must contain enough information to arrive at the answer.
- Partially answering (relevant but not fully answering). The response contains useful information that helps one understand more and narrow down the range of the answer, yet not enough to get the exact answer from it.
- Irrelevant. The response does not provide useful relevant information at all, and a customer will not get anything new about their question after reading it.

Note that in this step, annotators do NOT need to consider factual correctness. For the question "what color is it?", it does not matter whether the response says it is blue or red. Annotators should focus on the content only, not on factual correctness. Besides, even if the response contains other extra information or is not natural, as long as the proper information is included, it is considered fully answering.

Specifically, Fully answering means the response contains enough information to let one draw the answer. The criteria of fully answering should NOT be overly strict. Annotators can be lenient with the word choice, as long as the response conveys the proper meaning. For example: Question: is it an awesome gift for my girl friend? Response: it is a nice valentine gift for your partner. In this case, the difference between "awesome" and "nice" is not relevant, as the response is either way saying that it is a good gift for your girl friend or partner, and should thereby be judged as "fully answering". Another example: Question: is it comfortable to sleep on for a 6" tall man? Response: It is comfortable to lie down for tall people. Annotators should not be overly strict about whether 6" can be considered as "tall" and whether "lie down" is equivalent to "sleep on", etc. Based on common sense, if the immediate impression after reading the response provides the needed information, one should NOT overthink other ways of interpreting this response.

Helpful but not fully answering means the response contains helpful information but is not enough to answer the question, or it fully answers the question but the information is uncertain. "Helpful" means it provides useful information to help you know more about the question or narrow down the scope of the answer. For example: Question: Is it good for my 3-year-old kid? Response: My 5-year-old son likes it. It cannot fully tell whether a 3-year-old will like it, but knowing that a 5-year-old likes it is helpful information. It helps you narrow down the range of the answer: you know it is for kids but not adults, just not whether it works exactly for a 3-year-old.

"Irrelevant" means the response provides zero useful information about the question and is totally useless. Imagine you are a customer who raises this question; you should only select this option when you cannot find any useful information in the response.

## B.4 Answer Generation

During the answer annotation, annotators are instructed to provide a natural, informative, and complete sentence that directly answers the user question given the information provided in the response. The provided answer is required to be:

- natural. It should be a fluent, natural-sounding sentence.
- informative. It should provide key information or explanations for users to better understand the question. It cannot be a single word like "Yes" or "No" without further content.
- complete. It should be a complete sentence that provides more context instead of a short span.

There is also a caveat to avoid copying the candidate exactly. Annotators should always extract useful information from it and show the reasoning step in the answer to make it a natural reply. If the candidate comes from customer-provided content, they are further instructed to write from a third-party viewpoint. For user-provided content, the answer will be in the form of "A customer says he feels ..." instead of "I feel ...".

## B.5 Quality Control And Annotation Cost

Annotations are done through the Centific platform (https://www.centific.com/). The whole annotation process is summarized in Figure 6. From the Home of the webapp, we can see the status of the task (how many hits have been done and how many hits remain to be annotated). In the Quality Assessment mode, the assessor can search and select annotators and then check the completed hits at any time. When the assessor checks the hits, they can correct them directly and give feedback to the annotators to improve annotation quality. The annotation cost differs among languages and tasks. Table 6 provides a summary.

| Task | DE | IT | FR | ES | PT | PL | AR | HI | TA | ZH | JA | KO | EN |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Zero-shot Scenario | | | | | | | | | | | | | |
| Relevance Annotate | 0.30 | 0.22 | 0.30 | 0.22 | 0.22 | 0.24 | 0.22 | 0.13 | 0.19 | 0.09 | 0.32 | 0.38 | 0.18 |
| Answer Generation | 0.27 | 0.20 | 0.27 | 0.19 | 0.20 | 0.24 | 0.20 | 0.12 | 0.19 | 0.10 | 0.29 | 0.38 | 0.24 |
| Answer Search | 2.00 | 1.50 | 1.85 | 1.35 | 1.35 | 1.35 | 1.35 | 0.85 | 1.15 | 0.65 | 2.15 | 2.15 | - |
| Translation | 1.05 | - | - | - | - | - | - | - | 0.55 | - | - | - | - |

Table 6: Annotation cost per unit for each task (in US dollars). The answer search task for English questions is annotated in-house so there is no external cost. The translation annotation is only conducted for German and Tamil.
| Language | Branch | Script | Market | Train+Dev #Inst | Train+Dev #Ans | Test #Inst | Test #Ans | Test %Full | Test %Rel |
|---|---|---|---|---|---|---|---|---|---|
| English (EN) | Germanic | Latin | US | 131,520 | 24,928 | 20,142 | 4,392 | 84.1 | 95.2 |
| German (DE) | Germanic | Latin | Germany | 5,110 | 806 | 10,201 | 1,504 | 73.4 | 86.8 |
| Italian (IT) | Romance | Latin | Italy | 5,081 | 571 | 10,168 | 1,316 | 60.6 | 79.9 |
| French (FR) | Romance | Latin | France | 5,047 | 838 | 10,135 | 1,684 | 71.1 | 96.7 |
| Spanish (ES) | Romance | Latin | Spain | 5,055 | 1,003 | 10,112 | 1,961 | 78.5 | 91.5 |
| Portuguese (PT) | Romance | Latin | Brazil | 5,064 | 896 | 10,120 | 1,775 | 78.9 | 98.4 |
| Polish (PL) | Balto-Slavic | Latin | Poland | 5,053 | 925 | 10,101 | 1,873 | 76.7 | 90.1 |
| Arabic (AR) | Semitic | Arabic | SA | 5,097 | 752 | 10,178 | 1,544 | 71.3 | 84.6 |
| Hindi (HI) | Indo-Aryan | Devanagari | India | 5,175 | 922 | 10,319 | 1,670 | 91.7 | 95.3 |
| Tamil (TA) | Dravidian | Tamil | India | 5,076 | 892 | 10,166 | 1,584 | 73.4 | 81.7 |
| Chinese (ZH) | Sinitic | Chinese | China | 5,095 | 1,028 | 10,148 | 1,865 | 81.2 | 91.5 |
| Japanese (JA) | Japonic | Kanji;Kana | Japan | 5,111 | 939 | 10,201 | 1,748 | 81.2 | 88.5 |
| Korean (KO) | Han | Hangul | US | 5,060 | 642 | 10,116 | 1,277 | 59.6 | 70.5 |

Table 7: Statistics of the ePQA and xPQA datasets.

## B.6 Dataset Statistics

To increase the number of negative samples, for every question we further randomly sample 5 candidates from the candidate set of corresponding products. These negative candidates, together with the annotated candidates, will form a closed-pool candidate set to evaluate the candidate ranker. Table 7 shows the statistics of the ePQA and xPQA datasets.

## C Experiments

For the candidate ranking task, we initialize our model with BERT-base (Devlin et al., 2019) in translate-test and mBERT-base in the other two approaches. Following the common practice, we concatenate the question and candidate (split by the <SEP> token) and then feed it into the encoder. An MLP layer is added on top of the first <CLS> token to output three logits. These logits go through the softmax layer to represent the probability of three labels. At runtime, we use the probability of "fully answering" as the score for each candidate. For the answer generation task, we initialize our model with T5-base (Raffel et al., 2020) for the translate-test approach and mT5-base (Xue et al., 2021) for the other two approaches. The input is the question concatenated with the candidate and the output is the ground-truth answer. At runtime, we generate the output with beam search (beam size as 5). Both the ranker and generator are trained with standard cross entropy loss. We implement all models based on the HuggingFace Transformers library (https://huggingface.co/) with PyTorch (https://pytorch.org/). Models are optimized with the Adam optimizer (Kingma and Ba, 2014). We truncate the total input length to 128 subword tokens and select the learning rate from [5e-6, 1e-5, 3e-5, 5e-5, 1e-4]. The warm-up step is selected from [5%, 10%, 20%, 50%] of the whole training steps. For the ranker, we choose the best configuration based on the accuracy of the validation set. For the generative model, we choose the best configuration based on the perplexity of the validation set. In the end, we set the learning rate of the ranker as 3e-5 and that of the generator as 1e-5. The warm-up steps are set to 20% for both. The batch size is set as 64. We evaluate the model performance every 1% of the whole training step to select the best checkpoint. All models are trained on one AWS P3.16 instance which includes 8 Nvidia V100 GPUs. The random seed is set as 42.
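The candidate ranker described above can be sketched as a simple cross-encoder. The snippet below is a minimal illustration, not the authors' code: it uses the standard HuggingFace sequence-classification head as a stand-in for the MLP over the <CLS> token, and the checkpoint name and label ordering are assumptions.

```python
# Minimal sketch of the candidate ranker: an (m)BERT cross-encoder over
# "question [SEP] candidate" with a 3-way head, scored by P(fully answering).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["irrelevant", "partially_answering", "fully_answering"]  # assumed order

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
ranker = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)


def rank_candidates(question: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Score each candidate by the probability of the 'fully answering' class."""
    enc = tokenizer(
        [question] * len(candidates),
        candidates,
        truncation=True,
        max_length=128,   # total input truncated to 128 subword tokens
        padding=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = ranker(**enc).logits
    scores = torch.softmax(logits, dim=-1)[:, LABELS.index("fully_answering")]
    return sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1])
```

The answer generator can be sketched analogously by fine-tuning an (m)T5 model on the concatenated question and top candidate and decoding with a beam size of 5.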
hu-etal-2023-learn
Learn over Past, Evolve for Future: Forecasting Temporal Trends for Fake News Detection
https://aclanthology.org/2023.acl-industry.13
Fake news detection has been a critical task for maintaining the health of the online news ecosystem. However, very few existing works consider the temporal shift issue caused by the rapidly-evolving nature of news data in practice, resulting in significant performance degradation when training on past data and testing on future data. In this paper, we observe that the appearances of news events on the same topic may display discernible patterns over time, and posit that such patterns can assist in selecting training instances that could make the model adapt better to future data. Specifically, we design an effective framework FTT (Forecasting Temporal Trends), which could forecast the temporal distribution patterns of news data and then guide the detector to fast adapt to future distribution. Experiments on the real-world temporally split dataset demonstrate the superiority of our proposed framework.
## Learn Over Past, Evolve For Future: Forecasting Temporal Trends For Fake News Detection

Beizhe Hu1,2 Qiang Sheng1,2,∗ Juan Cao1,2 **Yongchun Zhu**1,2 Danding Wang1 Zhengjia Wang1,2 **Zhiwei Jin**3

1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences 2University of Chinese Academy of Sciences 3 ZhongKeRuijian Technology Co., Ltd.

{hubeizhe21s,shengqiang18z,caojuan,zhuyongchun18s}@ict.ac.cn {wangdanding,wangzhengjia21b}@ict.ac.cn, jinzhiwei@ruijianai.com

∗Corresponding author.

## Abstract

Fake news detection has been a critical task for maintaining the health of the online news ecosystem. However, very few existing works consider the temporal shift issue caused by the rapidly-evolving nature of news data in practice, resulting in significant performance degradation when training on past data and testing on future data. In this paper, we observe that the appearances of news events on the same topic may display discernible patterns over time, and posit that such patterns can assist in selecting training instances that could make the model adapt better to future data. Specifically, we design an effective framework FTT (Forecasting Temporal Trends), which could forecast the temporal distribution patterns of news data and then guide the detector to fast adapt to future distribution. Experiments on the real-world temporally split dataset demonstrate the superiority of our proposed framework. The code is available at https://github.com/ICTMCG/FTTACL23.

## 1 Introduction

Automatic fake news detection, which aims at distinguishing inaccurate and intentionally misleading news items from others automatically, has been a critical task for maintaining the health of the online news ecosystem (Shu et al., 2017). As a complement to manual verification, automatic fake news detection enables efficient filtering of fake news items from a vast news pool. Such a technique has been employed by social media platforms like Twitter to remove COVID-19-related misleading information during the pandemic (Roth, 2022).

Over the past decade, most fake news detection researchers have followed a conventional paradigm of collecting a fixed dataset and *randomly* dividing it into training and testing sets. However, the assumption that news data subsets are independent and identically distributed often does not hold true in real-world scenarios. In practice, a fake news detection model is trained on *offline data* collected up until the current time period but is required to detect fake news in newly arrived *online data* at the upcoming time period. Due to the rapidly-evolving nature of news, news distribution can vary with time, namely *temporal shift* (Du et al., 2021; Gaspers et al., 2022), leading to the distributional difference between offline and online data. Recent empirical studies (Zhang et al., 2021; Mu et al., 2023) evidence that fake news detection models suffer significant performance degradation when the dataset is temporally split. Therefore, the temporal shift issue has been a crucial obstacle to real-world fake news detection systems.

The temporal shift scenario presents a more significant challenge than common *domain shift* scenarios. Most existing works on the domain shift in fake news detection focus on transfer among predefined news channels (e.g., politics) (Silva et al., 2021b; Mosallanezhad et al., 2022; Lin et al., 2022; Nan et al., 2022).
However, consecutive data slices over time have various types of temporal dependencies and non-explicit distributional boundaries, making the temporal shift challenging. Moreover, these works assume the availability of target domain data, which is impossible in temporal shift scenarios. Under such constraints, our aim is to train a model using presently available data to generalize to future online data (corresponding to the temporal generalization task; Wang et al., 2022). Others that improve the generalizability to unseen domains learn domain-invariant features by adversarial learning (Wang et al., 2018) and domain-specific causal effect removal (Zhu et al., 2022a), but do not consider the characteristics of temporal patterns of news events.

In this paper, we posit that the appearance of news events on the same topic presents diverse temporal patterns, which can assist in evaluating the importance of previous news items and boost the detection of fake news in the upcoming time period. In Figure 1, we exemplify this assumption using the statistics of news items on three topics in the Chinese Weibo dataset: Topic 1 presents the temporal pattern of *decrease*, where news about child trafficking becomes less frequent. Topic 2 presents the periodicity of news related to the college entrance exam, which takes place annually in the second quarter (Q2; we denote the four quarters of a calendar year as Q1-Q4, so Q1 stands for January through March). In Topic 3, news items about falling accidents appear repeatedly and exhibit an *approximate stationary* pattern. Such temporal patterns indicate the different importance of news samples in the training set for detection in future quarters. For instance, instances of Topic 2 in the training set are particularly important for effectively training the detector to identify fake news in Q3.

To this end, we propose to model the temporal distribution patterns and forecast the topic-wise distribution in the upcoming time period for better temporal generalization in fake news detection, where the forecasted result guides the detector to fast adapt to future distribution. Figure 2 illustrates our framework FTT (Forecasting Temporal Trends). We first map training data to vector space and perform clustering to discover topics. Then we model the temporal distribution and forecast the frequency of news items for each topic using a decomposable time series model. Based on the forecasts, we evaluate the importance of each item in the training data for the next time period by manipulating its weight in the training loss.

Our contributions are summarized as follows:

- **Problem:** To the best of our knowledge, we are the first to incorporate the characteristics of topic-level temporal patterns for fake news detection.
- **Method:** We propose a framework for Forecasting Temporal Trends (FTT) to tackle the temporal generalization issue in fake news detection.
- **Industrial Value:** We experimentally show that our FTT overall outperforms five compared methods while maintaining good compatibility with any neural network-based fake news detector.

## 2 Related Work

Fake News Detection. Fake news detection is generally formulated as a binary classification task between real and fake news items. Research on this task could be roughly grouped into content-only and social context-based methods.
Content-only methods take the news content as the input, including texts (Sheng et al., 2021), images (Qi et al., 2019), and videos (Bu et al., 2023), and aim at finding common patterns in news appearances. In this paper, we focus on textual contents but our method could be generalized to other modalities. Previous text-based studies focus on sentiment and emotion (Ajao et al., 2019; Ghanem et al., 2021), writing style (Przybyla, 2020), discourse structure (Karimi and Tang, 2019), etc. Recent studies address the domain shift issues across news channels and propose multi-domain (Nan et al., 2021; Zhu et al., 2022b) and cross-domain (Nan et al., 2022; Lin et al., 2022) detection methods. Zhu et al. (2022a) design a causal learning framework to remove the non-generalizable entity signals. Social context-based methods leverage crowd feedback (Kochkina et al., 2018; Shu et al., 2019; Zhang et al., 2021), propagation patterns (Zhou and Zafarani, 2019; Silva et al., 2021a), and social networks (Nguyen et al., 2020; Min et al., 2022), which have to wait for the accumulation of such social contexts. Considering the in-time detection requirement, our proposed framework falls into the category of content-only methods, where we provide a new perspective for addressing the temporal generalization issue by forecasting temporal trends.

Temporal Generalization. The temporal generalization issue presents a situation in which models are trained on past data but required to perform well on unavailable and distribution-shifted future data. It has been addressed in a variety of applications such as review classification (Huang and Paul, 2019), named entity recognition (Rijhwani and Preotiuc-Pietro, 2020), and air quality prediction (Du et al., 2021). Recently, Gaspers et al. (2022) explore several time-aware heuristic-based instance reweighting methods based on recency and seasonality for an industrial spoken language understanding scenario. Our work follows this line of instance reweighting, but we attempt to model the temporal patterns and forecast topic-wise distribution to better adapt to future data.

## 3 Proposed Framework

Our framework FTT is presented in Figure 2, where the instances from past consecutive time periods in the original training set are reweighted according to the forecasted topic-wise distribution for generalizing better in the upcoming time period. In the following, we first provide the problem formulation and subsequently detail the procedures.

## 3.1 Problem Formulation

Given a dataset $D = \{D_q\}_{q=1}^{Q}$ consisting of $Q$ subsets that contain news items from $Q$ consecutive time periods, respectively, our goal is to train a model on $\{D_q\}_{q=1}^{Q-1}$ that generalizes well on $D_Q$. In $D$, an instance is denoted as $(x_i, y_i)$, where the ground-truth label $y_i = 1$ if the content $x_i$ is fake. In practice, we retrain and redeploy the fake news detector at a fixed time interval to reflect the effects of the latest labeled data. We set the interval as three months (i.e., a quarter) since a shorter interval does not allow sufficient accumulation of newly labeled fake news items. In the following, we set $D_q$ as the subset corresponding to news in a quarter of a calendar year.

## 3.2 Step 1: News Representation

We first transform the news content into a vector space to obtain its representation, which will be used for similarity calculation in the subsequent clustering step.
We employ SentenceBERT (Reimers and Gurevych, 2019), which is widely used for sentence representation (e.g., Shaar et al., 2020). For instance $x_i$, the representation vector is $\mathbf{x}_i \in \mathbb{R}^{768}$.

## 3.3 Step 2: Topic Discovery

We perform clustering on news items based on the representation obtained in Step 1 to group news items into distinct clusters which correspond to topics. Due to the lack of prior knowledge about the topic number, we adopt the single-pass incremental clustering algorithm, which does not require a preset cluster number. We first empirically set a similarity threshold $\theta_{sim}$ to determine when to add a new cluster. When an item arrives, it is assigned to the existing cluster whose center is the nearest to it if the cosine similarity to that center is larger than $\theta_{sim}$. Otherwise, it is considered as an item on a new topic and thus placed in a new independent cluster.

## 3.4 Step 3: Temporal Distribution Modeling And Forecasting

Based on the clustering results, we model the temporal distribution of different news topics and forecast the topic-wise distribution in the upcoming time period in this step. Note that we do not consider clusters with fewer news items than the threshold $\theta_{count}$ since they are too small to present significant temporal patterns.

Modeling. Assuming that $T$ topics are preserved, we first count the number of news items per quarter within each topic. The counts of the same quarter are then normalized across topics to obtain the quarterly frequency sequence of each topic (denoted as $f$). To model the temporal distribution, we adopt a decomposable time series model (Harvey and Peters, 1990) on the quarterly sequences and consider the following two trends (exemplified using Topic $i$):

1) General Trend. A topic may increase, decrease, or have a small fluctuation in terms of a general non-periodic trend (e.g., Topics 1 and 3 in Figure 1). To fit the data points, we use a piecewise linear function:

$$g_{i}(f_{i,q})=k_{i}f_{i,q}+m_{i},\qquad(1)$$

where $k_i = k + \mathbf{a}(q)^{\top}\boldsymbol{\delta}$ is the growth rate, $f_{i,q}$ is the frequency of Topic $i$ in Quarter $q$, and $m_i = m + \mathbf{a}(q)^{\top}\boldsymbol{\gamma}$ is the offset. $k$ and $m$ are initial parameters. $\mathbf{a}(q)$ records the changepoints of growth rates and offsets, while $\boldsymbol{\delta}$ is the rate adjustment term and $\boldsymbol{\gamma}$ is a smoothing term.

2) Quarterly Trend. For topics having quarterly periodic trends like Topic 2 in Figure 1, we add four extra binary regressors corresponding to Q1~Q4 to inform the regression model of the quarter that a data point in the input sequence belongs to. For Topic $i$ and Quarter $q$, we obtain the quarterly seasonality function $s_i(f_{i,q})$ by summing the four regression models.

Forecasting. We fit the model using the time series forecasting tool Prophet (Taylor and Letham, 2018) with the temporal distribution of topics from Quarter 1 to Quarter $Q-1$. To forecast the trend of Topic $i$ in the upcoming Quarter $Q$, we sum up the two trend modeling functions:

$$p_{i}(f_{i,Q})=g_{i}(f_{i,Q})+s_{i}(f_{i,Q}).\qquad(2)$$

## 3.5 Step 4: Forecast-Based Adaptation

Based on the topic-wise forecasts of frequency distribution in Quarter $Q$, we apply instance reweighting to the training set and expect the model trained using the reweighted set to better adapt to the future data in Quarter $Q$. We first filter out topics that do not exhibit obvious regularity. Specifically, we remove the topics which have a mean absolute percentage error (MAPE) larger than a threshold $\theta_{mape}$ during the regression fitting process.
For a Topic $i$ in the preserved set $Q'$, we calculate and then normalize the ratio between the forecasted frequency of Topic $i$, $p_i(f_{i,Q})$, and the sum of all forecasted frequencies of the preserved topics:

$$w_{i,Q}=\mathrm{Bound}\left({\frac{p_{i}(f_{i,Q})}{\sum_{i\in Q^{\prime}}p_{i}(f_{i,Q})}}\right),\qquad(3)$$

where Bound is a function to constrain the range of the calculated weights. We set weights smaller than $\theta_{lower}$ to $\theta_{lower}$ and weights larger than $\theta_{upper}$ to $\theta_{upper}$, respectively, to avoid instability during the training process. For topics that are not included in $Q'$, we set their weights as 1. The new weight of the training set instances of Topic $i$, $w_{i,Q}$, corresponds to our forecast of how frequently news items of this topic will emerge in the upcoming period $Q$. If the forecasted frequency of Topic $i$ indicates a decreasing trend, the value will be smaller than 1 and thus instances of this topic will be down-weighted; conversely, if the forecasted distribution indicates an increasing trend, the value will be greater than 1 and the instances will be up-weighted. In the next step, we will show the reweighting process during training.

## 3.6 Step 5: Fake News Detector Training

Our framework FTT is compatible with any neural network-based fake news detector. Here, we exemplify how FTT helps detectors' training using a pretrained BERT model (Devlin et al., 2019). Specifically, given an instance $x_i$, we concatenate the special token [CLS] and $x_i$, and feed them into BERT. The average output representation of non-padded tokens, denoted as $\mathbf{o}_i$, is then fed into a multi-layer perceptron (MLP) with a sigmoid activation function for final prediction:

$$\hat{y}_{i}=\operatorname{sigmoid}(\operatorname{MLP}(\mathbf{o}_{i})).\qquad(4)$$

Our difference lies in using the new weights based on the forecasted temporal distribution to increase or decrease the impact of instances during backpropagation. Unlike most cases that use an *average* cross-entropy loss, we minimize the *weighted* cross-entropy loss function during training:

$${\mathcal{L}}=-{\frac{1}{N}}\sum_{i=1}^{N}w_{i,Q}\,{\mathrm{CrossEntropy}}(y_{i},{\hat{y}}_{i}),\qquad(5)$$

where $w_{i,Q}$ is the new weight for instance $x_i$ and $y_i$ is its ground-truth label. $N$ is the size of a mini-batch of the training set.

## 4 Evaluation

We conduct experiments to answer the following evaluation questions:

- **EQ1:** Can FTT bring improvement to the fake news detection model in temporal generalization scenarios?
- **EQ2:** How does FTT help with fake news detection models?

## 4.1 Dataset

Our data comes from a large-scale Chinese fake news detection system, covering the time period from January 2016 to December 2020. To meet the practical requirements, the data was divided by quarters based on the timestamp. Unlike the existing academic datasets (Shu et al., 2020; Sheng et al., 2022), the dataset is severely imbalanced. To avoid instability during training, we randomly undersampled the subset of each quarter to achieve a ratio of 1:1 between fake and real news. Identical to the real-world setting, we adopt a *rolling training* experimental setup. If we train a model to generalize well in the time period $Q$, the training, validation, and testing sets would be $\{D_i\}_{i=1}^{Q-2}$, $D_{Q-1}$, and $D_Q$, respectively. If the target is $Q+1$, then the three subsets would be $\{D_i\}_{i=1}^{Q-1}$, $D_Q$, and $D_{Q+1}$.
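Before turning to the experimental settings, the sketch below strings together Steps 2-5 described above: single-pass clustering, Prophet forecasting with four quarterly regressors, the bounded weights of Eq. (3), and the weighted loss of Eq. (5). It is an illustrative reading of the description, not the released code; column names, default thresholds, and the overall structure are assumptions.

```python
# Hypothetical sketch of FTT's core steps; thresholds follow the Q1/Q2 settings
# reported below, but the data layout and helper structure are illustrative.
import numpy as np
import pandas as pd
import torch
import torch.nn.functional as F
from prophet import Prophet


def single_pass_cluster(embeddings: np.ndarray, theta_sim: float = 0.65) -> list[int]:
    """Assign each item to the nearest cluster centroid, or open a new cluster."""
    centroids, members, labels = [], [], []
    for emb in embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True):
        if centroids:
            sims = np.array([c @ emb for c in centroids])
            best = int(sims.argmax())
            if sims[best] > theta_sim:
                members[best].append(emb)
                c = np.mean(members[best], axis=0)
                centroids[best] = c / np.linalg.norm(c)  # keep centroid unit-norm
                labels.append(best)
                continue
        centroids.append(emb)
        members.append([emb])
        labels.append(len(centroids) - 1)
    return labels


def forecast_next_quarter(freq_series: pd.DataFrame) -> float:
    """Fit Prophet with four binary quarter regressors and forecast one quarter ahead.

    `freq_series` has columns: ds (quarter-end timestamp), y (normalized frequency).
    """
    df = freq_series.copy()
    for q in range(1, 5):
        df[f"q{q}"] = (df["ds"].dt.quarter == q).astype(int)
    model = Prophet(yearly_seasonality=False, weekly_seasonality=False,
                    daily_seasonality=False)
    for q in range(1, 5):
        model.add_regressor(f"q{q}")
    model.fit(df)
    future = model.make_future_dataframe(periods=1, freq="Q")
    for q in range(1, 5):
        future[f"q{q}"] = (future["ds"].dt.quarter == q).astype(int)
    return float(model.predict(future)["yhat"].iloc[-1])


def bounded_weights(forecasts: dict[int, float], lower=0.3, upper=2.0) -> dict[int, float]:
    """A literal reading of Eq. (3): normalize forecasts and clip to [lower, upper]."""
    total = sum(forecasts.values())
    return {t: float(np.clip(p / total, lower, upper)) for t, p in forecasts.items()}


def weighted_ce(logits: torch.Tensor, labels: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Eq. (5): per-instance weighted cross-entropy averaged over the mini-batch."""
    return (w * F.cross_entropy(logits, labels, reduction="none")).mean()
```

In practice the forecasts for all preserved topics would be collected into the dictionary passed to `bounded_weights`, and each training instance would look up the weight of its topic when the weighted loss is computed.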
Here we use the four quarterly datasets from 2020 as the testing sets and conduct experiments on the four sets separately.

## 4.2 Experimental Settings

Compared Methods. We compared our proposed FTT with five existing methods (including the vanilla baseline model), where the second one removes non-generalizable bias and the last three introduce heuristic rules for adapting to future data.

- **Baseline** follows a normal training strategy where all training instances are equally weighted.
- **EANN**$_T$ (Wang et al., 2018) is a model that enhances model generalization across events by introducing an auxiliary adversarial training task to prevent the model from learning event-related features. For fair comparison, we replaced the original TextCNN (Kim, 2014) with a trainable BERT as the textual feature extractor, and utilized publication year labels as the labels for the auxiliary task, following Zhu et al. (2022a). We removed the image branch in EANN as here we focus on text-based fake news detection.
- **Same Period Reweighting** increases the weights of all training instances from the same quarter as the target data. It models the seasonality in the time series data.
- **Previous Period Reweighting** increases the weights of all training instances from the last quarter. It could capture the recency in the data distribution.
- **Combined Reweighting** combines the two reweighting methods mentioned above.

The last three methods are derived from Gaspers et al. (2022).

Implementation Details. We used a BERT model, hfl/chinese-bert-wwm-ext (Cui et al., 2021), implemented in HuggingFace's Transformers package (Wolf et al., 2020) as the baseline fake news detection classifier. In the training process, we used the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 2e-5, adopted the early-stopping training strategy, and reported the testing performance of the best-performing model on the validation set. We employed grid search to find the optimal hyperparameters in each quarter for all methods. In Q1 and Q2, the optimal hyperparameters of FTT are $\theta_{sim} = 0.65$, $\theta_{count} = 30$, $\theta_{mape} = 0.8$, $\theta_{lower} = 0.3$, and $\theta_{upper} = 2.0$; in Q3 and Q4, they are $\theta_{sim} = 0.5$, $\theta_{count} = 30$, $\theta_{mape} = 2.0$, $\theta_{lower} = 0.3$, and $\theta_{upper} = 2.0$. We report the accuracy, macro F1 (macF1), and the F1 score for the real and fake classes (F1real and F1fake).

## 4.3 Performance Comparison (EQ1)

Table 1 shows the overall and quarterly performance of the proposed framework and other methods. We observe that:
| Quarter | Metric | Baseline | EANN$_T$ | Same Period Reweighting | Prev. Period Reweighting | Combined Reweighting | FTT (Ours) |
|---|---|---|---|---|---|---|---|
| Q1 | macF1 | 0.8344 | 0.8334 | 0.8297 | 0.8355 | 0.8312 | 0.8402 |
| | Accuracy | 0.8348 | 0.8348 | 0.8301 | 0.8359 | 0.8315 | 0.8409 |
| | F1fake | 0.8262 | 0.8181 | 0.8218 | 0.8274 | 0.8237 | 0.8295 |
| | F1real | 0.8425 | 0.8487 | 0.8377 | 0.8435 | 0.8387 | 0.8509 |
| Q2 | macF1 | 0.8940 | 0.8932 | 0.8900 | 0.9004 | 0.8964 | 0.9013 |
| | Accuracy | 0.8942 | 0.8934 | 0.8902 | 0.9006 | 0.8966 | 0.9014 |
| | F1fake | 0.8894 | 0.8887 | 0.8852 | 0.8953 | 0.8915 | 0.8981 |
| | F1real | 0.8986 | 0.8978 | 0.8949 | 0.9055 | 0.9013 | 0.9046 |
| Q3 | macF1 | 0.8771 | 0.8699 | 0.8753 | 0.8734 | 0.8697 | 0.8821 |
| | Accuracy | 0.8776 | 0.8707 | 0.8759 | 0.8741 | 0.8707 | 0.8827 |
| | F1fake | 0.8696 | 0.8593 | 0.8670 | 0.8640 | 0.8582 | 0.8743 |
| | F1real | 0.8846 | 0.8805 | 0.8836 | 0.8829 | 0.8812 | 0.8900 |
| Q4 | macF1 | 0.8464 | 0.8646 | 0.8464 | 0.8429 | 0.8412 | 0.8780 |
| | Accuracy | 0.8476 | 0.8647 | 0.8476 | 0.8442 | 0.8425 | 0.8784 |
| | F1fake | 0.8330 | 0.8602 | 0.8330 | 0.8286 | 0.8271 | 0.8707 |
| | F1real | 0.8598 | 0.8690 | 0.8598 | 0.8571 | 0.8553 | 0.8853 |
| Average | macF1 | 0.8630 | 0.8653 | 0.8604 | 0.8631 | 0.8596 | 0.8754 |
| | Accuracy | 0.8636 | 0.8659 | 0.8610 | 0.8637 | 0.8603 | 0.8759 |
| | F1fake | 0.8546 | 0.8566 | 0.8518 | 0.8538 | 0.8501 | 0.8682 |
| | F1real | 0.8714 | 0.8740 | 0.8690 | 0.8723 | 0.8691 | 0.8827 |

Table 1: Performance of the compared methods and FTT on the four quarterly test sets of 2020 and on average.

1) FTT outperforms the baseline and four other methods across all quarters in terms of most of the metrics (the only exception is F1real in Q2). These results demonstrate its effectiveness.

2) The average improvement of F1fake is larger than that of F1real, suggesting that our method helps more in capturing the uniqueness of fake news. We attribute this to the differences in temporal distribution fluctuation: fake news often focuses on specific topics, while real news generally covers more diverse ones. This makes the topic distribution of fake news more stable, which allows for better modeling of topic-wise distributions.

3) The three compared reweighting methods show inconsistent performance. In some situations, their performance is even lower than the baseline (e.g., Same Period Reweighting in Q1). We speculate that the failure is caused by the complexity of the news data. Considering the rapidly-evolving nature of news, single heuristic methods based on recency and seasonality cannot adapt quickly to the future news distribution. In contrast, our FTT performs topic-wise temporal distribution modeling and next-period forecasting and thus has better adaptation ability.

| Subset of the test set | Metric | Baseline | FTT (Ours) |
|---|---|---|---|
| Existing Topics | macF1 | 0.8425 | 0.8658 |
| | Accuracy | 0.8589 | 0.8805 |
| | F1fake | 0.7997 | 0.8293 |
| | F1real | 0.8854 | 0.9023 |
| New Topics | macF1 | 0.8728 | 0.8846 |
| | Accuracy | 0.8729 | 0.8846 |
| | F1fake | 0.8730 | 0.8849 |
| | F1real | 0.8727 | 0.8843 |

Table 2: Performance breakdown on test instances belonging to existing topics and to new topics.

## 4.4 Result Analysis (EQ2)

Statistical Analysis. To analyze how FTT improves fake news detection performance, we analyze the testing instances by recognizing their topics. Specifically, we run the single-pass incremental clustering algorithm used in Step 2 again on the testing instances based on the clusters on the training set.
If a news item in the testing set could be clustered into an existing cluster, it will be recognized as an item of the existing topics; otherwise, it will be in a new topic. Based on the results, we show the breakdown of the performance on the testing set in Table 2. Compared with the baseline, our framework achieves performance improvements on both the Existing Topics and the New Topics subsets. This could be attributed to our reweighting strategy, where we not only increase the weights of news items belonging to a topic with an increasing trend but also decrease the weights of those belonging to fading topics. With such a design, the model will be more familiar with news items in existing topics and more generalizable to news items in new topics.

Case Study. Figure 3 shows three cases from the testing set. According to the forecasted results of the frequencies of these topics in the testing time period, our framework assigns positive weights (greater than 1) to items in these topics. After training on the reweighted set, the detector flips its previously incorrect predictions. In Topic 1, the frequency of Big Tech-related news items demonstrated an increasing trend over time. FTT captures this pattern and provides a forecast close to the true value for the target quarter. In Topic 2, there is an explosive growth of Infectious Diseases-related news items in early 2020, followed by sustained high frequency in the subsequent quarters. FTT successfully captures this change. In contrast to the other two topics, the frequency of Medication Safety-related news items in Topic 3 exhibits both an overall increasing trend and a certain periodic pattern since 2019, which roughly follows a "smiling curve" from Q1 to Q4 in a single year. FTT effectively models both of these patterns and helps identify the importance of news items in this topic for the testing time period.

## 5 Conclusion And Future Work

We studied temporal generalization in fake news detection, where a model is trained with previous news data but required to generalize well on the upcoming news data. Based on the assumption that the appearance of news events on the same topic presents diverse temporal patterns, we designed a framework named FTT to capture such patterns and forecast the temporal trends at the topic level. The forecasts guided instance reweighting to improve the model's generalizability. Experiments demonstrate the superiority of our framework. In the future, we plan to mine more diverse temporal patterns to further improve fake news detection in real-world temporal scenarios.

## Limitations

We identify the following limitations in our work: First, our FTT framework captures and models topic-level temporal patterns for forecasting temporal trends. Though the forecasts bring better temporal generalizability, FTT could hardly forecast the emergence of events in new topics. Second, FTT considers temporal patterns based on the topic-wise frequency sequences to identify patterns such as decrease, periodicity, and approximate stationarity. There might be diverse patterns that cannot be reflected by frequency sequences. Third, limited by the scarcity of datasets that satisfy our evaluation requirements (consecutive time periods with a consistent data collection criterion), we only performed the experiments on a Chinese text-only dataset. Our method should be further examined on datasets of other languages and multi-modal ones.
## Acknowledgements The authors thank anonymous reviewers for their insightful comments. This work was supported by the National Natural Science Foundation of China (62203425), the Zhejiang Provincial Key Research and Development Program of China (2021C01164), the Project of Chinese Academy of Sciences (E141020), and the Innovation Funding from Institute of Computing Technology, Chinese Academy of Sciences (E161020). ## References Oluwaseun Ajao, Deepayan Bhowmik, and Shahrzad Zargari. 2019. Sentiment aware fake news detection on online social networks. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2507–2511. IEEE. Yuyan Bu, Qiang Sheng, Juan Cao, Peng Qi, Danding Wang, and Jintao Li. 2023. Combating online misinformation videos: Characterization, detection, and future directions. *arXiv preprint arXiv:2302.03242*. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for Chinese BERT. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3504–3514. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. ACL. Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, and Chongjun Wang. 2021. AdaRNN: Adaptive learning and forecasting for time series. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management, pages 402–411. ACM. Judith Gaspers, Anoop Kumar, Greg Ver Steeg, and Aram Galstyan. 2022. Temporal generalization for spoken language understanding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, pages 37–44. ACL. Bilal Ghanem, Simone Paolo Ponzetto, Paolo Rosso, and Francisco Rangel. 2021. FakeFlow: Fake news detection by modeling the flow of affective information. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 679–689. ACL. Andrew C Harvey and Simon Peters. 1990. Estimation procedures for structural time series models. *Journal* of forecasting, 9(2):89–108. Xiaolei Huang and Michael J. Paul. 2019. Neural temporality adaptation for document classification: Diachronic word embeddings and domain adaptation models. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4113–4123. ACL. Hamid Karimi and Jiliang Tang. 2019. Learning hierarchical discourse-level structure for fake news detection. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3432–3442. ACL. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751. ACL. Elena Kochkina, Maria Liakata, and Arkaitz Zubiaga. 2018. All-in-one: Multi-task learning for rumour verification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3402–3413. ACL. Hongzhan Lin, Jing Ma, Liangliang Chen, Zhiwei Yang, Mingfei Cheng, and Chen Guang. 2022. 
Detect rumors in microblog posts for low-resource domains via adversarial contrastive learning. In *Findings* of the Association for Computational Linguistics: NAACL 2022, pages 2543–2556. ACL. Erxue Min, Yu Rong, Yatao Bian, Tingyang Xu, Peilin Zhao, Junzhou Huang, and Sophia Ananiadou. 2022. Divide-and-conquer: Post-user interaction network for fake news detection on social media. In *Proceedings of the ACM Web Conference 2022*, pages 1148–1158. ACM. Ahmadreza Mosallanezhad, Mansooreh Karami, Kai Shu, Michelle V. Mancenido, and Huan Liu. 2022. Domain adaptive fake news detection via reinforcement learning. In *Proceedings of the ACM Web Conference 2022*, page 3632–3640. ACM. Yida Mu, Kalina Bontcheva, and Nikolaos Aletras. 2023. It's about time: Rethinking evaluation on rumor detection benchmarks using chronological splits. In Findings of the Association for Computational Linguistics: EACL 2023, pages 736–743. ACL. Qiong Nan, Juan Cao, Yongchun Zhu, Yanyan Wang, and Jintao Li. 2021. MDFEND: Multi-domain fake news detection. In *Proceedings of the 30th ACM* International Conference on Information and Knowledge Management. ACM. Qiong Nan, Danding Wang, Yongchun Zhu, Qiang Sheng, Yuhui Shi, Juan Cao, and Jintao Li. 2022. Improving fake news detection of influential domain via domain- and instance-level transfer. In *Proceedings* of the 29th International Conference on Computational Linguistics, pages 2834–2848. ICCL. Van-Hoang Nguyen, Kazunari Sugiyama, Preslav Nakov, and Min-Yen Kan. 2020. FANG: Leveraging social context for fake news detection using graph representation. In *Proceedings of the 29th ACM International Conference on Information and Knowledge* Management, pages 1165–1174. ACM. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In *International* Conference on Learning Representations. Piotr Przybyla. 2020. Capturing the style of fake news. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 490–497. AAAI Press. Peng Qi, Juan Cao, Tianyun Yang, Junbo Guo, and Jintao Li. 2019. Exploiting multi-domain visual information for fake news detection. In *2019 IEEE International Conference on Data Mining*, pages 518–527. IEEE. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using siamese bertnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992. ACL. Shruti Rijhwani and Daniel Preotiuc-Pietro. 2020. Temporally-informed analysis of named entity recognition. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 7605–7617. ACL. Yoel Roth. 2022. The vast majority of content we take action on for misinformation is identified proactively. https://twitter.com/yoyoel/status/ 1483094057471524867. Shaden Shaar, Nikolay Babulkov, Giovanni Da San Martino, and Preslav Nakov. 2020. That is a known lie: Detecting previously fact-checked claims. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 3607–3618. ACL. Qiang Sheng, Juan Cao, Xueyao Zhang, Rundong Li, Danding Wang, and Yongchun Zhu. 2022. Zoom out and observe: News environment perception for fake news detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4543–4556. ACL. Qiang Sheng, Xueyao Zhang, Juan Cao, and Lei Zhong. 2021. 
Integrating pattern- and fact-based fake news detection via model preference learning. In *Proceedings of the 30th ACM International Conference on Information and Knowledge Management*, pages 1640– 1650. ACM. Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019. dEFEND: Explainable Fake News Detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 395–405. ACM. Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, and Huan Liu. 2020. FakeNewsNet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media. *Big data*, 8(3):171–188. Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. 2017. Fake news detection on social media: A data mining perspective. *ACM SIGKDD Explorations Newsletter*, 19(1):22–36. Amila Silva, Yi Han, Ling Luo, Shanika Karunasekera, and Christopher Leckie. 2021a. Propagation2Vec: Embedding partial propagation networks for explainable fake news early detection. *Information Processing & Management*, 58(5):102618. Amila Silva, Ling Luo, Shanika Karunasekera, and Christopher Leckie. 2021b. Embracing domain differences in fake news: Cross-domain fake news detection using multi-modal data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 557–565. AAAI Press. Sean J Taylor and Benjamin Letham. 2018. Forecasting at scale. *The American Statistician*, 72(1):37–45. Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. 2022. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering. Yaqing Wang, Fenglong Ma, Zhiwei Jin, Ye Yuan, Guangxu Xun, Kishlay Jha, Lu Su, and Jing Gao. 2018. EANN: Event adversarial neural networks for multi-modal fake news detection. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 849– 857. ACM. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45. ACL. Xueyao Zhang, Juan Cao, Xirong Li, Qiang Sheng, Lei Zhong, and Kai Shu. 2021. Mining dual emotion for fake news detection. In Proceedings of the Web Conference 2021, pages 3465–3476. ACM. Xinyi Zhou and Reza Zafarani. 2019. Networkbased fake news detection: A pattern-driven approach. *ACM SIGKDD Explorations Newsletter*, 21(2):48–60. Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, and Fuzhen Zhuang. 2022a. Generalizing to the future: Mitigating entity bias in fake news detection. In *Proceedings of the 45th International ACM SIGIR Conference on Research and* Development in Information Retrieval, pages 2120– 2125. ACM. Yongchun Zhu, Qiang Sheng, Juan Cao, Qiong Nan, Kai Shu, Minghui Wu, Jindong Wang, and Fuzhen Zhuang. 2022b. Memory-guided multi-view multidomain fake news detection. IEEE Transactions on Knowledge and Data Engineering, pages 1–14.
ricatte-crisostomi-2023-aven
{AVEN}-{GR}: Attribute Value Extraction and Normalization using product {GR}aphs
https://aclanthology.org/2023.acl-industry.14
Getting a good understanding of the user intent is vital for e-commerce applications to surface the right product to a given customer query. Query Understanding (QU) systems are essential for this purpose, and many e-commerce providers are working on complex solutions that need to be data efficient and able to capture early emerging market trends. Query Attribute Understanding (QAU) is a sub-component of QU that involves extracting named attributes from user queries and linking them to existing e-commerce entities such as brand, material, color, etc. While extracting named entities from text has been extensively explored in the literature, QAU requires specific attention due to the nature of the queries, which are often short, noisy, ambiguous, and constantly evolving. This paper makes three contributions to QAU. First, we propose a novel end-to-end approach that jointly solves Named Entity Recognition (NER) and Entity Linking (NEL) and enables open-world reasoning for QAU. Second, we introduce a novel method for utilizing product graphs to enhance the representation of query entities. Finally, we present a new dataset constructed from public sources that can be used to evaluate the performance of future QAU systems.
# Aven-Gr: Attribute Value Extraction And Normalization Using Product Graphs Donato Crisostomi (∗) Amazon Sapienza University Of Rome crisostomi@di.uniroma1.it ## Abstract Getting a good understanding of the user intent is vital for e-commerce applications to surface the right product to a given customer query. Query Understanding (QU) systems are essential for this purpose, and many e-commerce providers are working on complex solutions that need to be data efficient and able to capture early emerging market trends. Query Attribute Understanding (QAU) is a sub-component of QU that involves extracting named attributes from user queries and linking them to existing e-commerce entities such as brand, material, color, etc. While extracting named entities from text has been extensively explored in the literature, QAU requires specific attention due to the nature of the queries, which are often short, noisy, ambiguous, and constantly evolving. This paper makes three contributions to QAU. First, we propose a novel end-to-end approach that jointly solves Named Entity Recognition (NER) and Entity Linking (NEL) and enables open-world reasoning for QAU. Second, we introduce a novel method for utilizing product graphs to enhance the representation of query entities. Finally, we present a new dataset constructed from public sources that can be used to evaluate the performance of future QAU systems. ## 1 Introduction Search queries are the main point of interaction between the customer and the search system. As such, extracting information from the queries is pivotal in surfacing the relevant products, making the task directly responsible for the quality of the overall customer experience. Query Understanding (QU) not only inherits all the challenges of standard natural language understanding but poses additional difficulties: queries are short and lack context, which makes them challenging to understand. They often contain implicit knowledge that Thomas Ricatte (∗) Amazon tricatte@amazon.com is difficult to capture without external reference. For example, the query "M2 laptop" refers to Apple laptops since M2 processors are only sold by Apple. Furthermore, customers do not have technical writing skills, which can result in queries that are noisy or use inappropriate search terms. In this work, we focus on the task of Query Attribute Understanding (QAU), which aims to extract the attribute values from the queries and make them usable for other downstream applications in the Search Engine (see fig. 1). QAU is related to another important task, Document Attribute Understanding (DAU), which aims to extract attributes from product descriptions. DAU has received significant attention from the community in the past years ((Zheng et al., 2018; Xu et al., 2019; Dong et al., 2020; Karamanolakis et al., 2020)) and does not suffer from the difficulties mentioned above and that are specific to queries. Both QAU and DAU are specific instances of Named Entity Recognition and Linking (NER/NEL), which aims to extract typed mentions from text. However, in contrast to classic NER, which usually handles fewer attribute types (such as Person, Location, and Organization), QAU and DAU deal with a larger number of attribute types (which can reach thousands in e-commerce as noted in (Xu et al., 2019)). We claim that three critical elements need to be addressed to get a practical solution to QAU. 
Firstly, named entity recognition should be performed jointly with entity linking, in order to map the detected entities to our knowledge base. Solving these tasks separately is not practical in an industrial context, as it leads to error propagation (the linking module cannot make up for a wrong attribute prediction by the NER module) and, more generally, hidden technical debt (see Sculley et al., 2015). Furthermore, separating the tasks precludes the possibility of inductive transfer, which has been shown to be crucial in related tasks (Zhang and Yang, 2021; Caruana, 1997; Ruder, 2017).

∗ These authors contributed equally to this work.

Secondly, product graphs (PG) are becoming a new standard to represent e-commerce concepts and the relations between searchable products. Therefore, QAU systems should be able to leverage this new source of knowledge to improve their performance. Finally, QAU systems should always be designed with an "open-world" setup in mind to deal dynamically with new concepts. For instance, if we consider the query '*Sony A95K TV*', we should be able to detect that '*A95K*' is a mention representing a product line even if this product does not exist in our knowledge base. Note that extreme classification (Jain et al., 2016) is a possible alternative to classic NER/NEL stacking, but it does not consider the coarse-grained nature of attributes (entities belong to different attribute types) and does not easily take into account the open-world nature of the task. Users can search for attribute values that are not yet in the knowledge base or not associated with any known product, making it difficult to predict normalized attribute values directly.

## Overview Of Our Approach

To overcome the aforementioned limitations of existing approaches, we propose an *end-to-end multitask approach* that jointly predicts mentions, attribute types and entities. We build a shared representation of the text spans via a pre-trained transformer architecture (Liu et al., 2019). The shared span representation is used to determine the probability of the span being a mention, containing a particular attribute type, and representing a specific entity instance of that attribute. Our method can handle an open-world scenario where an attribute value does not have a matching entity in the knowledge base. In such cases, the model can still predict the attribute type of the value. Note that this approach is also data-efficient and can effectively utilize weakly labeled data points without entity annotations. Importantly, the end-to-end approach avoids error propagation since the entity-level prediction is conditioned but not solely reliant on the attribute-level information. Additionally, our approach can handle overlapping spans without requiring additional adjustments. Finally, if the entities are structured in a knowledge graph, our approach can leverage its topology to enrich the entity embeddings.

In order for our approach to be tested in scenarios with varying difficulty, we need a dataset of queries of controllable complexity, along with a knowledge graph involving the entities mentioned therein. To this end, we propose leveraging the products in the *Amazon Berkeley Objects* dataset (Collins et al., 2022) to construct a knowledge graph consisting of products related to their attribute values by relations encoding the attribute type.
The product graph is used both as a knowledge base for the approach and as a starting artifact to generate a dataset of public synthetic queries. As public, non-confidential resources, we aim to release both artifacts for reproducibility and to encourage research in the field. Summarizing, our contributions are three-fold:

1. we propose AVEN, a novel end-to-end method that can effectively solve QAU in an open-world setting;
2. we propose a way to use a Product Graph to enrich the representation of the entities;
3. we present a novel evaluation that combines a public product graph with a set of synthetic queries involving associated entities, aimed at promoting research on knowledge-based methods for QAU.

## 2 Related Work

Document Attribute Understanding As previously noted, Query Attribute Understanding (QAU) shares similarities with Document Attribute Understanding (DAU), which has been previously addressed in the literature. Zheng et al. (2018) proposed an early solution based on a classic NER pipeline that assigns each attribute type a set of BIO (Beginning, Inside, Outside) tags. However, this approach suffers from scalability issues when dealing with a large number of attributes, and also hinders data sharing between head attributes (such as color) and tail attributes (such as glass color). To solve this issue, several approaches (Xu et al., 2019; Dong et al., 2020) based on Question Answering were proposed in the subsequent years. These approaches consider each attribute as a separate question to be answered leveraging the product description. The main advantage of Question Answering approaches is that they do not require a specific set of BIO tags for each attribute and are therefore more scalable. However, they are also harder to train and highly depend on the semantic representations of the attribute types. In practical cases in which the detected entity mentions must also be linked to normalized entities, Entity Linking is performed independently over the output of the NER step. While all these works consider an attribute value to be just a span of unstructured text, we aim to directly obtain normalized entities as attribute values, hence requiring performing Entity Linking over the detected spans.

Entity Linking Entity Linking has been mostly studied in scenarios involving long documents with a lot of context, while only a few works exist for short sentences like queries. Most relevant to our work is ELQ (Li et al., 2020), in which a bi-encoder is employed to jointly perform mention detection and EL in a multi-task setup. Analogously, in Oliya et al. (2021) mention detection and entity linking are coupled with question answering in an end-to-end pipeline. We take inspiration from both works to tackle AVEN by injecting a new stage in the end-to-end mention detection and entity linking pipeline, responsible for classifying the span attribute.

Query Attribute Understanding While it may be tempting to view Query Attribute Understanding (QAU) as a simplified version of Document Attribute Understanding (DAU), this assumption overlooks the unique challenges posed by queries, such as their inherent noisiness, lack of context, and ambiguity. To the best of our knowledge, the only existing work that deals with both attribute value extraction and subsequent entity linking is QUEACO (Zhang et al., 2021). Differently from our approach, QUEACO is a fragmented model that stacks a user-behavior-based normalization module over a NER pipeline.
While we use user behavior in the data collection, we don't require it for the training and inference pipelines. ## 3 Data In order to have a controlled ground for experimentation, we need (i) a dataset of user queries, and (ii) a Knowledge Graph containing most entities involved in the user queries. Knowledge Graphs involving products and relative information are usually called product graphs. ## 3.1 Product Graphs A Product Graph is a Knowledge Graph involving a set of products and their corresponding attributes. Formally, it is a bipartite graph consisting of a vertex set V = (P ∪ A) containing products P and attribute values A connected by edges E = R1 ∪ R2 *∪ · · · ∪* Rm, where R1, R2*, . . . , R*m are set of edges for the different m attribute types. In practice, a triple (*p, r, a*) relates a product p with an attribute value a through an attribute relation r. ## 3.2 Synthetic Data Given the lack of a public Product Graph, we constructed one by leveraging the Amazon Berkeley Objects (ABO)1 dataset (Collins et al., 2022). The constructed graph not only lends itself to the overall inference pipeline, but can also be used to generate a set of synthetic queries that involve the entities of interest by construction. The generation procedure simply constructs queries as bag of attributes by starting from product nodes and walking the relations related to the attributes of interest, then discarding the product node in the final query and only keeping its attribute values along with the attribute type annotations. The generation pipeline is formalized in appendix C. To increase the complexity of the dataset, we also replace product types with synonyms found in the same *WordNet* synsets (Fellbaum, 1998). ## 3.3 Real User Queries Given the huge number of possible attribute values, manual annotation of user queries with attribute and entity-level labels is unfeasible. For this reason, we leverage a pre-trained NER model to obtain the attribute-level labels and employ a deterministic heuristic to label the corresponding attribute values with entity-level annotations. Let P be a set of purchased items, and Q be the queries that led to the purchase. First, we create a Product Graph P G from P by creating a triple (*p, r, a*) for each product p connected to an attribute value a through attribute type r. Then, for each query q ∈ Q, we iterate over each NER-annotated span (*r, v, s*), where span s holds value v for attribute type r. We now want to annotate the span s with two annotations, one at the attribute level and one at the entity level. For the former, we can keep the one detected from NER r. For the entity-level annotation instead, we choose to annotate s with the entity a such that (p, r, a) ∈ P G. In other words, given that NER has predicted the span to refer to an attribute type r, we annotate the span with the entity corresponding to the attribute value for r of the product that the user bought after searching for the query q. Assume for instance that an user looked for '*red Nike shoes*' and eventually bought some product p referring to a specific pair of shoes that are, in fact, red. In this case, the span s0,1 with value 'red' can be annotated to be a color as predicted by NER, while the entity label will be that of the value for p for the attribute color, which is the node corresponding to the value 'red' in the knowledge base. 
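For concreteness, the sketch below renders this weak-labeling heuristic in Python. The data structures (`product_graph`, `purchases`, `ner_spans`) and all field names are hypothetical stand-ins for the internal purchase logs and the pre-trained NER output, not the production pipeline.

```python
from collections import defaultdict

# Product graph as a mapping: product -> {attribute_type: attribute_value}.
product_graph = {
    "B000SHOE42": {"color": "red", "brand": "Nike", "product_type": "shoes"},
}

# Each purchase links a query to the product the user eventually bought.
purchases = [("red nike shoes", "B000SHOE42")]

# NER output: for each query, a list of (attribute_type, value, (start, end)) tuples.
ner_spans = {
    "red nike shoes": [("color", "red", (0, 1)), ("brand", "nike", (1, 2))],
}

def weak_label(purchases, ner_spans, product_graph):
    """Attach attribute- and entity-level labels to NER-detected spans."""
    labeled = defaultdict(list)
    for query, product in purchases:
        attributes = product_graph.get(product, {})
        for attr_type, value, span in ner_spans.get(query, []):
            # Attribute-level label: keep the type predicted by NER.
            # Entity-level label: the purchased product's value for that type,
            # i.e. the node a such that (p, r, a) is in the product graph.
            entity = attributes.get(attr_type)
            if entity is not None:
                labeled[query].append(
                    {"span": span, "attribute": attr_type, "entity": entity})
    return labeled

print(dict(weak_label(purchases, ner_spans, product_graph)))
```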
Of course, the user may also have eventually bought a black pair of shoes instead: in this case, the heuristic makes a mistake, and therefore the annotation is expected to be noisy. Nevertheless, assuming the query keywords to encode strong preferences when present, these cases are expected to be rare enough for the model to eventually learn to discard them as noise. ## 4 Approach The overall architecture of AVEN contains three different sub-modules, each responsible for a different task: (i) a mention detection module; (ii) an attribute classification module; (iii) an entity disambiguation module. The three modules are learnt jointly as shown in fig. 2 and each of them contributes to the final loss. The latter is obtained as a weighted sum of the three losses. While the coefficients are currently set to 1 for all the three tasks, we aim to eventually use GradNorm (Chen et al., 2018) to tune the loss weights. More formally, let us define q = q1*, . . . , q*n as an input query with n tokens/words. We denote by s[i,j]the sub-span qiqi+1 *. . . q*j . We are interested in three different quantities: Mij refers to span s[i,j] being a mention, Aa ij refers to the same span being an attribute value for attribute a, and finally Ee ij refers to s[i,j] being an instance of entity e. In the next sections, we will review the three different components. ## Mention Detection For a span s[i,j], we denote the span embedding by sij = fθ(s[i,j]). A simple version of fθ(s[i,j]) is the mean of the RoBERTa (Liu et al., 2019) embeddings of the tokens in s[i,j]. We can define the probability of span s[i,j] being an actual mention to be ## P(Mij ) = Σ (Gµ(Sij )) , where σ is the sigmoid function and gµ(·) is a parametric function taking in input the span representation and returning an unnormalized score. In our current implementation, this is realized as a MultiLayer Perceptron (MLP). Note that we employ the sigmoid as we assume that the probability of a span s[i,j]to be a mention does not depend on the probability of another span s[k,l]to be a mention. Note that, this assumption is questionable, especially as soon as s[i,j] and s[k,l] have a non-null intersection. Nevertheless, this choice allows the model to detect overlapping spans when faced with cases such as those exemplified in section 1. Note that it's always possible to add a *Non-Maximum Suppression* (NMS) step if we want to avoid producing overlapping annotations. The mention detector is trained by minimizing a *Binary Cross Entropy* loss ℓMD. ## Attribute Classification We are now interested in the probability that a span s[i,j] has attribute type a knowing that it is a mention $$\mathrm{P}\left(A_{i,j}^{(a)}\mid M_{i j}\right)=\frac{\exp\left(h_{\nu}^{(a)}(\mathbf{s}_{i,j})\right)}{\sum_{a^{\prime}\in A}\exp\left(h_{\nu}^{(a^{\prime})}(\mathbf{s}_{i,j})\right)},$$ where h (·) ν () is a parametric function taking into input the span representation. As for the mention detector, we employ a MLP. Note that we adopt a multi-task approach where we use the exact same span representation for the three different tasks, fostering information transfer among the latter. The attribute classifier is trained with a simple cross entropy loss ℓAC and only considers actual groundtruth mentions at train time. ## Entity Disambiguation In the entity disambiguation module, our goal is to estimate the probability P E (e) ij | Mij. 
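Before the entity head is derived, the following minimal sketch illustrates the shared span representation fθ and the two heads introduced so far, gμ and hν, in PyTorch-style code; the mean-pooling choice, MLP sizes, and all names are illustrative assumptions rather than the exact implementation. The entity head described next scores the same span representation against entity embeddings.

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Shared span representation feeding the mention and attribute heads."""

    def __init__(self, hidden_dim: int, num_attributes: int):
        super().__init__()
        # g_mu: unnormalized mention score for a span.
        self.mention_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))
        # h_nu: one logit per attribute type a.
        self.attribute_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_attributes))

    @staticmethod
    def span_embedding(token_embeddings: torch.Tensor, i: int, j: int) -> torch.Tensor:
        # f_theta(s_[i,j]): mean of the contextual token embeddings in the span.
        return token_embeddings[i:j + 1].mean(dim=0)

    def forward(self, span_repr: torch.Tensor):
        # P(M_ij): sigmoid, so overlapping spans can both be scored as mentions.
        p_mention = torch.sigmoid(self.mention_head(span_repr)).squeeze(-1)
        # P(A^(a)_ij | M_ij): softmax over attribute types.
        p_attribute = torch.softmax(self.attribute_head(span_repr), dim=-1)
        return p_mention, p_attribute

# Toy usage: a 5-token query encoded into 768-d (e.g. RoBERTa) embeddings.
tokens = torch.randn(5, 768)
scorer = SpanScorer(hidden_dim=768, num_attributes=10)
s_02 = SpanScorer.span_embedding(tokens, 0, 2)
p_m, p_a = scorer(s_02)
```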
Given the ![4_image_0.png](4_image_0.png) fact that each entity e is associated with an unique attribute type a = *type* (e), we can argue that this is actually equivalent to estimating the joint probability P E (e) ij , A(a) ij | MijSince the probability of a span to be type a is already given by the attribute classifier, we can just estimate for each possible attribute a $$\mathrm{P}\left(E_{i j}^{(e)}\mid A_{i j}^{(a)},M_{i j}\right)=\frac{\exp\left(v_{\xi}^{(a)}(\mathbf{s}_{i,j},\mathbf{e})\right)}{\sum_{e^{\prime}\in E}\exp\left(v_{\xi}^{(a)}(\mathbf{s}_{i,j},\mathbf{e}^{\prime})\right)}$$ , where v (a) ξ(·) is a parametric function taking into input the span representation and the entity e to be scored. The main advantage of this last expression is that it allows us to adopt a *divide-and-conquer* approach since for each attribute a, we only have to consider its compatible entities. Similarly to the attribute classifier, the entity disambiguator is learnt with a simple cross entropy loss ℓED on actual groundtruth mentions. Our first implementation of v (a) ξ(·) is a simple similarity scorer between the span representation and the embedding of the considered entity. Entity embeddings are computed by embedding a textual representation of their neighborhood in the knowledge graph, as illustrated in Figure 3. ## Inference To compute the probability of each span s[i,j] being a mention of entity e at inference time, we simply multiply the mention probability by the entity classification score. To improve efficiency, we exclude all spans s[i,j] with a mention probability P(Mij ) lower than a pre-defined threshold pmin, such as 0.5 in our experiments. ## Advantages Our approach shares the span representation across all three tasks: mention detection, attribute classification, and entity disambiguation, benefiting from the effectiveness of multi-task learning (Caruana, 1997; Ruder, 2017) in transferring knowledge between similar tasks. This is particularly relevant for our method as the tasks require different levels of label details: mention detection only requires weak labeling, while the attribute/entity tasks rely on associations between mentions and knowledge graph entities. Sharing the representation allows the entity disambiguation module to leverage weakly-labeled mention data, leading to a more data-efficient approach. ## 5 Experimental Results In this section, we present experimental results on two datasets described in section 3.2 and section 3.3. We provide a brief overview of the protocol used in both cases. ## 5.1 Considered Metrics Mention Detection We report both (micro) Precision and Recall for the mention detection task to validate the performance of the mention detector. The percentage of recalled mention will be a natural upper bound for the following metrics on attribute classification. Indeed, if we are not able to retrieve a mention, we will consider that we cannot be right at the subsequent tasks. ## Attribute Classification We report the multiclass Accuracy for the attribute classification task; This metric is computed on the set of ground-truths mentions and thus ignoring ![5_image_0.png](5_image_0.png) wrongly detected mentions (for which no attribute exists). We also present a complementary version of this metric, which focuses exclusively on groundtruth mentions that contain previously unseen "unknown" entities. This metric is only applicable to the second, more realistic dataset that includes novel entities in the test set. 
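A minimal sketch of how these two metrics can be computed under the stated conventions is given below; spans are represented as (start, end) pairs, and the data layout is an assumption made for illustration only.

```python
def mention_precision_recall(predicted_spans, gold_spans):
    """Micro precision/recall over exact span matches."""
    predicted, gold = set(predicted_spans), set(gold_spans)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

def attribute_accuracy(predicted_attributes, gold_attributes):
    """Accuracy restricted to ground-truth mentions: a gold span the mention
    detector missed counts as wrong, so mention recall upper-bounds this metric;
    wrongly detected mentions (with no gold attribute) are ignored."""
    correct = sum(1 for span, attr in gold_attributes.items()
                  if predicted_attributes.get(span) == attr)
    return correct / len(gold_attributes) if gold_attributes else 0.0

# Toy example.
print(mention_precision_recall([(0, 1), (2, 4)], [(0, 1), (2, 3)]))   # (0.5, 0.5)
print(attribute_accuracy({(0, 1): "color"},
                         {(0, 1): "color", (2, 3): "brand"}))          # 0.5
```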
## Entity Disambiguation We report the multiclass Accuracy for the entity disambiguation task; This one is computed only on the subset of ground-truth mentions containing entities seen at train time. ## 5.2 Baselines We consider the following models (i) **NER+Dict**: A RoBERTa-based NER baseline with dictionary lookup over the detected attributes. (ii) **NER+NN**: A RoBERTa-based NER baseline with nearest neighbor between detected attribute embeddings and entity embeddings. (iii) AVEN/AVEN-NC/**AVEN-GR**: Our end– to-end approach in three different flavours: with plain entity embedding, with plain entity embedding and no contextual span embedding and with product-graph based embeddings. ## 5.3 Results We report in fig. 4 (resp. fig. 5) the results from the synthetic dataset described in section 3.2 (resp. the actual user queries described in section 3.3). Overall, our AVEN- methods outperform the "stacked" methods (NER + separate entity linker), particularly on the task of known entity classification. Among our methods, AVEN-NC has a lower mention recall due to the lack of contextual span embedding. However, our methods are effective in predicting the attribute type of unseen entities, as | Model | Mention | Attribute | Entity | | |-----------|-----------|-------------|----------|------| | Precision | Recall | Accuracy | Accuracy | | | NER+Dict | 98.5 | 97.9 | 97.6 | 68.3 | | NER+NN | 98.1 | 97.7 | 97.5 | 64.3 | | AVEN | 95.2 | 93.5 | 93.3 | 83.2 | | AVEN-NC | 69.8 | 96.2 | 95.3 | 89.5 | | AVEN-GR | 97.8 | 97.4 | 97.2 | 76.3 | Figure 4: Results on synthetic data (see section 3.2) | Model | Mention | Attribute | Entity | | | |-----------|-----------|-------------|---------------|------|------| | Precision | Recall | Acc. | Acc. (unseen) | Acc. | | | NER+Dict | 89.9 | 93.6 | 93.2 | 88.1 | 81.5 | | NER+NN | 91.6 | 92.5 | 92.3 | 86.6 | 81.9 | | AVEN | 96.3 | 94.0 | 90.2 | 89.4 | 93.0 | | AVEN-NC | 88.2 | 93.8 | 91.5 | 82.4 | 95.3 | | AVEN-GR | 96.0 | 95.4 | 93.0 | 89.7 | 93.9 | evidenced by their performance on this task. It is worth noting that the attribute classification performance is lower for unseen attributes, which is expected. ## 6 Conclusions And Future Directions In this paper, we introduced a novel approach to tackle QAU in a multi-task fashion. We demonstrated its effectiveness on two datasets, compared to some simple baselines. However, further ablation studies on more datasets / baselines (e.g. Ayoola et al. 2022) are necessary to assess its generalization power. Additionally, future work will focus on improving the multitasking efficiency of AVEN, for instance by implementing (Chen et al., 2018). ## References Tom Ayoola, Shubhi Tyagi, Joseph Fisher, Christos Christodoulopoulos, and Andrea Pierleoni. 2022. ReFinED: An efficient zero-shot-capable approach to end-to-end entity linking. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, pages 209– 220, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics. Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41–75. Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. 2018. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pages 794–803. PMLR. 
Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F Yago Vicente, Thomas Dideriksen, Himanshu Arora, Matthieu Guillaumin, and Jitendra Malik. 2022. Abo: Dataset and benchmarks for real-world 3d object understanding. *CVPR*. Xin Luna Dong, Xiang He, Andrey Kan, Xian Li, Yan Liang, Jun Ma, Yifan Ethan Xu, Chenwei Zhang, Tong Zhao, Gabriel Blanco Saldana, et al. 2020. Autoknow: Self-driving knowledge collection for products of thousands of types. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2724–2734. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. Himanshu Jain, Yashoteja Prabhu, and Manik Varma. 2016. Extreme multi-label loss functions for recommendation, tagging, ranking & other missing label applications. In *Proceedings of the 22nd ACM SIGKDD* International Conference on Knowledge Discovery and Data Mining, KDD '16, page 935–944, New York, NY, USA. Association for Computing Machinery. Giannis Karamanolakis, Jun Ma, and Xin Luna Dong. 2020. Txtract: Taxonomy-aware knowledge extraction for thousands of product categories. arXiv preprint arXiv:2004.13852. Belinda Z Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad, and Wen-tau Yih. 2020. Efficient onepass end-to-end entity linking for questions. arXiv preprint arXiv:2010.02413. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Armin Oliya, Amir Saffari, Priyanka Sen, and Tom Ayoola. 2021. End-to-end entity resolution and question answering using differentiable knowledge graphs. arXiv preprint arXiv:2109.05817. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. David Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, and Dan Dennison. 2015. Hidden technical debt in machine learning systems. *Advances in neural information processing systems*, 28. Huimin Xu, Wenting Wang, Xinnian Mao, Xinyu Jiang, and Man Lan. 2019. Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 5214– 5223. Danqing Zhang, Zheng Li, Tianyu Cao, Chen Luo, Tony Wu, Hanqing Lu, Yiwei Song, Bing Yin, Tuo Zhao, and Qiang Yang. 2021. Queaco: Borrowing treasures from weakly-labeled behavior data for query attribute value extraction. In *Proceedings of the 30th ACM International Conference on Information & Knowledge* Management, pages 4362–4372. Yu Zhang and Qiang Yang. 2021. A survey on multitask learning. IEEE Transactions on Knowledge and Data Engineering, pages 1–1. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pages 1049–1058. ## A Limitations Despite the promising results achieved by our approach, some limitations must be acknowledged. First, the use of product graphs as a knowledge source is a double-edged sword. 
Indeed, while it provides a valuable resource to exploit, the constant evolution of product graphs may create a strong coupling between the algorithm and the knowledge source, thus reducing the method's robustness over time. Second, our method's span-based approach makes it computationally expensive, requiring a maximum span size to be set to circumvent this issue.

## B Ethics Statement

Our approach aims to boost the effectiveness of e-commerce search engines. However, by jointly optimizing multiple tasks, we run the risk of creating a less transparent system that could be susceptible to biases. These biases may lead to certain less frequent entities being overlooked or misclassified as more common ones, thereby reducing the overall fairness and accuracy of the system.

## C Synthetic Queries Generation

Algorithm 1 outlines the synthetic query generation procedure; a runnable Python rendering is sketched below, after Appendix D.

Algorithm 1 Synthetic queries generation.

1: **procedure** GENERATEQUERIES(pg: ProductGraph)
2: P ← pg.products
3: Acons ← considered attribute types
4: Q ← [] ▷ queries
5: **for all** product p in P do
6: Ap ← []; Rp ← [] ▷ attribute values and types for the product
7: T ← all triples (p, ∗, ∗) in pg
8: **for all** triple (p, r, a) in T do
9: if r in Acons **then**
10: Ap ← Ap ∪ a ▷ attribute values
11: Rp ← Rp ∪ r ▷ attribute types
12: **end if**
13: **end for**
14: shuffle Ap and Rp accordingly
15: qtext = str(Ap) ▷ query is a bag of attribute values
16: qann = Rp ▷ annotations
17: Q ← Q ∪ {(qtext, qann)}
18: **end for**
19: **return** Q
20: **end procedure**

## D Prediction Inspection

We present in fig. 6 an example of our qualitative evaluation within the QAU framework we have presented.

![7_image_0.png](7_image_0.png)
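For convenience, a runnable Python rendering of Algorithm 1 (Appendix C) is sketched below. The `ProductGraph` stand-in and its interface are assumptions made purely for illustration, and the WordNet synonym-replacement step is omitted.

```python
import random
from collections import defaultdict

class ProductGraph:
    """Tiny stand-in for the ABO-derived product graph (hypothetical interface)."""
    def __init__(self, triples):
        # triples are (product, attribute_type, attribute_value)
        self._by_product = defaultdict(list)
        for p, r, a in triples:
            self._by_product[p].append((p, r, a))
        self.products = list(self._by_product)

    def triples(self, product):
        return self._by_product[product]

def generate_queries(pg, considered_attribute_types):
    """Minimal rendering of Algorithm 1: one bag-of-attribute-values query
    per product, annotated with the corresponding attribute types."""
    queries = []
    for product in pg.products:
        values, types = [], []
        for _, attr_type, attr_value in pg.triples(product):
            if attr_type in considered_attribute_types:
                values.append(attr_value)
                types.append(attr_type)
        if not values:
            continue
        order = list(range(len(values)))
        random.shuffle(order)                        # shuffle values and types together
        q_text = " ".join(values[k] for k in order)  # query = bag of attribute values
        q_ann = [types[k] for k in order]            # attribute-type annotations
        queries.append((q_text, q_ann))
    return queries

pg = ProductGraph([("p1", "color", "red"), ("p1", "brand", "Nike"),
                   ("p1", "product_type", "shoes")])
print(generate_queries(pg, {"color", "brand", "product_type"}))
```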
tan-etal-2023-gkd
{GKD}: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model
https://aclanthology.org/2023.acl-industry.15
Currently, the reduction in the parameter scale of large-scale pre-trained language models (PLMs) through knowledge distillation has greatly facilitated their widespread deployment on various devices. However, the deployment of knowledge distillation systems faces great challenges in real-world industrial-strength applications, which require the use of complex distillation methods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the switching of methods. To overcome these challenges, we propose GKD, a general knowledge distillation framework that supports distillation on larger-scale PLMs using various distillation methods. With GKD, developers can build larger distillation models on memory-limited GPUs and easily switch and combine different distillation methods within a single framework. Experimental results show that GKD can support the distillation of at least 100B-scale PLMs and 25 mainstream methods on 8 NVIDIA A100 (40GB) GPUs.
# Gkd: A General Knowledge Distillation Framework For Large-Scale Pre-Trained Language Model Shicheng Tan∗1, Weng Lam Tam2, Yuanchun Wang3**, Wenwen Gong**4, Shu Zhao†1, Peng Zhang2, **Jie Tang**†4 1Anhui University, 2Zhipu.AI, 3Renmin University of China, 4Tsinghua University tsctan@foxmail.com,{rainatam9784,frederickwang99}@gmail.com wenweng@mail.tsinghua.edu.cn,zhaoshuzs2002@hotmail.com peng.zhang@zhipuai.cn,jietang@tsinghua.edu.cn ## Abstract Currently, the reduction in the parameter scale of large-scale pre-trained language models (PLMs) through knowledge distillation has greatly facilitated their widespread deployment on various devices. However, the deployment of knowledge distillation systems faces great challenges in real-world industrial-strength applications, which require the use of complex distillation methods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the switching of methods. To overcome these challenges, we propose GKD, a general knowledge distillation framework that supports distillation on larger-scale PLMs using various distillation methods. With GKD, developers can build larger distillation models on memorylimited GPUs and easily switch and combine different distillation methods within a single framework. Experimental results show that GKD can support the distillation of at least 100B-scale PLMs and 25 mainstream methods on 8 NVIDIA A100 (40GB) GPUs. 1 ## 1 Introduction Pre-trained language models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and their variants, have achieved excellent success in natural language processing (NLP) tasks when they usually have hundreds of millions of parameters. Considering computationally expensive resource constraints, a wide range of real-world applications are often impeded. Knowledge distillation (Hinton et al., 2015), as a method for compressing large-scale pre-trained language models, is attracting more and more attention. As large-scale PLMs continue to grow in scale, and with advancements in knowledge distillation methods, it becomes increasingly pressing to apply knowledge distillation research in controlled laboratory settings to the real world. The field of knowledge distillation for language models has witnessed a phenomenal progress in recent years, particularly with regards to the reduction of model size, leading to the development of a plethora of sophisticated distillation techniques (Liu et al., 2022; Wu et al., 2022) and a comprehensive toolkit (Yang et al., 2020b). However, despite these rich research outcomes, there are still major challenges in deploying knowledge distillation systems for real-world industrial-strength applications, including: - **Obstacles to Distilling Ultra-large-scale** PLMs. Contrary to distillation in controlled laboratory settings aimed at models with billions of parameters, many industrial-strength applications (Yu et al., 2022) rely on ultralarge-scale PLMs (on the order of 10B or even larger). The training of ultra-large-scale PLMs is already challenging, and the distillation process requires simultaneous training of both large and small models, leading directly to difficulties in distillation of ultra-large-scale PLMs. Furthermore, there are also methods (Wu et al., 2021a; Yuan et al., 2021) for distilling multiple large models into a single small model, which pose significant challenges in memory-constrained GPU environments. 
- **Obstacles to Switching Distillation Methods.** Deploying a knowledge distillation system requires the implementation of numerous distillation methods to meet different requirements, but due to the differences in implementation of these methods, it is difficult to switch and combine them easily within a framework. It is important to have an architecture that accommodates a range of distillation methods while ensuring efficient training, such as avoiding excessive extraction of intermediate features that lead to memory waste. Thus, a compatible and efficient architecture is crucial for successful deployment of knowledge distillation systems. To overcome these challenges, we present a general knowledge distillation framework (GKD) for deploying knowledge distillation systems that support various scale PLMs and methods. To overcome the obstacles to distilling ultra-large-scale PLMs, GKD leverages the techniques of training large transformer models to the distillation process that requires training multiple large (teacher) and small (student) models simultaneously, incorporating the latest model and data parallel strategies. To overcome the obstacles to switching distillation methods, GKD employs a dynamic hook mechanism and auxiliary model to extract and operate intermediate layer features and inference process of models in each iteration. While being compatible with various methods, it avoids the waste of memory caused by extracting all intermediate layers. GKD presents the first exploration of knowledge distillation for language models in industrial scenarios. Specifically, our main contribution lies in: - **Larger-scale Model Distillation.** We propose a teacher-student parallel strategy based on advanced memory optimization methods, addressing the challenge of distilling ultralarge-scale PLMs (over 10B) due to memory constraints. The proposed strategy supports distillation of at least 100B-scale PLMs on 8 NVIDIA A100 (40GB) GPUs. - **More Compatible Method Architecture.** We propose an efficient adaptive architecture compatible with various methods, addressing the challenge of switching and using different distillation methods within a single framework with difficulty. The proposed architecture supports at least 25 model distillation methods. - **Easy-to-use Open Source Toolkit.** We have open-sourced the required toolkit for GKD, which provides a command-line interface for 25 distillation methods, facilitating developers to deploy knowledge distillation systems for ultra-large-scale PLMs. ## 2 Related Work In recent years, knowledge distillation for compressing PLMs has gained increased attention. These works studied ways of better utilizing language model features for transferring knowledge from large teacher models to a smaller student model, involving hidden layers (Jiao et al., 2020), attention layers (Wang et al., 2021), soft labels (Jafari et al., 2021), and hard labels (Jafari et al., 2022). These works validated their methods with PLMs of hundreds of millions of parameters, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), etc. However, deployment of the distillation system on GPUs with limited memory has been hindered by the reliance on ultra-large-scale PLMs (10B or even larger). An offline distillation method (Liu et al., 2021) that saved teacher features before training the student individually reduced memory pressure, but was limited to methods with smaller feature scales and without teacher-student interaction. 
In this work, GKD was compatible with ultra-large-scale PLMs distillation via the introduction of Megatron-LM (Shoeybi et al., 2019) based on model parallelism and Zero Redundancy Optimizer (ZeRO) (Rajbhandari et al., 2020) based on data parallelism. While some code for knowledge distillation methods focused on language models was made public (Sanh et al., 2019; Jiao et al., 2020; Sun et al., 2020), there was a lack of a general framework for deploying knowledge distillation systems. TextBrewer (Yang et al., 2020b) packaged some abstract and simple distillation processes and loss functions, but lacked implementation of many methods and was difficult to adapt to increasingly complex distillation methods. There were significant differences in the implementation of these methods, such as DIITO (Wu et al., 2022) requiring dynamic intervention of the intermediate layer computation in the model; SID (Aguilar et al., 2020) changing the intermediate layer features during training; Continuation-KD (Jafari et al., 2022) altering the loss calculation method as the epoch increased, and so on. These differences in implementation made it difficult for them to be easily switched and combined within a single framework, hindering the application of various advanced methods in knowledge distillation systems. In this work, GKD accommodated various advanced knowledge distillation methods through a dynamic hook mechanism and auxiliary models. ![2_image_0.png](2_image_0.png) ## 3 Gkd In this section, we first introduce the overview framework of the proposed GKD, then delve into the details of how GKD implements larger-scale model distillation and a more compatible method architecture, from the perspective of model building and training. ## 3.1 Overview Framework Figure 1 shows the overview framework of GKD, which consists of six main processes: (1) *User Requirements:* This process begins with the user specifying their requirements and forming a configuration file, which includes the choice of training task, distillation method, teacher model, student model, etc. (2) *Model Building:* This process addresses the obstacles to distilling ultra-large-scale PLMs by implementing a teacher-student parallel strategy that combines Megatron-LM (Shoeybi et al., 2019) and ZeRO (Rajbhandari et al., 2020). The process involves selecting and executing parameter initialization strategies for the student model, such as initializing the student model with a pre-trained student, a truncated parameter teacher, random initialization methods, or other distilled students. It also includes initializing the training data with a tokenizer. (3) *Model Training:* This process addresses the obstacles to switching distillation methods by implementing an efficient adaptive architecture that is compatible with various methods. This process includes the initialization of methods to extract and compute different model features based on the requirements of different methods at different iteration numbers. (4) *Multiple Training:* This process is utilized for methods that require multiple training, such as task-specific methods (Jiao et al., 2020) that necessitate distillation in the task-specific stage after distillation in the pre-training stage. (5) *Analysis:* This process confirms the compliance of the distilled student model with deployment requirements through analysis, such as examining the performance on the test set and other phenomena that can be utilized to enhance the model. 
(6) *Deployment:* This process deploys the student model on the corresponding device, such as low-computing mobile devices or services with higher load deployment under equal computing power. These six processes are performed in sequence to form the workflow of the knowledge distillation system. The greatest contribution of GKD lies in the design of the model building and training, as the other processes do not pose a challenge to the deployment of the knowledge distillation system. In the following sections, we will provide a detailed description of how GKD enables largerscale model distillation in the model building and more compatible method architectures in the model training. ## 3.2 Model Building The challenge in building models lies in allocating ultra-large-scale PLMs, consisting of a student and one or more teacher models, on a GPU with only several tens of GB of memory. To address this challenge, we propose a teacher-student parallel strategy that splits the model parameters to different GPUs while preserving the feature distance computation between the teacher and student models. This strategy is inspired by the optimization of single ultra-large-scale PLMs, including MegatronLM (Shoeybi et al., 2019) which splits each parameter matrix in the transformer across multiple GPUs, and ZeRO (Rajbhandari et al., 2020) which partitions each layer of transformers sequentially across multiple GPUs. As shown in Figure 2, we demonstrate the comparison between the previous strategy and our proposed teacher-student parallel strategy using an example. The example includes the allocation of ![3_image_0.png](3_image_0.png) two 6-layer transformer teacher models and one 4layer transformer student model on the GPU. The current methods allocate all the model parameters on each GPU, severely limiting the training of ultralarge-scale PLMs and multiple models. To reduce the memory usage on each GPU without compromising the interaction between the teacher and the student, our teacher-student parallel strategy evenly distributes the parameters of the teacher and student on different GPUs, with each GPU corresponding to the matching parameters of the teacher and student. With the model parallel and data parallel count being 2, the memory usage can be reduced by at least half. If utilizing ZeRO-Offload (Ren et al., 2021), the optimizer states can further be stored in CPU memory to reduce the utilization of GPU memory. ## 3.3 Model Training The challenge in training models lies in how to easily switch and use different distillation methods within a single framework. To address this challenge, we propose an efficient adaptive architecture that is compatible with various methods. It implements the operation of different methods and the calculation of features through a dynamic hook extracting model features and operation hooks for modifying the model inference process during each iteration. These hooks are described by a configuration file similar to JSON, which only requires recording the operations required by the method and playing a role during the model inference process. The auxiliary model calculates the loss function based on these hooks and the returned model features. Table 1 describes the features that this architecture can adapt to existing methods. It is worth noting that GKD can achieve method combination by integrating hooks from different methods. 
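To make the hook mechanism more concrete, the sketch below shows one way extraction hooks and an auxiliary distillation loss could be wired together in PyTorch. The layer names, the teacher-to-student layer mapping (playing the role of the JSON-like configuration described above), the temperature, and the particular mix of losses are illustrative assumptions, not the exact GKD implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor:
    """Forward hooks that cache intermediate features, but only for the layers
    a given distillation method asks for (avoiding caching every layer)."""
    def __init__(self, model, layer_names):
        self.features, self._handles = {}, []
        modules = dict(model.named_modules())
        for name in layer_names:
            self._handles.append(
                modules[name].register_forward_hook(self._make_hook(name)))

    def _make_hook(self, name):
        def hook(module, inputs, output):
            self.features[name] = output
        return hook

    def remove(self):
        for handle in self._handles:
            handle.remove()

def distill_step(teacher, student, t_hooks, s_hooks, layer_map, batch, T=2.0):
    """One illustrative iteration: KL on softened logits plus MSE on the
    intermediate features selected by the method's configuration (layer_map)."""
    with torch.no_grad():
        t_logits = teacher(batch)
    s_logits = student(batch)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    for t_layer, s_layer in layer_map.items():
        # In practice a learned projection would map student features to the
        # teacher's width when the dimensions differ (as in TinyBERT).
        loss = loss + F.mse_loss(s_hooks.features[s_layer],
                                 t_hooks.features[t_layer])
    return loss

# Toy usage with stand-in teacher/student networks of matching feature width.
teacher = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
student = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
t_hooks, s_hooks = FeatureExtractor(teacher, ["0"]), FeatureExtractor(student, ["0"])
loss = distill_step(teacher, student, t_hooks, s_hooks, {"0": "0"}, torch.randn(2, 8))
```

Only the layers named in the mapping are cached, mirroring the point above that extracting every intermediate feature for every method would waste memory.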
GKD can also record all model features through extraction hooks and save the distance of teacher and student features in the auxiliary model for later analysis of the correlation between feature distance and task performance in the distillation process. ## 4 Experiments In this section, we verified that GKD, which is used for distillation of language models, can support at least 100B-scale parameters and 25 mainstream methods on 8 NVIDIA A100 (40GB) GPUs. ## 4.1 Experimental Setup Datasets All methods that require distillation in the pre-training stage use BooksCorpus (Zhu et al., 2015) and English Wikipedia as training data (19GB). For the task-specific stage (fine-tuning), we evaluate different distillation methods using the more challenging SuperGLUE benchmark (Wang | Compatible features | Representative methods | |-----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Modify the inference process of the | DIITO (Wu et al., 2022), LRC-BERT (Fu et al., 2021), Theseus (Xu et al., 2020) | | model Dynamically modify the feature extraction or inference process | SID (Aguilar et al., 2020), Theseus (Xu et al., 2020) | | Additional trainable parameters | TinyBERT (Jiao et al., 2020), RAIL-KD (Haidar et al., 2022), Universal-KD (Wu et al., 2021b), LRC-BERT (Fu et al., 2021) | | Dynamically change loss function | Annealing-KD (Jafari et al., 2021), Continuation-KD (Jafari et al., 2022), MobileBERT (Sun et al., 2020) | | Complex intermediate layer calculation | CKD (Park et al., 2021), MGSKD (Liu et al., 2022), ALP-KD (Passban et al., 2021) | | Train student by multiple teachers | TMKD (Yang et al., 2020a), MT-BERT (Wu et al., 2021a), RL-KD (Yuan et al., 2021), Uncertainty (Li et al., 2021) | | Multiple training reduces teacher until | TAKD (Mirzadeh et al., 2020), DGKD (Son et al., 2021) | | student scale Other simple methods | KD (Hinton et al., 2015), PD (Turc et al., 2019), PKD (Sun et al., 2019), DistilBERT (Sanh et al., 2019), MiniLM (Wang et al., 2020), MiniLMv2 (Wang et al., 2021) | Table 1: The compatible features and representative methods of our proposed adaptive architecture. ## Et Al., 2019). Methods We tested 22 distillation methods specifically designed for language models, as well as three classic methods (KD, TAKD, and DGKD) from computer vision, which are listed in Tables 1 and 2. The implementation of the teacherstudent parallel strategy was carried out using the Megatron-LM (Shoeybi et al., 2019) and DeepSpeed (Rasley et al., 2020) framework. Models The commonly used BERT (Devlin et al., 2019) lacks open-source ultra-large-scale PLMs, so we employed a more advanced GLM (Du et al., 2022), which boasts open-source models of 10Bscale or even 130B-scale (Zeng et al., 2023), significantly reducing the deployment cost of the knowledge distillation system. The scale of teachers and students are presented in Tables 2 and 3. Refer to Appendix C for more implementation details. ## 4.2 Results More Compatible Method Architecture To verify the proposed adaptive architecture can effectively be compatible with various methods, we tested 25 mainstream distillation methods and present the results in Table 2. The results demonstrate that these methods can be easily switched and utilized in GKD. It is worth noting that TinyBERT (without data augmentation) outperformed all the latest methods in our setup. 
This suggests that the latest methods may not necessarily be the most effective, and different requirements may necessitate different methods. Additionally, the reliability of GKD is further validated from the perspective of loss function values in Appendix B.1. Larger-scale Model Distillation To verify the proposed teacher-student parallel strategy can support distillation of 100B-scale model on 8 NVIDIA A100 (40GB) GPUs, we present the memory and time consumption of different strategies for distilling models of varying scale in Table 3. The results indicate that previous strategies encountered GPU memory overflow when distilling 6B-scale models, whereas our strategy is capable of supporting the distillation of 100B-scale models. The results in rows 9, 10, and 11 respectively demonstrate that GPU memory consumption can be reduced through splitting the model parameters, optimizer states, or storing the optimizer states in CPU memory. If not limited to 8 GPUs, our strategy has the potential to distill even larger models. Appendix B.2 further examines the trade-off between memory and time consumption. ## 4.3 Further Exploration In addition to compatibility with various methods, GKD also allows for effortless combination of different methods. In Appendix A.1, we have discovered a method that achieves SOTA results by combining the advantages of different distillation methods. Appendix A.2 presents a tool that analyzes the correlation between feature distance and task performance through GKD, enhancing the interpretability of the distillation process. ## 5 Conclusions In this paper, we propose a general knowledge distillation framework, GKD, for deploying knowledge distillation systems targeting large-scale PLMs. GKD satisfies the demands of real-world applications by employing a parallel strategy and Methods ReCoRD COPA WSC RTE BoolQ WiC CB MultiRCavg F1/Acc. Acc. Acc. Acc. Acc. Acc. F1/Acc. 
F1a/EM GLMBase (teacher, 110M) 72.80/72.17 66.00 77.88 72.92 79.39 66.14 88.19/91.07 72.32/26.34 71.72 GLMLarge (teacher, 340M) 80.08/79.54 78.00 81.73 79.78 82.63 70.06 86.33/89.29 76.39/37.67 77.11 Single-teacher: Teacher (GLMBase) ⇒ Student (66M) KD (Hinton et al., 2015) 22.66/21.99 61.67 63.46 54.63 66.07 57.05 61.75/72.02 51.98/2.41 52.41 PD (Turc et al., 2019) 54.36/53.59 65.67 66.67 59.45 69.82 59.20 80.13/81.55 65.97/15.29 62.03 PKD (Sun et al., 2019) 61.77/60.99 60.00 65.38 68.83 77.73 65.78 82.76/85.12 69.99/22.67 66.17 DistilBERT (Sanh et al., 2019) 59.79/59.05 65.00 68.59 60.89 73.39 60.34 77.48/83.33 66.98/17.38 63.78 Theseus (Xu et al., 2020) 57.07/56.33 61.67 66.35 68.11 77.81 64.37 89.14/87.50 69.08/21.79 66.09 TinyBERT (Jiao et al., 2020) 65.60/64.88 70.33 **75.00 71.96 77.97 67.87 89.58/89.88 71.37/25.74 70.83** MobileBERT†(Sun et al., 2020) 59.29/58.61 65.33 68.59 58.97 74.61 63.85 86.65/88.69 66.87/19.41 65.14 SID (Aguilar et al., 2020) 27.17/26.19 65.00 65.06 58.12 69.33 57.16 51.02/73.81 59.26/14.55 55.08 MiniLM (Wang et al., 2020) 60.00/59.24 62.00 63.46 67.63 75.88 64.99 67.63/79.17 67.36/19.66 63.81 MiniLMv2 (Wang et al., 2021) 60.88/60.16 62.00 62.82 66.67 76.73 63.69 66.38/76.79 68.68/21.65 63.65 ALP-KD (Passban et al., 2021) 57.72/56.90 60.67 64.74 68.11 77.20 64.79 74.82/79.76 68.21/19.90 64.27 LRC-BERT (Fu et al., 2021) 55.10/54.44 65.67 66.67 56.56 74.86 57.63 80.27/81.55 65.75/16.16 62.25 Annealing-KD (Jafari et al., 2021) 56.08/55.39 69.33 66.67 58.97 70.57 59.82 85.78/85.12 66.26/13.92 63.33 CKD (Park et al., 2021) 56.35/55.65 65.00 66.67 61.25 71.63 58.83 88.61/84.52 66.11/15.22 63.33 Universal-KD (Wu et al., 2021b) 58.67/57.83 58.67 66.67 70.16 77.56 65.52 87.52/85.71 69.96/22.63 66.22 DIITO (Wu et al., 2022) 63.71/63.00 **72.00** 69.23 65.46 75.46 60.76 86.75/85.12 66.28/17.63 66.77 Continuation-KD (Jafari et al., 2022) 55.61/54.91 68.67 64.74 58.72 71.42 58.25 85.61/83.93 66.64/13.33 62.73 RAIL-KD (Haidar et al., 2022) 59.85/59.19 66.67 70.19 60.53 69.00 60.34 78.98/83.33 66.55/15.60 63.56 MGSKD (Liu et al., 2022) 50.29/49.49 65.00 65.06 65.94 73.31 63.17 83.89/84.52 67.32/15.56 63.50 Multi-teacher: Teachers (GLMBase and GLMLarge) ⇒ Student (66M) TMKD (Yang et al., 2020a) **65.77/65.09** 70.33 63.14 66.91 75.37 63.38 70.22/79.17 68.76/22.77 65.63 MT-BERT (Wu et al., 2021a) 46.81/46.08 59.00 63.46 65.46 66.90 62.33 78.76/80.36 57.53/2.06 59.12 RL-KD (Yuan et al., 2021) 59.78/58.99 58.33 66.03 69.07 77.93 65.78 76.87/82.74 69.24/22.21 65.26 Uncertainty (Li et al., 2021) 58.52/57.67 59.33 64.10 70.16 77.55 65.78 80.85/83.33 69.47/22.49 65.39 Teacher assistants: Teacher (GLMLarge) ⇒ Assistant (200M) ⇒ Assistant (110M) ⇒ Student (66M) TAKD (Mirzadeh et al., 2020) 25.50/24.69 60.33 66.03 55.11 66.39 57.94 76.28/76.79 55.90/1.50 54.52 DGKD (Son et al., 2021) 23.68/22.96 61.00 66.99 55.96 65.71 58.73 75.45/75.60 48.06/1.50 54.00 Table 2: Results of 25 mainstream distillation methods implemented using GKD on the SuperGLUE validation set. Due to the alteration of the model structure by MobileBERT†, the parameters of the teacher and student models are 293M and 25M, respectively. ⇒ denotes distillation process. The results for all methods were averaged over three random seeds. Table 3: The consumption of memory and time during the pre-training stage of TinyBERT when distilling teacher models of different scales on 8 NVIDIA A100 (40GB) GPUs is presented. The micro batch and gradient accumulation steps are set to 1. 
Where MA denotes the maximum memory allocated on the GPU, CA denotes the maximum cached memory on the GPU, Time denotes the time required to train each sample, Mem denotes the size of occupied CPU memory, MP denotes the number of model parallelism, DP denotes the number of data parallelism, ZeRO denotes whether the optimizer states are partitioned across different GPUs, and Offload denotes whether the optimizer states are stored in CPU memory. | Strategy | Teacher⇒Student (scale) | MA (GB) | CA (GB) | Time (ms) | Mem (GB) | MP | DP | ZeRO | Offload | |------------|---------------------------|-----------|-----------|-------------|------------|------|------|--------|-----------| | 110M⇒22M | 0.99 | 1.27 | 10.40 | 56.96 | 1 | 8 | | | | | 110M⇒66M | 1.73 | 2.02 | 10.82 | 57.60 | 1 | 8 | | | | | Previous | 340M⇒66M | 3.11 | 3.58 | 16.41 | 63.46 | 1 | 8 | | | | 5B⇒1B | 32.44 | 36.57 | 53.34 | 61.58 | 1 | 8 | | | | | 6B⇒1.2B | GPU memory overflow | 1 | 8 | | | | | | | | 6B⇒1.2B | 18.91 | 21.40 | 85.61 | 57.28 | 2 | 4 | | | | | 7.5B⇒1.5B | 24.22 | 27.36 | 87.08 | 60.44 | 2 | 4 | | | | | 10B⇒2B | 30.91 | 34.54 | 105.40 | 62.33 | 2 | 4 | | | | | 10B⇒2B | 18.45 | 22.56 | 119.72 | 68.68 | 2 | 4 | ✓ | | | | 10B⇒2B | 15.83 | 22.55 | 387.19 | 106.35 | 2 | 4 | ✓ | ✓ | | | 25B⇒5B | 20.41 | 23.51 | 379.38 | 63.07 | 8 | 1 | | | | | 50B⇒10B | 17.93 | 20.94 | 4570.54 | 230.27 | 8 | 1 | ✓ | ✓ | | | 65B⇒13B | 22.48 | 26.10 | 6412.11 | 293.11 | 8 | 1 | ✓ | ✓ | | | 90B⇒18B | 30.56 | 35.27 | 7193.26 | 373.81 | 8 | 1 | ✓ | ✓ | | | 100B⇒20B | 33.62 | 36.88 | 9081.97 | 410.83 | 8 | 1 | ✓ | ✓ | | | 110B⇒22B | GPU memory overflow | 8 | 1 | ✓ | ✓ | | | | | | Ours | | | | | | | | | | adaptive architecture, allowing for the distillation of ultra-large scale PLMs (over 10B) and the switch of various advanced distillation methods. In the future, we plan to launch our knowledge distillation system for facilitating the mass production and deployment of student models. ## Acknowledgements This work is supported by Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grant 2020AAA0108400 and 2020AAA0108402, the Natural Science Foundation of China under Grant No. 61836013, the Major Program of the National Social Science Foundation of China under Grant No. 18ZDA032, and funds from CCFZhipu.AI and Beijing Academy of Artificial Intelligence (BAAI). The GPUs used are sponsored by Zhipu.AI. ## References Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. 2020. Knowledge distillation from internal representations. In *AAAI*, pages 7350–7357. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL*, pages 4171–4186. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In ACL, pages 320–335. Hao Fu, Shaojun Zhou, Qihong Yang, Junjie Tang, Guiquan Liu, Kaikui Liu, and Xiaolong Li. 2021. Lrcbert: Latent-representation contrastive knowledge distillation for natural language understanding. In AAAI, pages 12830–12838. Md. Akmal Haidar, Nithin Anchuri, Mehdi Rezagholizadeh, Abbas Ghaddar, Philippe Langlais, and Pascal Poupart. 2022. Rail-kd: Random intermediate layer mapping for knowledge distillation. In *NAACL*, pages 1389–1400. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. 
CoRR, abs/1503.02531. Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, and Ali Ghodsi. 2022. Continuation kd: Improved knowledge distillation through the lens of continuation optimization. In *EMNLP*, page 5260–5269. Aref Jafari, Mehdi Rezagholizadeh, Pranav Sharma, and Ali Ghodsi. 2021. Annealing knowledge distillation. In *EACL*, pages 2493–2504. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. Tinybert: Distilling bert for natural language understanding. In *EMNLP*, pages 4163–4174. Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun. 2021. Dynamic knowledge distillation for pre-trained language models. In *EMNLP*, pages 379–389. Chang Liu, Chongyang Tao, Jiazhan Feng, and Dongyan Zhao. 2022. Multi-granularity structural knowledge distillation for language model compression. In ACL, pages 1001–1011. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Yuanxin Liu, Fandong Meng, Zheng Lin, Weiping Wang, and Jie Zhou. 2021. Marginal utility diminishes: Exploring the minimum knowledge for bert knowledge distillation. In ACL, pages 2928–2941. Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. 2020. Improved knowledge distillation via teacher assistant. In *AAAI*, pages 5191–5198. Geondo Park, Gyeongman Kim, and Eunho Yang. 2021. Distilling linguistic context for language model compression. In *EMNLP*, pages 364–378. Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, and Qun Liu. 2021. Alp-kd: Attention-based layer projection for knowledge distillation. In *AAAI*, pages 13657–13665. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. Zero: memory optimizations toward training trillion parameter models. In SC, page 20. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In KDD, pages 3505– 3506. Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, and Yuxiong He. 2021. Zerooffload: Democratizing billion-scale model training. In ATC, pages 551–564. USENIX Association. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053. Wonchul Son, Jaemin Na, Junyong Choi, and Wonjun Hwang. 2021. Densely guided knowledge distillation using multiple teacher assistants. In *ICCV*, pages 9375–9384. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. In *EMNLP*, pages 4322–4331. Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. Mobilebert: a compact task-agnostic bert for resource-limited devices. In ACL, pages 2158–2170. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: The impact of student initialization on knowledge distillation. *CoRR*, abs/1908.08962. 
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In *NeurIPS*, pages 3261–3275. Wenhui Wang, Hangbo Bao, Shaohan Huang, Li Dong, and Furu Wei. 2021. Minilmv2: Multi-head selfattention relation distillation for compressing pretrained transformers. In ACL, pages 2140–2151. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. In *NeurIPS*. Chuhan Wu, Fangzhao Wu, and Yongfeng Huang. 2021a. One teacher is enough? pre-trained language model distillation from multiple teachers. In ACL, pages 4408–4413. Yimeng Wu, Mehdi Rezagholizadeh, Abbas Ghaddar, Md. Akmal Haidar, and Ali Ghodsi. 2021b. Universal-kd: Attention-based output-grounded intermediate layer knowledge distillation. In *EMNLP*, pages 7649–7661. Zhengxuan Wu, Atticus Geiger, Joshua Rozner, Elisa Kreiss, Hanson Lu, Thomas Icard, Christopher Potts, and Noah D. Goodman. 2022. Causal distillation for language models. In *NAACL*, pages 4288–4295. Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. Bert-of-theseus: Compressing bert by progressive module replacing. In *EMNLP*, pages 7859–7869. Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, and Daxin Jiang. 2020a. Model compression with twostage multi-teacher knowledge distillation for web question answering system. In *WSDM*, pages 690– 698. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In *NeurIPS*, pages 5754–5764. Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, and Guoping Hu. 2020b. Textbrewer: An open-source knowledge distillation toolkit for natural language processing. In ACL, pages 9–16. Jifan Yu, Xiaohan Zhang, Yifan Xu, Xuanyu Lei, Xinyu Guan, Jing Zhang, Lei Hou, Juanzi Li, and Jie Tang. 2022. Xdai: A tuning-free framework for exploiting pre-trained language models in knowledge grounded dialogue generation. In KDD, pages 4422–4432. Fei Yuan, Linjun Shou, Jian Pei, Wutao Lin, Ming Gong, Yan Fu, and Daxin Jiang. 2021. Reinforced multiteacher selection for knowledge distillation. In *AAAI*, pages 14284–14291. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023. GLM130B: an open bilingual pre-trained model. In *ICLR*. Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *ICCV*, pages 19–27. IEEE Computer Society. ## A Further Exploration In this section, we further explore the capabilities of GKD in combining different distillation methods and enhancing the interpretability of the distillation process. ## A.1 Method Combination Thanks to the dynamic hook mechanism, GKD is capable of combining methods by integrating hooks from different methods. As shown in Table 4, we demonstrate results from several dozen combinations of different model features. 
To conserve computational power, we set the batch size to 32 during pre-training and set the sizes of the teacher and student models to 110M and 22M, respectively. In the task-specific stage, the batch size and learning rate were fixed at 16 and 1e-5, respectively, without the use of grid search and seed averaging. Based on the results in Table 4, the following conclusions can be drawn. (1) We discovered the method BestC which achieves the SOTA, outperforming TinyBERT by 1.24% on average in SuperGLUE. BestC combines the features of TinyBERT, MiniLMv2, and soft labels. (2) The method that performs distillation in the pre-training stage (row 5) outperforms those using randomly initialized parameters (row 3) or truncated fine-tuned teacher parameters (row 4) in the pre-training stage. (3) The methods using soft labels in the pre-training stage (rows 14 and 17) outperform those not using soft labels (rows 12 and 16). (4) Starting from row 21, we compare | Methods | Pre-training stage | Task-specific stage | SG | | | | | | | | | | | | | |----------------------------------------|----------------------------------------|-----------------------|---------|---------|---------|-------|---------|-------|---------|-------|-------|-------|-------|-------|-------| | Emb | Att | Q/K | V | HS | Soft | Hard | Emb | Att | Q/K | V | HS | Soft | Hard | | | | KD | Random initialization parameters | CE | CE | 49.48 | | | | | | | | | | | | | Truncate fine-tuned teacher parameters | CE | CE | 51.62 | | | | | | | | | | | | | | CE | CE | CE | CE | 59.68 | | | | | | | | | | | | | CE | CE | CE | 60.24 | | | | | | | | | | | | | | KL | CE | CE | 60.62 | | | | | | | | | | | | | | KL | CE | 63.16 | | | | | | | | | | | | | | | MSE | CE | 63.46 | | | | | | | | | | | | | | | RAIL-KD | Truncate fine-tuned teacher parameters | MSE−f | CE | CE | 51.63 | | | | | | | | | | | | MiniLM | KLf | KLf | CE | 60.65 | | | | | | | | | | | | | MiniLMv2 | KLf | KLf | CE | 60.47 | | | | | | | | | | | | | KLf | KLf | KL | KLf | KLf | CE | 65.41 | | | | | | | | | | | KLf | KLf | KL | CE | 64.64 | | | | | | | | | | | | | MGSKD | MSE | MSE | MSE | MSE/HL | MSE/HL | KL | 59.65 | | | | | | | | | | TinyBERT | MSE | MSE | MSE | MSE | MSE | MSE | CE | 65.81 | | | | | | | | | MSE | MSE | MSE | KL | MSE | MSE | MSE | CE | 66.19 | | | | | | | | | MSE | MSE | MSE | KL | CE | 62.75 | | | | | | | | | | | | MSE | MSE | MSE | CE | 63.52 | | | | | | | | | | | | | MSE | MSE | KL | MSE | MSE | CE | 66.51 | | | | | | | | | | | Mix5 | MSE | MSE+KLf | KLf | KLf | MSE+Cos | KL | CE | MSE | MSE+KLf | KLf | KLf | MSE | CE | CE | 65.63 | | MSE | MSE+KLf | KLf | KLf | MSE+Cos | KL | CE | CE | 62.18 | | | | | | | | | MSE | MSE | KLf | KLf | MSE+Cos | KL | CE | MSE | MSE | KLf | KLf | MSE | CE | CE | 66.58 | | | MSE | MSE | KLf | KLf | MSE+Cos | KL | CE | CE | 63.06 | | | | | | | | | MSE | MSE+KLf | KLf | MSE+Cos | KL | CE | MSE | MSE+KLf | KLf | MSE | CE | CE | 66.25 | | | | | MSE | MSE+KLf | KLf | MSE+Cos | KL | CE | CE | 62.86 | | | | | | | | | | MSE | MSE+KLf | KLf | KLf | MSE | CE | MSE | MSE+KLf | KLf | KLf | MSE | CE | CE | 66.54 | | | | MSE | MSE+KLf | KLf | KLf | MSE | CE | CE | 64.64 | | | | | | | | | | KLf | KLf | KLf | KL | CE | KLf | KLf | KLf | CE | CE | 64.68 | | | | | | | KLf | KLf | KLf | KL | CE | CE | 60.49 | | | | | | | | | | | BestC | MSE | KLf | KLf | MSE | KL | MSE | KLf | KLf | MSE | CE | 67.05 | | | | | | MSE | KLf | KLf | MSE | KL | CE | 62.71 | | | | | | | | | | | MSE | KLf | KLf | MSEf | KL | MSE | KLf | KLf | MSEf | CE | 65.73 | | | | | | | 
MSE | KLf | KLf | MSEf | KL | CE | 62.53 | | | | | | | | | | | MSE | MSEf | KL | MSE | MSEf | CE | 66.17 | | | | | | | | | | | MSE | MSEf | KL | CE | 62.58 | | | | | | | | | | | | | MSEf | KL | MSEf | CE | 65.79 | | | | | | | | | | | | | MSEf | KL | CE | 63.69 | | | | | | | | | | | | | | KLf | KLf | MSEf | KL | KLf | KLf | MSEf | CE | 66.01 | | | | | | | | | KLf | KLf | MSEf | KL | CE | 63.87 | | | | | | | | | | | | MSE | KLf | KLf | KL | MSE | KLf | KLf | CE | 66.53 | | | | | | | | | MSE | KLf | KLf | KL | CE | 63.26 | | | | | | | | | | | | MSE | KLf | KLf | MSEf2 | KL | MSE | KLf | KLf | MSEf2 | CE | 65.55 | | | | | | | MSE | KLf | KLf | MSEf2 | KL | CE | 62.56 | | | | | | | | | | | MSE | KLf | KLf | MSE−f | KL | MSE | KLf | KLf | MSE−f | CE | 66.22 | | | | | | | MSE | KLf | KLf | MSE−f | KL | CE | 63.26 | | | | | | | | | | | KLf | KLf | MSE | KL | KLf | KLf | MSE | CE | 66.78 | | | | | | | | | KLf | KLf | MSE | KL | CE | 63.12 | | | | | | | | | | | ![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png) the results of various combinations distilled in the task-specific stage and not distilled (only trained on hard labels). We find that distillation in the taskspecific stage greatly improves the performance of the task. ## A.2 Enhanced Interpretability Thanks to the adaptive architecture, GKD can record all model features through extraction hooks and save the distance of teacher and student fea- ![9_image_3.png](9_image_3.png) ![9_image_0.png](9_image_0.png) ![9_image_2.png](9_image_2.png) ![9_image_1.png](9_image_1.png) tures in the auxiliary model for later analysis of the correlation between feature distance and task performance in the distillation process. As an example of TinyBERT's pre-training stage distillation, we present the Spearman and Pearson correlation coefficients between the feature distance and training loss, and between the feature distance and task performance, respectively, in Figures 4 and 5. The following conclusions can be drawn. (1) The results shown in Figure 4 indicate that while TinyBERT trains its embedding layer, attention scores, and hidden state, many other features (e.g., the value matrix and features obtained after pair-wise scaled dot-product) also decrease in distance between teacher and student as the training loss decreases. This suggests that we may be able to find a way to automatically have a large number of student features approach the teacher without having to distill all features, thus reducing the cost of distillation. (2) The results shown in Figure 5 indicate that the distillation in the pre-training stage of TinyBERT actually leads to a decrease in pre-training task performance. This suggests that the performance of pre-training tasks is not necessarily positively correlated with the performance of downstream tasks. It is noteworthy that there are a small number of features (e.g., soft labels) whose distance is related to task performance. Our hypothesis is that distilling features that are related to task performance may further improve task performance, and the third conclusion in Appendix A.1 supports this hypothesis. ![10_image_2.png](10_image_2.png) ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) ## Additional Analysis B In this section, we further verify the reliability of GKD from the perspective of loss function value, and analyze the balance of memory and time consumption in the teacher-student parallel strategy. ## Are The Loss Values Of Gkd Normal? 
B.1 In order to further verify the reliability of GKD, we present the loss function values of each method at various distillation stages in Figure 6. The downward trend of all the loss values is consistent with our expectations, with two noteworthy observations: (1) MobileBERT and SID tend to gradually increase the number of distilled layers during training, hence the loss values exhibit an up-and-down trend. (2) The ReCoRD dataset, shown in taskspecific stages, was trained for 5 epochs, therefore some methods may show loss changes in stair-step fashion, such as Annealing-KD and Universal-KD. ## B.2 Trade-Off Between Memory And Time Consumption In order to speed up the training process while ensuring that the distillation process is not limited by GPU memory, we conducted a full combination of all optimization options to find the best balance between memory and time. Table 5 showcases the resource usage of 5B-scale and 10B-scale teacher models under different MP, DP, ZeRO, and Offload options during distillation. The results of the testing lead us to the following recommendations: In cases of insufficient GPU memory, ZeRO should ![11_image_0.png](11_image_0.png) Teacher⇒Student (scale) **MA (GB) CA (GB) Time (ms) Mem (GB) MP DP ZeRO Offload** 17.65 33.11 169.01 95.15 1 8 ✓† ✓† 9.07 14.97 262.49 86.56 2 4 ✓† ✓† 4.87 6.09 430.61 86.20 4 2 ✓† ✓† 2.72 3.71 884.05 82.61 8 1 ✓† ✓† 17.65 30.77 175.08 95.13 1 8 ✓ ✓ 9.06 13.74 252.99 86.74 2 4 ✓ ✓ 4.87 5.79 437.50 86.01 4 2 ✓ ✓ 2.72 3.43 831.22 83.24 8 1 ✓ ✓ 18.92 31.04 61.43 70.50 1 8 ✓† 10.44 14.24 78.97 62.18 2 4 ✓† 6.31 7.42 129.91 61.59 4 2 ✓† 4.23 5.10 260.96 **58.72** 8 1 ✓† 18.92 28.81 60.25 70.52 1 8 ✓ 10.43 13.73 80.64 62.38 2 4 ✓ 6.31 7.32 129.40 62.27 4 2 ✓ 4.23 5.04 243.80 58.76 8 1 ✓ 32.44 36.57 **53.34** 61.58 1 8 16.34 18.50 68.17 62.18 2 4 8.31 9.41 121.26 62.16 4 2 4.27 5.04 231.95 58.83 8 1 30.73 36.89 226.51 104.46 1 8 ✓† ✓† 15.84 24.19 378.93 106.31 2 4 ✓† ✓† 8.41 10.08 664.45 95.51 4 2 ✓† ✓† 4.70 6.00 1210.07 98.12 8 1 ✓† ✓† 30.73 36.89 222.50 104.45 1 8 ✓ ✓ 15.83 22.55 387.19 106.35 2 4 ✓ ✓ 8.41 9.72 693.03 95.50 4 2 ✓ ✓ 4.70 5.81 1224.11 98.09 8 1 ✓ ✓ 33.23 36.91 **85.52** 66.17 1 8 ✓† 18.46 23.13 119.30 68.61 2 4 ✓† 11.11 12.91 186.53 **57.42** 4 2 ✓† 7.53 8.87 310.56 59.88 8 1 ✓† 33.23 36.90 88.21 66.13 1 8 ✓ 18.45 22.56 119.72 68.68 2 4 ✓ 11.11 12.78 198.84 57.45 4 2 ✓ 7.53 9.01 329.12 59.83 8 1 ✓ GPU memory overflow 1 8 30.91 34.54 105.40 62.33 2 4 15.64 17.62 174.30 57.79 4 2 8.00 8.98 311.44 59.73 8 1 | 5B⇒1B 10B⇒2B | |----------------| ![13_image_0.png](13_image_0.png) be considered first for partitioning the optimizer states and model gradients, followed by increasing the number of model parallelism, and lastly, using ZeRO-Offload to store the optimizer states and model gradients in CPU memory. ## C Implementation Details In this section, we provide further details regarding the hyperparameters and models to facilitate replication by developers. ## C.1 Hyperparameters The batch size, number of iterations, and peak learning rate for the pre-training stage were set to 64, 150000, and 4e-4, respectively. The taskspecific hyperparameters for specific methods were set to the optimal values from their corresponding papers, while other hyperparameters (see Table 6) were kept consistent with the fine-tuning teacher. For single-teacher methods in the taskspecific stage, grid search was used to optimize hyperparameters, including learning rate {5e-6,1e5,2e-5} and batch size {16,32}. 
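As a concrete illustration of this search, the sketch below enumerates the small learning-rate/batch-size grid and keeps the configuration with the best validation score. It is a minimal sketch: `train_and_evaluate` is a hypothetical stand-in for one task-specific distillation run and is not part of the released code.

```python
from itertools import product

def train_and_evaluate(task: str, lr: float, batch_size: int) -> float:
    """Hypothetical stand-in: run one task-specific distillation job and
    return its validation score for the given hyperparameters."""
    raise NotImplementedError("plug in the actual distillation loop here")

def grid_search(task: str):
    """Pick the best (lr, batch_size) from the small grid described above."""
    learning_rates = [5e-6, 1e-5, 2e-5]
    batch_sizes = [16, 32]
    best_score, best_cfg = float("-inf"), None
    for lr, bs in product(learning_rates, batch_sizes):
        score = train_and_evaluate(task, lr, bs)
        if score > best_score:
            best_score, best_cfg = score, (lr, bs)
    return best_cfg, best_score
```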
Table 7 presents the learning rate and batch size for each method on each dataset in the SuperGLUE benchmark. The results for all methods were averaged over three random seeds. ## C.2 Models Table 8 shows the specific parameters of all the models utilized in this paper. The 110M, 340M, and 10B scale models are from GLM pre-trained models 2. The 293M-scale model with the MobileBERT structure (inverted-bottleneck structure) was obtained by us through a week of pre-training with 16 NVIDIA A100 (40GB) GPUs, and the 25Mscale model is also with the MobileBERT structure. 2https://github.com/THUDM/GLM When conducting pre-training tasks, the models with the MobileBERT structure require the expansion of the token dimension, thus the actual number of parameters is greater than the scale. The other sized teacher models were tested with randomly initialized parameters to assess resource consumption. All the distillation processes were conducted using half-precision floating-point (fp16) models. | Methods | ReCoRD | COPA | WSC | RTE | BoolQ | WiC | CB | MultiRC | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------|----------|----------|----------|----------|----------|----------|-----------| | bs/lr | bs/lr | bs/lr | bs/lr | bs/lr | bs/lr | bs/lr | bs/lr | | | GLMBase (teacher, 110M) | bs (batch size) = 16, lr (learning rate) = 1E-5 | | | | | | | | | GLMLarge (teacher, 340M) | Single-teacher: Teacher (GLMBase) ⇒ Student (66M) | | | | | | | | | KD (Hinton et al., 2015) | 16/5E-06 | 16/2E-05 | 16/1E-05 | 16/2E-05 | 16/2E-05 | 16/5E-06 | 16/2E-05 | 16/5E-06 | | PD (Turc et al., 2019) | 16/1E-05 | 32/5E-06 | 16/2E-05 | 16/1E-05 | 32/1E-05 | 16/5E-06 | 16/2E-05 | 16/5E-06 | | PKD (Sun et al., 2019) | 32/2E-05 | 32/2E-05 | 16/2E-05 | 32/5E-06 | 16/1E-05 | 16/5E-06 | 16/2E-05 | 32/2E-05 | | DistilBERT (Sanh et al., 2019) | 16/1E-05 | 16/2E-05 | 16/1E-05 | 16/5E-06 | 32/2E-05 | 32/2E-05 | 32/2E-05 | 16/1E-05 | | Theseus (Xu et al., 2020) | 32/2E-05 | 16/1E-05 | 16/1E-05 | 32/1E-05 | 16/1E-05 | 32/1E-05 | 16/2E-05 | 32/5E-06 | | TinyBERT (Jiao et al., 2020) | 32/1E-05 | 16/5E-06 | 32/5E-06 | 16/2E-05 | 16/1E-05 | 16/5E-06 | 16/1E-05 | 16/1E-05 | | MobileBERT (Sun et al., 2020) | 16/1E-05 | 16/1E-05 | 32/2E-05 | 32/2E-05 | 32/2E-05 | 32/1E-05 | 32/2E-05 | 16/5E-06 | | SID (Aguilar et al., 2020) | 16/2E-05 | 32/5E-06 | 16/5E-06 | 16/2E-05 | 16/2E-05 | 16/2E-05 | 16/1E-05 | 16/2E-05 | | MiniLM (Wang et al., 2020) | 16/2E-05 | 32/1E-05 | 32/2E-05 | 32/1E-05 | 16/1E-05 | 16/1E-05 | 32/1E-05 | 32/2E-05 | | MiniLMv2 (Wang et al., 2021) | 16/1E-05 | 16/1E-05 | 16/5E-06 | 32/2E-05 | 16/2E-05 | 32/2E-05 | 16/1E-05 | 16/1E-05 | | ALP-KD (Passban et al., 2021) | 16/2E-05 | 16/1E-05 | 16/2E-05 | 16/2E-05 | 16/2E-05 | 32/2E-05 | 16/2E-05 | 32/2E-05 | | LRC-BERT (Fu et al., 2021) | 16/2E-05 | 32/1E-05 | 16/2E-05 | 32/1E-05 | 16/2E-05 | 16/5E-06 | 16/2E-05 | 16/5E-06 | | Annealing-KD (Jafari et al., 2021) | 16/2E-05 | 16/5E-06 | 16/2E-05 | 16/2E-05 | 16/2E-05 | 32/5E-06 | 16/1E-05 | 32/5E-06 | | CKD (Park et al., 2021) | 32/2E-05 | 16/2E-05 | 16/5E-06 | 16/1E-05 | 16/2E-05 | 16/1E-05 | 16/1E-05 | 32/2E-05 | | Universal-KD (Wu et al., 2021b) | 32/2E-05 | 32/5E-06 | 32/5E-06 | 32/1E-05 | 32/5E-06 | 16/5E-06 | 16/1E-05 | 16/1E-05 | | DIITO (Wu et al., 2022) | 16/5E-06 | 32/1E-05 | 16/2E-05 | 16/1E-05 | 16/2E-05 | 
16/1E-05 | 16/1E-05 | 16/5E-06 | | Continuation-KD (Jafari et al., 2022) | 16/2E-05 | 32/1E-05 | 16/1E-05 | 16/1E-05 | 16/2E-05 | 32/1E-05 | 16/1E-05 | 16/5E-06 | | RAIL-KD (Haidar et al., 2022) | 16/1E-05 | 16/1E-05 | 16/2E-05 | 16/5E-06 | 32/2E-05 | 16/1E-05 | 32/1E-05 | 32/2E-05 | | MGSKD (Liu et al., 2022) | 16/5E-06 | 16/2E-05 | 32/2E-05 | 16/5E-06 | 16/5E-06 | 16/1E-05 | 32/2E-05 | 32/5E-06 | | Multi-teacher: Teachers (GLMBase and GLMLarge) ⇒ Student (66M) | | | | | | | | | | TMKD (Yang et al., 2020a) MT-BERT (Wu et al., 2021a) | same as GLMBase | | | | | | | | | RL-KD (Yuan et al., 2021) Uncertainty (Li et al., 2021) Teacher assistants: Teacher (GLMLarge) ⇒ Assistant (200M) ⇒ Assistant (110M) ⇒ Student (66M) TAKD (Mirzadeh et al., 2020) same as KD DGKD (Son et al., 2021) | | | | | | | | | Table 7: Hyperparameters for all methods in Table 2 on the 8 datasets of the SuperGLUE benchmark. | Scale | #Parameters | #Dimensions | #Layers | #Heads | Max-seq | Vocabulary | |---------|---------------|---------------|-----------|----------|-----------|--------------| | 22M | 22788864 | 384 | 6 | 12 | | | | 25M | 37371392 | 128 | 24 | 4 | | | | 66M | 66811392 | 768 | 6 | 12 | 512 | 30592 | | 110M | 109338624 | 768 | 12 | 12 | | | | 293M | 306174464 | 1024 | 24 | 4 | | | | 340M | 334688256 | 1024 | 24 | 16 | | | | 1B | 1022682240 | 1728 | 26 | | | | | 1.2B | 1173458944 | 1792 | 28 | | | | | 1.5B | 1521700224 | 1984 | 30 | | | | | 2B | 1920122880 | 2048 | 36 | | | | | 5B | 5030587776 | 3264 | 38 | | | | | 6B | 5915828736 | 3456 | 40 | | | | | 7.5B | 7385878656 | 3776 | 42 | | | | | 10B | 9880682496 | 4096 | 48 | | | | | 13B | 13170418176 | 4736 | 48 | | | | | 18B | 18125342976 | 5248 | 54 | | | | | 20B | 20175676160 | 5440 | 56 | | | | | 22B | 22104152064 | 5504 | 60 | | | | | 25B | 24660072448 | 5632 | 64 | | | | | 50B | 49577504000 | 8000 | 64 | | | | | 65B | 64813768448 | 9152 | 64 | | | | | 90B | 89957891328 | 10624 | 66 | | | | | 100B | 99465734144 | 11008 | 68 | | | | | 110B | 109620044032 | 11392 | 70 | 64 | 1024 | 50304 | Table 8: The scale details of all the models utilized in this paper.
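As a rough cross-check of the scales in Table 8, the parameter counts can be approximated from width, depth, and vocabulary size. The sketch below assumes a standard dense transformer block (token and position embeddings, four attention projections, a 4x-wide feed-forward layer, and per-layer layer norms); the actual GLM parameterization differs slightly, so the estimate is only approximate.

```python
def approx_param_count(d_model: int, n_layers: int,
                       vocab_size: int = 30592, max_seq: int = 512) -> int:
    """Approximate dense-transformer parameter count (weights and biases)."""
    embeddings = (vocab_size + max_seq) * d_model
    attention = 4 * d_model * d_model + 4 * d_model        # Q, K, V, output projections
    feed_forward = 8 * d_model * d_model + 5 * d_model     # 4x up- and down-projection
    layer_norms = 4 * d_model                              # two layer norms per block
    return embeddings + n_layers * (attention + feed_forward + layer_norms)

# The 110M-scale model in Table 8 (768-dim, 12 layers):
print(approx_param_count(768, 12))  # 108942336, close to the listed 109,338,624
```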
wang-etal-2023-fashionklip
{F}ashion{KLIP}: Enhancing {E}-Commerce Image-Text Retrieval with Fashion Multi-Modal Conceptual Knowledge Graph
https://aclanthology.org/2023.acl-industry.16
Image-text retrieval is a core task in the multi-modal domain that has attracted a lot of attention from both the research and industry communities. Recently, the rise of vision-language pre-trained (VLP) models has greatly enhanced the performance of cross-modal retrieval. However, the fine-grained interactions between objects from different modalities are far from well established. This issue becomes more severe in the e-commerce domain, which lacks sufficient training data and fine-grained cross-modal knowledge. To alleviate the problem, this paper proposes a novel e-commerce knowledge-enhanced VLP model, FashionKLIP. We first automatically construct a multi-modal conceptual knowledge graph from large-scale e-commerce image-text data, and then inject this prior knowledge into the VLP model to align the two modalities at the conceptual level. Experiments on a public benchmark dataset demonstrate that FashionKLIP improves e-commerce image-text retrieval over state-of-the-art VLP models by a large margin. Applying the method in real industrial scenarios also confirms the feasibility and efficiency of FashionKLIP.
# Fashionklip: Enhancing E-Commerce Image-Text Retrieval With Fashion Multi-Modal Conceptual Knowledge Graph Xiaodan Wang1, Chengyu Wang2, Lei Li3, Zhixu Li1∗**, Ben Chen**2, Linbo Jin2, Jun Huang2, Yanghua Xiao1∗**, Ming Gao**3 1 Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University 2 Alibaba Group, Hangzhou, China 3 East China Normal University, Shanghai, China {xiaodanwang20,zhixuli,shawyh}@fudan.edu.cn {chengyu.wcy,chenben.cb,yuyi.jlb,huangjun.hj}@alibaba-inc.com leili@stu.ecnu.edu.cn, mgao@dase.ecnu.edu.cn ## Abstract ![0_Image_0.Png](0_Image_0.Png) Image-text retrieval is a core task in the multimodal domain, which arises a lot of attention from both research and industry communities. Recently, the booming of visionlanguage pre-trained (VLP) models has greatly enhanced the performance of cross-modal retrieval. However, the fine-grained interactions between objects from different modalities are far from well-established. This issue becomes more severe in the e-commerce domain, which lacks sufficient training data and fine-grained cross-modal knowledge. To alleviate the problem, this paper proposes a novel e-commerce knowledge-enhanced VLP model FashionKLIP. We first automatically establish a multi-modal conceptual knowledge graph from large-scale e-commerce image-text data, and then inject the prior knowledge into the VLP model to align across modalities at the conceptual level. The experiments conducted on a public benchmark dataset demonstrate that FashionKLIP effectively enhances the performance of e-commerce image-text retrieval upon stateof-the-art VLP models by a large margin. The application of the method in real industrial scenarios also proves the feasibility and efficiency of FashionKLIP. 1 ## 1 Introduction The explosive growth of multi-modal content on the Web has promoted the research of various crossmodal tasks. Image-text retrieval, which finds correlated texts (or images) for a given image (or text) (Karpathy and Fei-Fei, 2015; Faghri et al., 2017), is a popular cross-modal task with strong practical values in a wide range of industrial applications. Recently, the booming of vision-language 1All the codes and model checkpoints have been released to public in the EasyNLP framework (Wang et al., 2022). URL: https://github.com/alibaba/EasyNLP. *Corresponding author. pre-trained (VLP) models (Yao et al., 2021; Zeng et al., 2021; Li et al., 2020c) has greatly improved the representation learning across data of different modalities, leading to significant performance improvement. However, in the field of e-commerce, the imagetext retrieval task has its own challenges. Here, we suggest that image-text pairs of products have unique characteristics that are different from the general domain (such as MS-COCO (Lin et al., 2014), Flickr30k (Young et al., 2014) and Conceptual Captions (Sharma et al., 2018)), with examples shown in Figure 1. 1) While most texts in the general domain contain descriptions with complete sentence structures, descriptions or queries in e-commerce are usually composed of multiple phrases, describing product details such as materials or styles. 2) Images in the general domain usually have rich backgrounds; in contrast, a product image mainly consists of a large commodity figure in the center without a lot of background objects. These unique domain characteristics make generaldomain models difficult to be directly adopted to the image-text retrieval tasks in e-commerce. 
Recently, several domain-specific VLP models including FashionBERT (Gao et al., 2020), Kalei149 doBERT (Zhuge et al., 2021), CommerceMM (Yu et al., 2022), EI-CLIP (Ma et al., 2022) and Fashion-ViL (Han et al., 2022) are proposed based on e-commence image-text pairs, which greatly improve the performance of e-commerce imagetext retrieval. Despite the success, the fine-grained cross-modal alignment issue remains unsolved, which may result in the inaccurate matching of details between images and texts. Although some e-commerce VLP models use fine-grained information from either image perspectives (Han et al., 2022) or patch-based image classification (Gao et al., 2020; Yu et al., 2022), they are short of semantic-level alignments across modalities. Some other work (Ma et al., 2022; Zhu et al., 2021) focuses on entities in text modalities, but rarely considers cross-modal interactions. In the general domain, fine-grained interactions could be achieved with object detection (Li et al., 2020c; Tan and Bansal, 2019), scene graph parsing (Cui et al., 2021), or semantic analysis (Yu et al., 2021; Li et al., 2020b). Unfortunately, these tools lose their effectiveness in the e-commerce domain. To improve the fine-grained alignment between images and texts in e-commerce, this paper proposes an e-commerce knowledge-enhanced VLP model - **FashionKLIP**. Particularly, we first propose a data-driven strategy to construct a multimodal conceptual knowledge graph in e-commerce (called **FashionMMKG**) from a large-scale ecommerce image-text corpus, where the fashion concepts are automatically extracted and organized in the form of a semantic hierarchy, each associated with its representative images. The FashionMMKG is later incorporated as the prior crossmodal fashion knowledge in training a CLIP-style model to support e-commerce image-text retrieval. For model training, we learn the representation alignment of image-text pairs across the two modalities by contrastive learning, and further optimize the alignment at the conceptual level. The conceptual alignment is further obtained by matching the text representations with the visual prototype representations of the fashion concepts in FashionMMKG. Our contributions can be summarized as follows: - We innovatively propose a data-driven approach to construct a multi-modal conceptual knowledge graph in the e-commerce domain named FashionMMKG without human intervention. - We construct an e-commerce knowledgeenhanced VLP model called FashionKLIP, which learns conceptual-level alignments based on the prior knowledge in FashionMMKG. - We conduct experiments on a popular fashion benchmark dataset FashionGen (Rostamzadeh et al., 2018) and show that FashionKLIP outperforms state-of-the-art VLP models in the e-commerce domain. - We also apply the method to real industrial scenarios and observe significant improvements in image/text-to-product retrieval tasks. ## 2 Related Work Vision-Language Pre-training. VLP models can be categorized into single-stream models (Chen et al., 2020; Li et al., 2020a; Gan et al., 2020), which first concatenate multi-modal inputs for interactions, and dual-stream models (Jia et al., 2021; Radford et al., 2021; Yao et al., 2021; Li et al., 2020b), which obtain the representations of the image and text respectively and learn the alignment afterwards. Although single-stream models may lead to high retrieval accuracy due to the early fusion of images and texts, the inference efficiency is sacrificed to a certain extend. 
Recently, to focus more on fine-grained semantic level interactions of images and texts, some works improve the similarity strategy by calculating between the image patch and the text token (Yao et al., 2021) or leverage fine-grained image information through object detectors (Li et al., 2020c,b; Gan et al., 2020; Zeng et al., 2021). Others introduces structured scene graphs for semantic knowledge (Yu et al., 2021). Despite their success in general domain, such methods are hard to be adopted to e-commerce data. Fashion-based Retrieval. FashionBERT (Gao et al., 2020) first adopts pre-training tasks such as masking strategy to e-commerce images and texts. KaleidoBERT (Zhuge et al., 2021) extracts a series of multi-grained image patches for augmentation to guide masking strategy for fine-grained matching. CommerceMM (Yu et al., 2022) proposes pre-training tasks to align uni-modal with multi-modal features for more consistent alignment. EI-CLIP (Ma et al., 2022) defines the entityaware retrieval task from the linguistic perspective by introducing a causal model to concatenate different meta-data as e-commerce entities. Lately, ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) Fashion-ViL (Han et al., 2022) designs a flexible architecture for various downstream tasks. However, current methods still suffer from insufficient fine-grained semantic alignment, which may diminish the cross-modal understanding capability of models at semantic level. ## Methodology 3 This section introduces how FashionMMKG is constructed and how FashionKLIP incorporates conceptual-level interactions of cross-modal fashion knowledge from FashionMMKG. ## Fashionmmkg Construction 3.1 Textual Modality. Instead of building an ontologybased knowledge graph (Deng et al., 2022 ), we automatically construct FashionMMKG to alleviate the gap with real-world user queries. The construction procedures include first determining the concept set through mining massive fashion texts and then matching each concept with its corresponding images. Given a fashion dataset D { T, I } containing N image-text pairs, we first extract all the texts T . We use the NLP tool spacy 2 for sentence components analysis and part-of-speech tagging.We obtain multi-grained concept phrases by concatenating adjective modifi ers with the key word. For an input text "Heathered cotton lounge shorts in navy. Elasticized waistband with drawstring closure", we extract root concepts such as "navy", "waistband", "closure" and "heathered", as well as more detailed phrases: "cotton lounge shorts", "cotton lounge shorts in navy", "heathered cotton lounge shorts in navy", etc. Based on different conceptual hierarchical granularities of extracted results, we build up hypernym-hyponym ("is-a") relationships between concepts in the form of relation triplets by judging whether two concepts are contained by each other, such as <"cotton lounge shorts in navy", is-a, "cotton lounge shorts">. After all the relation triplets are extracted, we organize these fashion concepts in a hierarchical structure. A sub-tree with the root node "shorts" is shown in Figure 3 . The construction process of the hierarchical structure can be further implemented 2 https://spacy.io/usage/linguistic-features ![3_image_0.png](3_image_0.png) in a dynamic process. When previously unseen concepts appear, we can add these new concepts into existing hierarchical trees, as the newly updated concept "short sleeve t-shirt in white" in Figure 2. Visual Modality. 
For the visual modality, we adopt a prompt-based image retrieval method for each concept, and iteratively update the procedure in the subsequent visual-linguistic training process. Utilizing the generalization ability of a pre-trained CLIP-style model, we retrieve product images from the image set I, with the query formulated as "A photo of {*concept*}" as in (Radford et al., 2021; Yao et al., 2021; Gu et al., 2022). Based on the cosine distance of the image and text features, a naive approach is to select the top k images with the highest similarities as the concept visual prototype. The retrieval results of some concepts are shown in Figure 4. We can see that the top k images of coarse-grained concepts are usually visually diverse, while images tend to be more semantically consistent when it comes to more specific concepts. To ensure that both similarity and diversity of visual representations for each concept are considered, we slightly expand the range of image candidates (using a larger k), and employ the MMR algorithm (Carbonell and Goldstein, 1998) to improve the diversity of the selected images. It runs in an iterative process until a sufficient number of images are selected from the k candidates. Denote C as the candidate image set and S as the collection of images that have been selected for concept c. Each time, we choose an image vi by: $$\begin{split}MMR(v_{i})&=\underset{v_{i}\in C\backslash S}{argmax[\lambda Sim(c,v_{i})}\\ &\quad-(1-\lambda)\underset{v_{j}\in S}{max}\;Sim(v_{i},v_{j})]\end{split}\tag{1}$$ where $Sim(\cdot,\cdot)$ is the cosine similarity between where Sim(·, ·) is the cosine similarity between the corresponding text/image features, and λ is the coefficient to adjust the relevance and diversity of results. Here, we set λ = 0.8 by default. ## 3.2 Fashionklip Training During the model training, as shown in Figure 2, we first extract concepts from the texts. If there are new concepts, FashionMMKG is automatically expanded. For parameter optimization, FashionKLIP consists of two tasks: image-text contrastive learning (ITC) for matching images and texts globally, and concept-visual alignment learning (CVA) for conceptual-level cross-modal alignment. ITC. We train a CLIP-style model to learn the global representations of image-text pairs. For b image-text pairs in each training batch, denote L I k and L T k as the contrastive image-to-text and text-to-image matching loss, respectively. The ITC loss function can be expressed as L*IT C* = 1 2 Pbk=1(L I k + L T k ), with L T k to be defined as: $$L_{k}^{T}(x_{k}^{T},\{x_{j}^{I}\}_{j=1}^{b})=-l o g{\frac{e x p(s_{k,k}^{T})}{\sum_{j}e x p(s_{j,k}^{T})}}\quad(2)$$ where the corresponding text of an image x I k is x T k , and s T j,k is the cosine similarity between the image/text features of x I j and x T k . L I k is defined symmetrically to L T k . CVA. We further align concepts and visual prototypes from the FashionMMKG. For an input text x T k with image x I k , we obtain a multi-grained concept set Con(x T k ), where hypernym concepts from the tree are also introduced to avoid paying much attention to fine-grained concepts but ignoring the cross-modal understanding of high-level concepts. For a concept ci ∈ Con(x T k ), we denote S(ci) to be the collection of the selected similar yet diverse images to represent the visual characteristics of the concept (as described previously in Section 3.1). 
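For concreteness, the MMR-based selection of S(ci) from Section 3.1 can be sketched as follows. This is a minimal sketch that assumes the concept text and its candidate images have already been encoded into L2-normalised CLIP features, so dot products equal cosine similarities; variable names are illustrative rather than taken from the released code.

```python
import numpy as np

def select_prototype_images(concept_feat, candidate_feats, n_select=5, lam=0.8):
    """MMR selection (Eq. 1): balance relevance to the concept against
    redundancy with images that have already been selected."""
    relevance = candidate_feats @ concept_feat               # Sim(c, v_i)
    selected = [int(np.argmax(relevance))]                   # start with the most relevant image
    while len(selected) < min(n_select, len(candidate_feats)):
        redundancy = candidate_feats @ candidate_feats[selected].T   # Sim(v_i, v_j)
        mmr = lam * relevance - (1.0 - lam) * redundancy.max(axis=1)
        mmr[selected] = -np.inf                              # never re-pick an image
        selected.append(int(np.argmax(mmr)))
    return selected                                          # indices into the candidate set
```

With λ = 0.8, relevance dominates the score, so the diversity term mainly prevents near-duplicate images from filling the prototype set.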
We select q images with the highest scores with image x I k in S(ci) for each ci ∈ Con(x T k ), for the model to learn conceptual alignments. We compute the weighted contrastive loss between each ci and any conceptual image x I k˜ ∈ S(ci), together with conceptual images generated from other texts concepts within the same training batch: $$\begin{array}{l}{{L_{k}^{C T}(C o n(x_{k}^{T}),\{S(x_{j}^{T})\}_{j=1}^{b})=}}\\ {{-\frac{1}{q}\sum_{c_{i}}\sum_{x_{j}^{I}\in S(c_{i})}w(x_{k}^{I},x_{k}^{I})l o g\frac{e x p(s_{k,k}^{T})}{\sum_{j}e x p(s_{j,k}^{T})}}}\\ {{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(3)}}\end{array}$$ Note that w(x k˜ , xIk ) is the cosine similarity between concept image x I k˜ and input image x I k , used as the weight for loss calculation. This forces the representation of a concept ci similar to its conceptual images S(ci), but dis-similar to those of conceptual images from other texts. Similarly, by changing the loss function from text-to-image to image-to-text, we have the symmetric loss L CI k. $\ast(x^{I},x^{I})$ i. Thus, the loss function of CVA is expressed as: $$L_{C V A}=\frac{1}{2}\sum_{k=1}^{b}(L_{k}^{C I}+L_{k}^{C T})\qquad\qquad(4)$$ Overall Loss. The total loss function is formulated as: L = 1 2 (LIT C + L*CV A*). In addition, as the representations of images are continuously updated during model training, at the end of each epoch, we leverage Faiss (Johnson et al., 2019) to retrieve top-k images to update the visual prototype representations of the matched concepts. ## 4 Experiments We conduct comprehensive evaluations on FashionGen (Rostamzadeh et al., 2018) to show that FashionKLIP outperforms SOTA methods. ## 4.1 Implementation Details We first construct FashionMMKG with details shown in Appendix A.1. Model Training. The specific settings of models are described in Appendix A.2. For training, we conduct both domain-specific pre-training and finetuning for base and large versions of FashionKLIP. We initialize FashionKLIP from CLIP pre-trained weights and continually pre-train the model based on our in-house dataset for MMKG construction (as described previously), only using the contrastive learning process over image-text pairs. Specially, the continual pre-training process is conducted with the parameters of the image encoder fixed. Overall, we have four models: FashionKLIP-S (small), FashionKLIP-M (medium), FashionKLIP-B (base) and FashionKLIP-L (large). Benchmark Dataset. We use a widely-used benchmark dataset (i.e., FashionGen (Rostamzadeh et al., 2018)) for model evaluation. It contains 67,666 fashion items of 293,008 image-text pairs in 121 sub-categories, with 260,480 pairs for training and 32,528 for validation. Evaluation. For image-text retrieval tasks, based on a text query, we consider two settings for evaluation. 1) Strictly following (Gao et al., 2020; Zhuge et al., 2021; Ma et al., 2022; Yu et al., 2022), the model is required to pick the matched image in 101 samples, including 1 ground-truth image with 100 randomly selected images within the same product sub-category (denoted as "Sample"). 2) As some recently published works (Ma et al., 2022) also consider large-scale candidates on the entire set, each query is compared with every item in the full dataset (denoted as "Full"). The settings for image-to-text matching are likewise. Recall@1/5/10 is regarded as evaluation metrics as previous works (Gao et al., 2020; Zhuge et al., 2021; Yu et al., 2022). 
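For reference, Recall@K under the full evaluation setting can be computed directly from the query-candidate similarity matrix, as in the sketch below. It assumes that row i of the matrix corresponds to query i and that column i holds its single ground-truth item; this is a generic routine, not the exact evaluation script used for FashionGen.

```python
import numpy as np

def recall_at_k(sim: np.ndarray, ks=(1, 5, 10)) -> dict:
    """sim[i, j] is the similarity between query i and candidate j;
    the ground-truth candidate for query i is assumed to be column i."""
    order = np.argsort(-sim, axis=1)                     # candidates ranked best-first
    ranks = np.array([np.where(order[i] == i)[0][0]      # rank of the ground truth (0 = top)
                      for i in range(sim.shape[0])])
    return {f"R@{k}": float((ranks < k).mean()) for k in ks}
```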
| Method | Image-to-Text | Text-to-Image | | | | | |---------------|-----------------|-----------------|-------|-------|-------|-------| | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | FashionBERT | 23.96 | 46.31 | 52.12 | 26.75 | 46.48 | 55.74 | | KaleidoBERT | 28.00 | 60.10 | 68.40 | 33.90 | 60.50 | 68.60 | | CommerceMM | 41.60 | 64.00 | 72.80 | 39.60 | 61.50 | 72.70 | | CLIP | 36.11 | 67.81 | 80.00 | 35.32 | 65.98 | 77.84 | | EI-CLIP | 38.70 | 72.20 | 84.25 | 40.06 | 71.99 | 82.90 | | FashionKLIP-B | 60.79 | 85.67 | 91.95 | 54.00 | 78.49 | 86.28 | Table 1: Retrieval results on FashionGen (Sample). | Model | Image-to-Text | Text-to-Image | | | | | |---------------|-----------------|-----------------|-------|-------|-------|-------| | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | CLIP | 22.50 | 49.50 | 62.00 | 24.50 | 51.10 | 63.60 | | EI-CLIP | 25.70 | 54.50 | 66.80 | 28.40 | 57.10 | 69.40 | | FashionKLIP-B | 37.01 | 59.78 | 67.39 | 43.70 | 63.74 | 72.67 | Table 2: Retrieval results on FashionGen (Full). | Model | Image-to-Text | Text-to-Image | | | | | |---------------------|-----------------|-----------------|-------|-------|-------|-------| | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | FashionKLIP-S | 14.58 | 34.28 | 44.14 | 17.59 | 36.74 | 47.20 | | FashionKLIP-M 23.21 | 45.45 | 54.98 | 28.42 | 49.95 | 59.74 | | | FashionKLIP-B | 37.01 | 59.78 | 67.39 | 43.70 | 63.74 | 72.67 | | FashionKLIP-L | 47.16 | 69.27 | 75.39 | 54.60 | 75.06 | 81.39 | Table 3: Retrieval results on FashionGen (Full) of FashionKLIP under different model sizes. | Method | Eval. | Image-to-Text | Text-to-Image | | | | | |-----------------|--------------|-----------------|-----------------|-------|-------|-------|-------| | R@1 | R@5 | R@10 R@1 | R@5 | R@10 | | | | | Full Implement. | Sample 60.79 | 85.67 | 91.95 | 54.00 | 78.49 | 86.28 | | | w/o. CVA | Sample 56.70 | 84.53 | 91.65 | 51.43 | 77.44 | 85.36 | | | w/o. FDP | Sample 58.90 | 84.87 | 91.35 | 52.57 | 77.14 | 84.87 | | | Full Implement. | Full | 37.01 | 59.78 | 67.39 | 43.70 | 63.74 | 72.67 | | w/o. CVA | Full | 35.41 | 57.92 | 65.97 | 40.63 | 61.73 | 69.40 | | w/o. FDP | Full | 36.10 | 58.32 | 66.07 | 42.05 | 61.66 | 69.65 | tance of conceptual-level fashion image-text alignment, we present different variants of FashionKLIP in Table 4 for two evaluation settings. We can see from the results that both CVA and the FDP contribute to performance improvement. Although the retrieval results decrease slightly when not using FDP, the removal of CVA will harm the retrieval performance more heavily. Besides, the introduction of FDP and CVA at the same time boosts the performance as "Full Implement." shows, proving the necessity to utilize fashion data for pre-training, which helps establish a better mapping between concepts and images as prior knowledge. More importantly, the focus on fashion knowledge better guides conceptual-level interactions and brings a rise to the alignment between images and texts. ## 4.2 Experimental Results Overall Retrieval Results. We conduct both "full" and "sample" evaluation of FashionKLIP-B against existing SOTA models. In addition, we report the results of different FashionKLIP models on FashionGen using the full evaluation criteria, as shown in Table 3. As the main experimental results shown in Table 1, we can see that FashionKLIP model significantly outperforms the existing SOTA models by a large margin. 
In particular, on the R@1 metric, FashionKLIP-B even greatly surpasses the methods with multi-modal fusion encoders for more unified representation learning such as CommerceMM (Yu et al., 2022). On full evaluation results in Table 2, FashionKLIP-B shows a remarkable increase of 11-15% compared to EI-CLIP (Ma et al., 2022). For smaller settings such as FashionKLIP-M, the retrieval performance is also competitive and closer to CLIP. As the "full" setting is closer to real-world retrieval scenarios and more challenging as it aims to select from a large candidate set, the performance of FashionKLIP is significant, further proving that the framework can be generalized to wider application scenarios. Based on the experimental results on either setting, we can conclude the effects brought by fashion knowledge, and confirm that more attention to cross-modal conceptual-level interactions leads to an increase in e-commerce image-text matching. 3 Ablation Studies. To further analyze the impor- ## 5 Industrial Application In this section, we verify the effectiveness of FashionKLIP on our Alibaba global e-commerce platform. Specifically, we apply it to product search with two specific retrieval tasks including image-toproduct (I2P) and text-to-product (T2P) retrieval, as shown in Figure 5. | Model | Parameters | RT | QPS | |---------------|--------------|---------|-------| | CLIP | 151M | 61.26ms | 16.32 | | FashionKLIP-B | 151M | 60.45ms | 16.54 | | FashionKLIP-M | 91M | 42.69ms | 23.43 | Table 5: Average inference speed over 1,000 samples in terms of Response Rime (RT) and the Query Per Second (QPS) on a single GPU (NVIDIA V100). For T2P, we employ a weighted scoring function to compute the similarity score between a query text and a product (with a title and an image) as follows: Scoret2p = α∗*Score*t2t+(1−α)∗*Score*t2i, where 0 < α < 1, *Score*t2t and *Score*t2i refer to the embedding similarity score between the query text and the product title, together with the query text and the product image. Similarly, for I2P, we have Scorei2p = α∗*Score*i2t+ (1−α)∗*Score*i2i. 154 ![6_image_0.png](6_image_0.png) In total, the collected dataset contains 58,463 products (with images and titles) and 3,021 queries. $$\frac{\text{Muded}}{\text{CLIP}}$$ $$\text{ashionKLIP-N}$$ \begin{tabular}{|l|c|c c c c|} \hline \multicolumn{2}{c|}{**R620**} & **R61** & **R65** & **R610** & **R65** \\ \hline 96.59 & 49.43 & 75.46 & 84.27 & 8 \\ 96.44 & 48.00 & 75.56 & 84.96 & 9 \\ 98.91 & 52.10 & 79.96 & 89.02 & 9 \\ \hline \end{tabular} Model Image-to-Product **Text-to-Product** R@1 R@5 R@10 R@20 **R@1 R@5 R@10 R@20** CLIP 82.93 93.07 95.40 96.59 49.43 75.46 84.27 89.41 FashionKLIP-M 84.81 93.22 95.15 96.44 48.00 75.56 84.96 90.85 FashionKLIP-B 87.48 95.94 97.97 98.91 **52.10 79.96 89.02 93.77** Table 6: Retrieval results on e-commerce image-toproduct and text-to-product retrieval. We conduct zero-shot experiments for T2P and I2P on FashionKLIP-B and FashionKLIP-M and compare it with the baseline CLIP (Radford et al., 2021), as shown in Table 6. For models of the same size, we can see that FashionKLIP-B greatly outperforms CLIP on Recall@1-20 and particularly achieves an improvement of 3~5% on both tasks for R@1. For our model in a smaller size, FashionKLIP-M is still comparable, which mainly reflects on the R@1 and R@5 results of I2P task and the R@5 to R@20 results of T2P. However, the inference of FashionKLIP-M is faster. 
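The weighted scoring functions defined earlier in this section translate directly into a ranking routine. The sketch below assumes L2-normalised embeddings for the query, the product titles, and the product images, and uses α = 0.5 purely as an illustrative value, since the deployed weight is not reported.

```python
import numpy as np

def text_to_product_scores(query_text_emb, title_embs, image_embs, alpha=0.5):
    """Score_t2p = alpha * Score_t2t + (1 - alpha) * Score_t2i, with cosine
    similarities computed as dot products over normalised embeddings."""
    score_t2t = title_embs @ query_text_emb    # query text vs. product titles
    score_t2i = image_embs @ query_text_emb    # query text vs. product images
    return alpha * score_t2t + (1.0 - alpha) * score_t2i

# ranked_products = np.argsort(-text_to_product_scores(q_emb, title_embs, image_embs))
```

The image-to-product case is symmetric: replace the query text embedding with the query image embedding.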
In Table 5, taking text-to-product as an example, we report the Response Time (RT) and Query Per Second (QPS) using different text encoders to encode user queries on a single GPU (NVIDIA V100). We can see that with similar performance (CLIP and FashionKLIPM), our model has much lower RT and higher QPS. Hence, we confirm FashionKLIP's feasibility on multi-modal tasks in the industrial applications. ## 6 Conclusion And Future Work This paper proposes a novel data-driven approach to construct a multi-modal conceptual knowledge graph in e-commerce namely FashionMMKG. An e-commerce knowledge-enhanced VLP model namely FashionKLIP is then constructed by learning the conceptual-level alignments from the prior knowledge in FashionMMKG. Our empirical study shows that FashionKLIP outperforms state-of-theart VLP models in the e-commerce domain. We conduct experiments under industrial scenarios and verify its practical value in real-world applications and confirm the efficiency of FashionKLIP. In the future, we will apply the knowledge-enhanced strategy for general large-scale pre-training and bring benefit to more multi-modal tasks. ## 7 Acknowledgement This work is funded by National Key Research and Development Project (No.2020AAA0109302), National Natural Science Foundation of China (No.62072323), Shanghai Science and Technology Innovation Action Plan (No. 22511104700, 22511105902), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103), Science and Technology Commission of Shanghai Municipality Grant (No. 22511105902) and supported by Alibaba Group through Alibaba Innovative Research Program. ## References Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335–336. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104–120. Springer. Yuhao Cui, Zhou Yu, Chunqi Wang, Zhongzhou Zhao, Ji Zhang, Meng Wang, and Jun Yu. 2021. Rosita: Enhancing vision-and-language semantic alignments via cross-and intra-modal knowledge integration. In Proceedings of the 29th ACM International Conference on Multimedia, pages 797–806. Shumin Deng, Hui Chen, Zhoubo Li, Feiyu Xiong, Qiang Chen, Mosha Chen, Xiangwen Liu, Jiaoyan Chen, Jeff Z Pan, Huajun Chen, et al. 2022. Construction and applications of open business knowledge graph. *arXiv preprint arXiv:2209.15214*. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2017. Vse++: Improving visualsemantic embeddings with hard negatives. *arXiv* preprint arXiv:1707.05612. Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. *Advances in Neural Information Processing Systems*, 33:6616–6628. Dehong Gao, Linbo Jin, Ben Chen, Minghui Qiu, Peng Li, Yi Wei, Yi Hu, and Hao Wang. 2020. Fashionbert: Text and image matching with adaptive loss for cross-modal retrieval. 
In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2251–2260. Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Hang Xu, Xiaodan Liang, Wei Zhang, Xin Jiang, and Chunjing Xu. 2022. Wukong: 100 million large-scale chinese cross-modal pre-training dataset and a foundation framework. arXiv preprint arXiv:2202.06767. Xiao Han, Licheng Yu, Xiatian Zhu, Li Zhang, Yi-Zhe Song, and Tao Xiang. 2022. Fashionvil: Fashionfocused vision-and-language representation learning. arXiv preprint arXiv:2207.08150. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In *International Conference on* Machine Learning, pages 4904–4916. PMLR. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. *IEEE* Transactions on Big Data, 7(3):535–547. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128– 3137. Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020a. Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 34, pages 11336– 11344. Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. 2020b. Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning. *arXiv preprint arXiv:2012.15409*. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020c. Oscar: Objectsemantics aligned pre-training for vision-language tasks. In *European Conference on Computer Vision*, pages 121–137. Springer. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In *European conference on computer vision*, pages 740–755. Springer. Haoyu Ma, Handong Zhao, Zhe Lin, Ajinkya Kale, Zhangyang Wang, Tong Yu, Jiuxiang Gu, Sunav Choudhary, and Xiaohui Xie. 2022. Ei-clip: Entityaware interventional contrastive learning for ecommerce cross-modal retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18051–18061. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763. PMLR. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Negar Rostamzadeh, Seyedarian Hosseini, Thomas Boquet, Wojciech Stokowiec, Ying Zhang, Christian Jauvin, and Chris Pal. 2018. Fashion-gen: The generative fashion dataset and challenge. arXiv preprint arXiv:1806.08317. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565. Hao Tan and Mohit Bansal. 
2019. Lxmert: Learning cross-modality encoder representations from transformers. *arXiv preprint arXiv:1908.07490*. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Chengyu Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, and Wei Lin. 2022. Easynlp: A comprehensive and easy-to-use toolkit for natural language processing. Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. 2021. Filip: Fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. Ernie-vil: Knowledge enhanced vision-language representations through scene graphs. In *Proceedings of the AAAI Conference* on Artificial Intelligence, volume 35, pages 3208– 3216. Licheng Yu, Jun Chen, Animesh Sinha, Mengjiao Wang, Yu Chen, Tamara L Berg, and Ning Zhang. 2022. Commercemm: Large-scale commerce multimodal representation learning with omni retrieval. In *Proceedings of the 28th ACM SIGKDD Conference on* Knowledge Discovery and Data Mining, pages 4433– 4442. Yan Zeng, Xinsong Zhang, and Hang Li. 2021. Multi-grained vision language pre-training: Aligning texts with visual concepts. *arXiv preprint* arXiv:2111.08276. Yushan Zhu, Huaixiao Zhao, Wen Zhang, Ganqiang Ye, Hui Chen, Ningyu Zhang, and Huajun Chen. 2021. Knowledge perceived multi-modal pretraining in ecommerce. In *Proceedings of the 29th ACM International Conference on Multimedia*, pages 2744–2752. Mingchen Zhuge, Dehong Gao, Deng-Ping Fan, Linbo Jin, Ben Chen, Haoming Zhou, Minghui Qiu, and Ling Shao. 2021. Kaleido-bert: Vision-language pre-training on fashion domain. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12647–12657. ## A.2 Model Settings A.3 Model Training A Appendix A.1 Fashionmmkg of items such as the number image-text pairs and concepts, and the average of some attributes (avg) such as occurrence and concept length. As for the data source, we extract fashion concepts from titles of 900,000 product image-text pairs collected from our global e-commerce platform. 4 We release models with various parameter sizes for industrial applications. The specific hyperparameters of different FashionKLIP models are shown in Table 7. Image Encoder We follow Vision Transformer (ViT) (Dosovitskiy et al., 2020) closely as the image encoder and the modifications of different models lie in the number of layer normalization and the width of attention heads. The size of nonoverlapping image patches are also set to be different. FashionKLIP-L adopts the ViT-L/14 as the image encoder with 24 layers, while FashionKLIP-M uses ViT with 12-layer 512 wide in 88M parameter and the patch size is 32. 
Text Encoder We adopts a Transformer (Vaswani et al., 2017), utilizing the same architecture as described in (Radford et al., 2019) as the text encoder. For models in different sizes, we refer to (Turc et al., 2019) to set the attention width and number of attention heads of the text encoder. Model Input Images are cropped uniformly to 224 × 224 pixels before entering the model. We limit the maximum input length of the text to 77, with a vocabulary of 49,408. For a fair comparision, we utilize FashionKLIPB model to compare against other baseline models, which uses ViT-B/32 (Dosovitskiy et al., 2020) as the image encoder, and adopts a 12-layer 512 wide Text Transformer as the text encoder as (Radford et al., 2021), in 63M parameter with 8 attention heads each layer. The batch size of pre-training is 1,024 per GPU with 8 A100 GPUs (80G), for 20 epoches in total. The learning rate is 5e-5. During dataset-specific model fine-tuning, we retrieve top-20 images for each concept in FashionMMKG and then select 5 images as the visual prototype based on the proposed criteria. The batch size of fine-tuning is 32 per GPU, with a learning rate of 1e-5 on two A100 GPUs. As smaller pre-trained CLIP weights 4https://www.alibaba.com/ Full statistics of our FashionMMKG are shown in Table 8, where we give both the total numbers (cnt) | Model | Embedding | Input | Vision Transformer | Text Transformer | | | | | | | |---------------|-------------|------------|----------------------|--------------------|------------|------------|--------|-------|-------|----| | dimension | resolution | parameters | layers | width | patch size | parameters | layers | width | heads | | | FashionKLIP-L | 768 | 224 | 303M | 24 | 1024 | 14 | 124M | 12 | 768 | 12 | | FashionKLIP-B | 512 | 224 | 88M | 12 | 768 | 32 | 63M | 12 | 512 | 8 | | FashionKLIP-M | 512 | 224 | 40M | 12 | 512 | 32 | 51M | 8 | 512 | 8 | | FashionKLIP-S | 384 | 224 | 22M | 12 | 384 | 16 | 33M | 8 | 384 | 6 | Table 7: Hyperparemters of FashionKLIP in different model settings. are not available, we initialize FashionKLIP-M and FashionKLIP-S models from the pre-trained FashionKLIP-B model by truncating the weights of FashionKLIP-B to the size based on the settings of smaller models. After that, we utilize the contrastive learning process for continually pretraining on the e-commerce in-house data. The batch size during pre-training for FashionKLIP-M and FashionKLIP-S is 256 per GPU on 8 GPUs and the learning rate is 5e-5. | Item Name | Statistics | |--------------------------|-----------------| | Image-text pairs (cnt) | 900,000 | | Root-concepts (cnt) | 5,135 | | All concepts (cnt) | 99,076 | | Nodes per tree (avg) | 213.8 (1∼25600) | | Concept length (avg) | 3.4 (1∼21) | | Occurrence (avg) | 17.1 (1∼77250) | | Images per concept (avg) | 20 | | All images (cnt) | 76,964 | Table 8: Statistics of FashionMMKG.
rubin-etal-2023-entity
Entity Contrastive Learning in a Large-Scale Virtual Assistant System
https://aclanthology.org/2023.acl-industry.17
Conversational agents are typically made up of domain classifiers (DC) and intent classifiers (IC) that identify the general subject an utterance belongs to and the specific action a user wishes to achieve. In addition, named entity recognition (NER) performs per-token labeling to identify specific entities of interest in a spoken utterance. We investigate improving joint IC and NER models using entity contrastive learning, which attempts to cluster similar entities together in a learned representation space. We compare a full virtual assistant system trained using entity contrastive learning to a production baseline system that does not use contrastive learning. We present both offline results, using retrospective test sets, and live online results from an A/B test that compared the two systems. In both the offline and online settings, entity contrastive training improved overall performance against production baselines. Furthermore, we provide a detailed analysis of learned entity embeddings, including both qualitative analysis via dimensionality-reduced visualizations and quantitative analysis by computing alignment and uniformity metrics. We show that entity contrastive learning improves alignment metrics and produces well-formed embedding clusters in representation space.
Entity Contrastive Learning in a Large-Scale Virtual Assistant System Jonathan Rubin Amazon Alexa AI, USA jonrubin@amazon.com Jason Crowley Amazon Alexa AI, USA jascrowl@amazon.com George Leung∗ Novo Nordisk A/S, USA kwlg@novonordisk.com Morteza Ziyadi Amazon Alexa AI, USA mziyadi@amazon.com Maria Minakova Amazon Alexa AI, USA minakova@amazon.com ## Abstract Conversational agents are typically made up of domain (DC) and intent classifiers (IC) that identify the general subject an utterance belongs to and the specific action a user wishes to achieve. In addition, named entity recognition (NER) performs per token labeling to identify specific entities of interest in a spoken utterance. We investigate improving joint IC and NER models using entity contrastive learning that attempts to cluster similar entities together in a learned representation space. We compare a full virtual assistant system trained using entity contrastive learning to a baseline system that does not use contrastive learning. We present both offline results, using retrospective test sets, as well as online results from an A/B test that compared the two systems. In both the offline and online settings, entity contrastive training improved overall performance against baseline systems. Furthermore, we provide a detailed analysis of learned entity embeddings, including both qualitative analysis via dimensionalityreduced visualizations and quantitative analysis by computing alignment and uniformity metrics. We show that entity contrastive learning improves alignment metrics and produces wellformed embedding clusters in representation space. ## 1 Introduction Named Entity Recognition (NER) is a well-studied and fundamental task within Natural Language Understanding (NLU). The performance of a virtual assistant is heavily dependent upon how well NER tasks are handled. Mistaken slot predictions result in propagating incorrect information to downstream modules, causing sub-optimal interactions with users of the system. Contrastive learning can be used to improve NER model training. Contrastive learning attempts to cluster similar inputs closer together in their representation space and ∗Work done during the author's tenure at Amazon. ![0_image_0.png](0_image_0.png) repel dissimilar inputs apart. Token contrastive learning attracts and repels representations at the token level and was introduced in (Das et al., 2022) for improving performance in few-shot NER tasks. In this work, we apply contrastive learning to improve the performance of a ubiquitous virtual assistant system. We first train a common encoder using contrastive sentence embedding (Gao et al., 2021). Next, we incorporate *entity contrastive learning*, based on (Das et al., 2022), to better cluster similar entities together in representation space. We train and evaluate joint IC and NER models in 11 domains. For each domain, we evaluate performance with and without an additional entity contrastive loss. We further provide results of an online A/B test that measures user satisfaction and show improved performance when using entity contrastive training. Furthermore, we perform a detailed embeddings analysis to determine the effect that the entity contrastive loss function has on entity representations. In particular, we compute alignment and uniformity metrics (Wang and Isola, 2020) of learned entity representations. Finally, we also present qualitative results in the form of t-SNE visualizations comparing models with entity contrastive training vs. without. 
We show that entity contrastive learning improves alignment metrics as 159 well as clustering behavior in representation space. ## 2 Virtual Assistant System Overview Fig. 1 shows a schematic overview of a jointly trained IC and NER model that makes up part of the NLU component of a full virtual assistant system. Joint IC-NER models are trained separately for each domain. The IC-NER model encodes a sequence of (sub-word) utterance tokens, x1, x2*, . . . , x*n, through a transformer encoder architecture, [h1, h2*, . . . , h*n] = TEncoder ([x1, x2*, . . . , x*n]). In addition to sub-words that are fed to the encoder, each input token is also flagged as either being recognized or unrecognized via lookup in a large gazetteer, ϕ(·) ∈ {0, 1}, which further undergoes a separate gazetteer-based embedding, [g1, g2*, . . . , g*n] = GEmbedding ([ϕ(x1), ϕ(x2)*, . . . , ϕ*(xn)]). Gazetteer embeddings are then combined with the output embeddings of the encoder, [t1, t2*, . . . , t*n] = [h1 ⊗ g1, h2 ⊗ g2*, . . . , h*n ⊗ gn], where ⊗ is the element-wise product. These embeddings are then used by both the IC and NER model heads. ## 2.1 Joint Ic And Ner Training The intent classification head accepts a single aggregated embedding that it processes through a collection of linear layers. Its loss function is the standard categorical cross entropy loss, ℓCE = −PK k y (k)log ˆy (k), where K is the total number of intent classes per domain, y (k)is 0 or 1 ground truth for intent class, k, and yˆ (k)is the predicted value for that intent. The NER head accepts all embeddings and performs per token classification. Our NER model employs a conditional random field (CRF) to optimize the sequence labeling task: $$\begin{array}{c}{{p(s_{1}\ldots s_{n}\mid t_{1}\ldots t_{n};\mathbf{w})=}}\\ {{\exp(\mathbf{w}\cdot\Phi(t_{1}\ldots t_{n},s_{1}\ldots s_{n}))}}\\ {{\sum s_{1}^{\prime}...s_{n}^{\prime}\in{\mathcal{S}}^{m}\exp\left(\mathbf{w}\cdot\Phi(t_{1}\ldots t_{n},s_{1}^{\prime}\ldots s_{n}^{\prime})\right.}}\\ {{\ell_{C R F}=-\sum_{i=1}^{M}\log p_{i}(s_{1}\ldots s_{n}\mid t_{1}\ldots t_{n};\mathbf{w})}}\\ {{i=1}}\end{array}$$ where w are learnable weights, M is the number of utterances, Sm is the space of all possible sequences and Φ(ti..., s*i...*) is the product of selected potential functions that reflects the plausibility score of a given labeling, see (Lafferty et al., 2001) for further details. ## 2.2 Entity Contrastive Training When employing entity contrastive training, a third loss component is added to model training, as described in (Das et al., 2022). Diagonal Gaussian embeddings, N (µi, Σi), are created by passing each encoded token representation, ti, through separate networks, µi = fµ(ti) and Σi = ELU(fΣ(ti)) + (1 + ϵ). These networks respectively infer the mean and variance of the Gaussian embeddings. Here, ELU is the Exponential Linear Unit and ϵ is added for numerical stability. Gaussian embeddings map tokens to densities rather than point vectors and have been shown to better capture representation uncertainty (Vilnis and McCallum, 2015). As the KL divergence between two diagonal Gaussian distributions has a closed form solution, a pair of tokens from a collection of utterances can be evaluated as follows (note that l is the embedding dimension): DKL [N (µq, Σq)|| N (µp, Σp)] = 1 2 Tr Σ −1 p Σq − l + log |Σp| |Σq| + (µp − µq) T Σ −1 p(µp − µq) (1) Further, as the KL divergence is not symmetric, both forward and reverse directions are considered: d(*p, q*) = 12 (DKL [Nq∥Np] + DKL [Np∥Nq]). 
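Because the KL divergence between diagonal Gaussians has the closed form in Eq. (1), the symmetric distance d(p, q) can be computed directly from the inferred means and variances. The sketch below is a small PyTorch-style illustration with assumed tensor shapes (batch of tokens by embedding dimension), not the production implementation.

```python
import torch

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), as in Eq. (1)."""
    l = mu_q.shape[-1]
    trace_term = (var_q / var_p).sum(-1)                          # Tr(Sigma_p^-1 Sigma_q)
    log_det_term = torch.log(var_p).sum(-1) - torch.log(var_q).sum(-1)
    maha_term = ((mu_p - mu_q) ** 2 / var_p).sum(-1)              # Mahalanobis term
    return 0.5 * (trace_term - l + log_det_term + maha_term)

def symmetric_kl_distance(mu_p, var_p, mu_q, var_q):
    """d(p, q) = 0.5 * (KL[N_q || N_p] + KL[N_p || N_q])."""
    return 0.5 * (kl_diag_gaussians(mu_q, var_q, mu_p, var_p)
                  + kl_diag_gaussians(mu_p, var_p, mu_q, var_q))
```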
Given a collection of entities and their labels within a batch, (xq, yq) ∈ X , a set of in-batch matching entities, Xp, can be constructed by locating different tokens that share the same entity label (yp = yq, where p ̸= q). The final ℓENT loss is constructed for each entity, p, in a batch, X , as follows: **Remarks.** $$\ell_{ENT}=-\frac{1}{|\mathcal{X}|}\sum_{p\in\mathcal{X}}\log\frac{\sum_{(x_{q},y_{q})\in\mathcal{X}_{p}}\exp(-d(p,q))/\left|\mathcal{X}_{p}\right|}{\sum_{(x_{q},y_{q})\in\mathcal{X},p\neq q}\exp(-d(p,q))}\tag{2}$$ ## 2.3 Overall Loss Function The final loss function is a linear combination of the cross entropy loss of the intent classifier, the CRF loss given by the NER output and the entity contrastive loss: Loverall = w1 · ℓCE +w2 · ℓCRF +w3 · ℓENT (3) where w1 *. . . w*3 are hyper-parameters that weight each of the individual loss components. In our experiments we set each wi = 1. ↓ Lower is better Profile 1 **Profile 2** Contrastive Encoder (%) Entity Contrastive (%) Contrastive Encoder (%) Entity Domain Contrastive (%) Global -19.43 **-19.91** -17.55 **-18.19** Music -7.79 **-11.77** -8.11 **-11.71** Notifications -14.38 **-17.20** -12.37 **-16.32** Video -14.18 **-17.02** -6.23 **-9.24** Shopping **-14.29** -7.19 **-11.63** -8.08 Local Search -15.34 **-23.94** -16.42 **-25.17** General Media -17.30 **-17.63** -18.23 **-18.28** Calendar **-3.21** -0.96 **-6.76** -4.50 Books -11.93 **-17.19** -8.34 **-14.76** Cinema Show Times **-1.78** +17.08 **-13.87** +13.87 Sports -0.02 -0.02 **-12.00** -11.97 ## 2.4 Implementation Details We use a BERT (Devlin et al., 2019) style encoder with embedding dimension 768 and Gaussian embedding dimension 128. The encoder is made up of 4 hidden layers with 16 attention heads. The encoder's weights are first initialized via a task-specific model distillation procedure (Citation anonymized due to self-reference). Encoder weights are further fine-tuned using contrastive sentence embedding (Gao et al., 2021), where a single positive utterance is contrasted with 10 negative utterances. The fine-tuned encoder is common and shared between domains. Each domain's IC-NER model is then further trained for a maximum of 60 epochs and early stopping was invoked if there was no improvement in validation error rate for 4 epochs. ## 3 Experimental Results We provide experimental results in the following three settings: Offline (per domain): We compare 11 domain models trained using entity contrastive learning vs. baseline models without entity contrastive training. All domains that utilize gazetteers are included. Offline (full system): We compare a full virtual assistant system trained using entity contrastive learning against a baseline system on a collection of static test-sets. Online: We conduct an A/B test using live traffic to compare a full virtual assistant system trained using entity contrastive learning vs. a baseline model that does not. Full descriptions of each error metric used for (offline) evaluation are given in Appendix A. We provide brief summaries here: SEMER: Semantic Error Rate reflects the proportion of incorrectly labeled entities and intents. ICER: Intent Classification Error Rate measures the proportion of misclassified intents IRER: Intent Recognition Error Rate measures how often predictions contain any mistakes in either entities or intent. ## 3.1 Offline (Per Domain) Results Table 1 shows per domain relative improvement SEMER results compared to a live baseline model that doesn't utilize entity contrastive training. 
Lower results are better. Two candidate models are compared: 1) **Contrastive Encoder**, where only the encoder was pre-trained using supervised sentence contrastive learning based on (Gao et al., 2021) and 2) **Entity Contrastive**, which builds on top of 1), and further trains using the entity contrastive loss function from Section 2.2. Results are shown for two virtual assistant profiles. Profile 1 is a voice only system, whereas Profile 2 is an assistant that has a display monitor. Cinema Show Times was the only domain that did worse than the baseline when using entity contrastive training. This may be due to the relatively large number of entity types (33) and the smaller training and validation dataset size (30,311 and 3,368, respectively). Appendix B lists the total | Profile 1 | SEMER ↓ | ICER ↓ | IRER ↓ | |-----------------------------|-----------|----------|----------| | Contrastive Encoder | -10.7% | -16.2% | 7.9% | | Entity Contrastive Training | -12.7% | -17.5% | -10.7% | | Profile 2 | SEMER ↓ | ICER ↓ | IRER ↓ | | Contrastive Encoder | -9.2% | 14.6% | 6.6% | | Entity Contrastive Training | -11.0% | -16.2% | -9.0% | Drules ↓ Dstat ↓ Dstat-tail ↓ Global 0.03 1.97 1.10 Music -1.85†-0.01†-0.06† Shopping -13.09†-8.27†-8.72† Video 7.48† 1.89† 2.40† Overall -0.79†**-0.55 -0.68**† number of utterances in both training and validation datasets, as well as the number of entities labels for all 11 domains. All other domains improved against the baseline. Overall, **entity contrastive** training out-performed **contrastive encoder** training in 8 out of 11 domains for Profile 1 and 7 out of 11 domains for Profile 2. Furthermore, **entity** contrastive training achieved the best results for the top four highest-traffic domains in both profiles. ## 3.2 Offline (Full System) Results Table 2 shows overall relative improvement against a baseline system measured using SEMER, ICER and IRER metrics. Once again we compare a virtual assistant system that trained a contrastive encoder only vs. full entity contrastive training. We see that entity contrastive training leads to larger relative improvement, compared to contrastive encoder training only, for all metrics. ## 3.3 Online (A/B Test) Results The final set of results we present were collected from an A/B test using an experimentation platform to evaluate full virtual assistant systems on live customer traffic. Once again we compare a system that uses entity contrastive training against a baseline model that does not. The experimental (contrastive) and control (baseline) model each received 10% of customer traffic and the A/B test ran for two weeks. As no ground truth is available for online data, we rely on a rule based system (Drules), and a statistical model (Dstat) that infers user dissatisfaction given a virtual assistant's response. We also measure user dissatisfaction specifically for tail traffic, i.e. the bottom 40% of frequent utterances (Dstat-tail). Results are presented as relative comparisons to the baseline system in Table 3. Per-domain results are included for domains of special interest, including those with higher traffic volumes. The overall results, in the final row, evaluate the full virtual assistant system on all domains. Lower results are better. Overall the experimental contrastive model improved all user dissatisfaction metrics. Results are statistically significant at the 95% confidence level (p < 0.05) for Drules and Dstat-tail and just outside the range for Dstat (p = 0.058). 
Per-domain results show that the baseline model outperformed the experimental model for Global (however not statistically significantly, p > 0.05), and Video (p < 0.05). Further analysis showed that the experimental model likely incorrectly predicted the Video domain on device profiles that didn't have display capability. The largest improvements were observed in the Shopping domain (p < 0.05) and there are also improvements in Music (although not statistically significant, p > 0.05). ## 4 Embeddings Analysis We further provide qualitative and quantitative analysis of the entity representations learned by a baseline and contrastive model. The baseline model differs to the contrastive model only by removing the ℓENT component of the loss function in Eqn. (3). | Domain | Baseline ↓ | Contrastive ↓ | |-------------------|--------------|-----------------| | Video | 0.95 | 0.28 | | Sports | 0.41 | 0.54 | | Shopping | 0.85 | 0.14 | | Notifications | 0.84 | 0.21 | | Music | 1.03 | 0.27 | | Local Search | 1.00 | 0.30 | | Global | 0.77 | 0.28 | | General Media | 0.89 | 0.18 | | Cinema Show Times | 0.71 | 0.28 | | Calendar | 0.83 | 0.15 | | Books | 0.85 | 0.15 | | Average | 0.83 | 0.25 | Table 4: Alignment scores per domain comparing baseline vs. contrastive NER learning. ↓ Lower is better. ## 4.1 Qualitative: Dimensionality Reduction Visualization For each domain, we derived t-distributed stochastic neighbor embedding (t-SNE) (Van der Maaten and Hinton, 2008) plots to visualize entity representations learned by the baseline and contrastive model. Embeddings were pulled from a randomized subset of validation data. Dimensionality reduction took place on the µi representations learned by each model, R 128 → R 2. Fig. 2 shows a comparison between the baseline and contrastive model for four domains (a) Calendar, (b) Music, (c) Notifications and (d) Video. Appendix D displays t-SNE plots for the remaining domains. Looking at Fig. 2(a) for the Calendar domain, we can see that points for the most frequent entity type (Date) don't appear to cluster at all and are quite dispersed in the t-SNE plot on the left (baseline). However, in the plot on the right (contrastive) we see a well-formed cluster for Date in the top right. We also notice in Fig. 2(b) for the Music domain, the most frequent entity type (SongName) exhibits some clustering behavior in the baseline, but forms multiple distinct clusters in the contrastive model. We can also easily see points that did not have the SongName label within these clusters. In particular, there are many overlapping points for AlbumName, ArtistName and Lyrics. AlbumName and Lyrics can likely overlap with SongName and cause confusion for the model. Given that the data-set is very large, annotation errors are also frequent and it is possible these overlapping points could potentially identify errors in the labeling process. ## 4.2 Quantitative: Alignment And Uniformity We further analyze representation quality using the quantitative metrics of alignment and uniformity introduced in (Wang and Isola, 2020). The alignment metric assumes a distribution of positive pairs and calculates expected distance between representations of these pairs. Positive pairs should lie closer together in representation space and produce lower values. Conversely, uniformity measures how well learned representations are distributed uniformly on a unit hyper-sphere for instances from all classes. 
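As a reference for how these two metrics are computed, a short NumPy sketch follows; the function names are ours, alignment is written so that it accepts either positive pairs (the original definition) or all pairs of tokens sharing the same entity label (the variant defined below), and uniformity is taken over pairwise distances between the sampled embeddings.

```python
import numpy as np

def alignment(embeddings, pairs, alpha=2):
    """Expected ||f(x) - f(y)||_2^alpha over a set of index pairs."""
    xs = embeddings[[i for i, _ in pairs]]
    ys = embeddings[[j for _, j in pairs]]
    return np.mean(np.linalg.norm(xs - ys, axis=-1) ** alpha)

def uniformity(embeddings, t=2):
    """log E[exp(-t ||f(x) - f(y)||_2^2)] over pairs of sampled instances."""
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)
    n = embeddings.shape[0]
    return np.log(np.mean(np.exp(-t * sq_dist[~np.eye(n, dtype=bool)])))

def same_class_pairs(labels):
    """All non self-referential index pairs whose tokens share an entity label."""
    return [(i, j) for i in range(len(labels)) for j in range(len(labels))
            if i != j and labels[i] == labels[j]]
```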
Given that we do not rely on positive pairs, but instead wish to align token representations belonging to the same class (i.e. has the same entity label), we slightly alter the original alignment metric to consider all non self-referential, pairwise comparisons between instances that belong to the same class, pcls|x̸=y. The uniformity metric remains the same as in (Wang and Isola, 2020). We set hyperparameters as follows, α = 2 and t = 2. $$\mathcal{M}_{\text{align}}(f;\alpha)=\mathbb{E}_{x,y\sim p_{cls}|x\neq y}\left[||f(x)-f(y)||_{2}^{\alpha}\right]\tag{4}$$ $$\mathcal{M}_{\text{uniform}}\left(f;t\right)=\log\mathbb{E}_{x,y\sim p_{\text{data}}}\left[e^{-t||f(x)-f(y)||_{2}^{2}}\right]\tag{5}$$ Table 4 shows alignment values per domain. The values in Table 4 are computed by taking the average alignment scores for all entities within each domain. Alignment values for each entity type are given in Appendix C. A weighted average is taken that considers the number of tokens with a given entity label. Lower values imply better alignment between representations within the same class. We can see in Table 4 that all domains have lower alignment values with entity contrastive training, except for the Sports domain. The Sports domain has the least amount of training data and entity types (see Appendix B), which may be the reason that entity contrastive training does not result in improvement over the baseline model. Finally, we compute uniformity metrics. To reduce computational cost, we randomly sample 10% of the entity embeddings. The uniformity scores for the baseline and contrastive models were −3.54 and −3.11, respectively, indicating that the baseline model produced embeddings that are likely more uniformly distributed than the contrastive model. ![5_image_0.png](5_image_0.png) ## 5 Related Work Contrastive learning has been applied with tremendous success over the last few years in tasks that process data such as audio (Oord et al., 2018), vision (Chen et al., 2020) and natural language (Fang et al., 2020). Contrastive losses, such as InfoNCE (Oord et al., 2018; Hénaff et al., 2019), build on the original idea of noise contrastive estimation (Gutmann and Hyvärinen, 2010; Mnih and Kavukcuoglu, 2013) that learns a data distribution by comparing it against a chosen noise distribution. Contrastive representation learning can either be unsupervised (Chen et al., 2020; He et al., 2020) or supervised (Khosla et al., 2020). Unsupervised or self-supervised approaches have relied upon techniques such as data augmentation (Chen et al., 2020; He et al., 2020) and future self prediction (Oord et al., 2018) as a way of ignoring superfluous information to learn better class representations. Supervised approaches (Khosla et al., 2020) incorporate class label information during learning and were introduced to avoid problems with in-batch false positives. In natural language tasks, contrastive learning approaches based on data augmentation techniques have not fared as well compared to their vision counterparts. SimCSE (Gao et al., 2021) introduced both unsupervised and supervised approaches for learning contrastive sentence embeddings. The unsupervised approach relies solely on varying dropout masks to achieve different representations of the same input sentence, whereas the supervised task uses examples from natural language inference datasets (Conneau et al., 2017). 
Rather than learning sentence embeddings, (Das et al., 2022) introduced token contrastive learning in the context of improving few-shot learning. Our work does not focus on few-shot learning, but instead seeks to evaluate joint IC-NER models trained with entity contrastive learning for the purpose of improving a large-scale virtual assistant system. ## 6 Conclusion We presented jointly trained IC and NER models augmented with entity contrastive learning via an additional loss function that attempts to pull similar entities together in representation space, and repel dissimilar entities apart. We provided a comprehensive evaluation of entity contrastive learning within a full virtual assistant system by comparing to baselines in both offline and online (A/B test) experiments. Results show that employing entity contrastive learning improves overall error and alignment metrics and produces well-formed embedding clusters in representation space. ## References Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In *Proceedings of* the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics. Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER: Few-shot named entity recognition via contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6338–6353, Dublin, Ireland. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, and Pengtao Xie. 2020. Cert: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Michael Gutmann and Aapo Hyvärinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 297–304. JMLR Workshop and Conference Proceedings. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF conference on computer vision* and pattern recognition, pages 9729–9738. Olivier J. Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aäron van den Oord. 2019. Data-efficient image recognition with contrastive predictive coding. *CoRR*, abs/1905.09272. 
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. *Advances in Neural* Information Processing Systems, 33:18661–18673. John D. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In *International Conference on Machine Learning*. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. *Advances in neural information processing systems*, 26. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. *Journal of machine* learning research, 9(11). Luke Vilnis and Andrew McCallum. 2015. Word representations via gaussian embedding. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929–9939. PMLR. ## A Performance (Error) Metrics The error metrics used to assess offline performance are as follows: SEMER: Semantic Error Rate evaluates slot-filling and intent classification performance jointly, as follows: \# Deletion + \# Insertion + \# Substitution \# Correct + \# Deletion + \# Substitution Deletion occurs when the slot name is present in ground truth but not in the prediction. Insertion is the opposite when extra slot names are included in the prediction. Substitution errors occur when predictions do match ground truth slot labels, but for an incorrect slot value. Correct slots are when both the slot name and slot value match. Intent classification errors are also counted as substitution errors above. ICER: Intent Classification Error Rate measures the rate at which the intent of utterances are incorrectly predicted: ICER = \# Incorrect Intents \# Total Utterances 165 IRER: Intent Recognition Error Rate measures how often predictions contain any mistake in either slots or intent. $$\mathrm{IRER}={\frac{\#\ \mathrm{Inch}}{\#}}$$ $$\frac{\#\text{Incorrect(Slot or Intent)}}{\#\text{Total Outcomes}}$$ ## B Dataset Sizes Table 5 shows the dataset sizes (training and validation) for 11 gazetteer based domains. Also depicted are the total number of entities per domain. Domains are listed in descending order based on number of utterances. Global is the largest domain and Sports is the smallest. ## C Alignment Tables Per Domain Alignment scores per slot are shown for each domain in Tables 6 to 16. The baseline model includes no entity contrastive training. Results are restricted to the top ten most frequent slots due to display purposes. The missing remaining slots exhibit similar trends to those shown. Size refers to the number of tokens with a given slot label and Score is the alignment score. Lower is better. The final column shows relative change as a percentage. Negative values show improvement of the contrastive model over the baseline. ## D T-Sne Visualizations Figs. 3 and 4 depict the remaining t-SNE plots not shown in the main body of the text. Once again, for each domain, the baseline embeddings are on the left and the contrastive model embeddings are on the right. 
As in the figures in the main body, nonentity (O) tokens are removed as they are not subject to contrastive training and legend entries are restricted to the top 20 most frequent slot labels with counts shown in parentheses. Alignment scores are also shown. As with the figures in the main body, we see improved clustering behavior in the contrastive embeddings compared to the baseline embeddings in all domains, except for the Sports domain, which is quite sparse. It is also possible that the perplexity value (which depends on dataset size) is not optimal for the sports domain due to the smaller dataset size. | Domain | Training instances | Validation instances | Number of entities | |-------------------|----------------------|------------------------|----------------------| | Global | 3,165,309 | 351,702 | 117 | | Music | 2,160,488 | 240,055 | 119 | | Notifications | 818,963 | 90,996 | 62 | | Video | 686,520 | 76,280 | 63 | | Shopping | 602,748 | 66,972 | 54 | | Local Search | 294,098 | 32,678 | 75 | | General Media | 167,776 | 18,642 | 30 | | Calendar | 137,313 | 15,258 | 46 | | Books | 125,139 | 13,905 | 50 | | Cinema Show Times | 30,311 | 3,368 | 33 | | Sports | 21,347 | 2,372 | 13 | | Total | 8,210,012 | 912,228 | 662 | Calendar Baseline Contrastive Slot Size Score ↓ Size Score ↓ % change Date 22860 1.08 27165 0.23 -78.21 EventName 21047 0.99 24815 0.14 -85.61 Time 12304 0.72 14510 0.06 -91.89 DataSource 8704 0.37 10415 0.01 -96.92 OrigTime 4087 0.20 4774 0.03 -83.13 EventType 3571 0.77 4212 0.17 -77.91 OrigDate 2096 0.62 2434 0.06 -89.63 ActiveUserTrigger 2017 0.79 2380 0.18 -77.19 VisualModeTrigger 743 0.61 903 0.10 -82.95 CalendarName 674 1.03 795 0.56 -45.65 | Music | Baseline | Contrastive | | | | |--------------|------------|---------------|-------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | SongName | 20743 | 1.10 | 20637 | 0.26 | -76.18 | | ArtistName | 16567 | 0.98 | 16469 | 0.21 | -78.55 | | MediaType | 12730 | 1.08 | 12690 | 0.14 | -87.52 | | GenreName | 7119 | 1.08 | 7080 | 0.43 | -59.86 | | AlbumName | 4709 | 1.13 | 4696 | 0.53 | -53.33 | | AppName | 2015 | 1.21 | 2013 | 0.28 | -77.14 | | PauseTrigger | 2000 | 0.55 | 1994 | 0.10 | -82.58 | | PlaylistName | 1749 | 1.15 | 1744 | 0.46 | -59.87 | | OnType | 1459 | 0.84 | 1455 | 0.02 | -98.10 | | Time | 1335 | 0.74 | 1331 | 0.03 | -96.18 | | Notifications | Baseline | Contrastive | | | | |--------------------|------------|---------------|-------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | OnType | 42129 | 0.92 | 41911 | 0.19 | -79.61 | | Time | 13185 | 0.76 | 13159 | 0.18 | -76.72 | | Duration | 11145 | 0.72 | 11082 | 0.19 | -73.28 | | NotificationLabel | 4706 | 0.96 | 4680 | 0.18 | -81.83 | | Date | 2989 | 0.86 | 2975 | 0.57 | -33.90 | | NotificationStatus | 1303 | 0.72 | 1303 | 0.01 | -98.13 | | EndTime | 1112 | 0.53 | 1107 | 0.21 | -60.03 | | ActiveUserTrigger | 866 | 0.45 | 862 | 0.02 | -96.29 | | Quantifier | 601 | 0.66 | 599 | 0.30 | -54.46 | | EndDate | 418 | 0.39 | 416 | 0.13 | -66.83 | Table 8: Alignment scores per slot for Notifications domain - baseline vs. contrastive. Results are restricted to the top ten most frequent slots due to display purposes. 
Video Baseline Contrastive Slot Size Score ↓ Size Score ↓ % change VideoName 44095 0.94 43840 0.27 -71.26 ChannelName 8298 0.87 8268 0.16 -81.68 GenreName 7118 1.19 7103 0.63 -47.49 MediaType 5986 1.02 5945 0.09 -90.99 AppName 5151 0.96 5146 0.03 -96.84 VisualModeTrigger 3906 0.88 3891 0.11 -87.14 CharacterName 3302 0.86 3292 0.65 -24.85 ActorName 1968 0.87 1954 0.24 -72.31 PersonName 1841 0.98 1838 0.54 -44.64 Device 1114 0.95 1111 0.28 -70.81 | Shopping | Baseline | Contrastive | | | | |---------------------|------------|---------------|-------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | ItemName | 51246 | 0.91 | 53039 | 0.10 | -88.83 | | ShoppingListType | 5685 | 0.71 | 5876 | 0.28 | -61.31 | | ProductSortType | 4632 | 0.91 | 4797 | 0.30 | -66.96 | | VisualModeTrigger | 2430 | 0.46 | 2527 | 0.04 | -92.11 | | ShoppingServiceName | 1179 | 0.70 | 1210 | 0.06 | -91.54 | | RecommendTrigger | 1118 | 0.41 | 1158 | 0.23 | -44.14 | | DealType | 628 | 0.54 | 650 | 0.25 | -54.14 | | Anaphor | 471 | 0.64 | 481 | 0.39 | -38.35 | | Quantifier | 451 | 0.70 | 470 | 0.27 | -61.93 | | PurchaseDate | 388 | 0.64 | 450 | 0.47 | -27.03 | | Local Search | Baseline | Contrastive | | | | |----------------------|------------|---------------|-------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | PlaceName | 42865 | 1.08 | 39231 | 0.22 | -79.71 | | PlaceType | 10667 | 1.12 | 9756 | 0.35 | -68.90 | | DestinationPlaceName | 10162 | 0.92 | 9315 | 0.19 | -79.43 | | LocationSortType | 6723 | 1.11 | 6196 | 0.24 | -78.34 | | City | 6082 | 1.09 | 5594 | 0.30 | -72.69 | | Location | 3745 | 1.06 | 3402 | 0.77 | -27.54 | | DestinationLocation | 3575 | 0.93 | 3233 | 0.30 | -67.28 | | Anaphor | 2611 | 0.98 | 2400 | 0.71 | -27.91 | | Date | 2292 | 0.91 | 2137 | 0.36 | -60.37 | | PlaceFeature | 2282 | 1.20 | 1999 | 0.64 | -46.65 | | General Media | Baseline | Contrastive | | | | |-------------------|------------|---------------|-------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | AppName | 48421 | 0.91 | 48231 | 0.17 | -81.31 | | MediaType | 4697 | 0.88 | 4685 | 0.14 | -84.14 | | VisualModeTrigger | 1548 | 0.55 | 1548 | 0.05 | -90.17 | | GenreName | 657 | 1.04 | 651 | 0.91 | -12.72 | | SettingValue | 560 | 0.68 | 555 | 0.43 | -36.53 | | SortType | 392 | 0.69 | 392 | 0.12 | -83.22 | | Anaphor | 219 | 0.85 | 219 | 0.44 | -48.91 | | DeviceBrand | 218 | 0.75 | 218 | 0.20 | -72.92 | | ListPosition | 192 | 0.73 | 192 | 0.24 | -67.19 | | DeviceType | 89 | 0.71 | 89 | 0.49 | -31.13 | | Global | Baseline | Contrastive | | | | |-------------------|------------|---------------|------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | Setting | 3591 | 0.82 | 2819 | 0.12 | -85.10 | | MediaType | 2536 | 0.78 | 2091 | 0.11 | -85.87 | | DeviceType | 2125 | 0.80 | 1765 | 0.22 | -71.97 | | DeviceBrand | 1971 | 0.70 | 1597 | 0.06 | -91.48 | | ChannelName | 1230 | 0.73 | 1072 | 0.15 | -80.21 | | SearchContent | 1038 | 0.84 | 858 | 0.21 | -75.25 | | SettingValue | 927 | 0.85 | 751 | 0.54 | -36.92 | | DeviceLocation | 558 | 0.81 | 471 | 0.38 | -52.94 | | VisualModeTrigger | 544 | 0.66 | 420 | 0.04 | -93.17 | | ServiceName | 535 | 0.82 | 411 | 0.33 | -60.40 | | Books | Baseline | Contrastive | | | | |-------------------|------------|---------------|-------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | BookName | 34890 | 1.02 | 37703 | 0.16 | -84.24 | | MediaType | 23127 | 0.80 | 24870 | 0.08 | 
-89.76 | | ServiceName | 18965 | 0.77 | 20496 | 0.04 | -94.93 | | AuthorName | 4714 | 0.94 | 5075 | 0.42 | -54.84 | | ActiveUserTrigger | 3837 | 0.44 | 4170 | 0.03 | -93.59 | | GenreName | 3056 | 0.97 | 3313 | 0.61 | -36.46 | | SortType | 2994 | 0.56 | 3271 | 0.20 | -64.82 | | SectionType | 2078 | 0.90 | 2321 | 0.04 | -95.76 | | Narrator | 2057 | 0.60 | 2265 | 0.29 | -51.25 | | Anaphor | 1728 | 1.01 | 1861 | 0.29 | -71.08 | | Cinema Show Times | Baseline | Contrastive | | | | |---------------------|------------|---------------|------|---------|----------| | Slot | Size | Score ↓ | Size | Score ↓ | % change | | MovieTitle | 25800 | 1.06 | 7134 | 0.36 | -66.04 | | EndTime | 18795 | 0.19 | 5206 | 0.02 | -90.80 | | MediaType | 17210 | 0.66 | 4749 | 0.17 | -73.88 | | PlaceName | 9413 | 0.97 | 2594 | 0.31 | -67.74 | | Time | 8733 | 0.25 | 2413 | 0.04 | -82.28 | | Date | 4810 | 0.88 | 1314 | 0.71 | -19.28 | | PlaceType | 2679 | 0.88 | 751 | 0.23 | -73.53 | | SortType | 2190 | 0.92 | 602 | 0.30 | -67.00 | | City | 1828 | 0.86 | 502 | 0.96 | 10.82 | | PostalCode | 1708 | 0.69 | 478 | 0.07 | -89.20 | Sports Baseline Contrastive Slot Size Score ↓ Size Score ↓ % change Date 739 0.37 721 0.57 55.33 SortType 130 0.62 124 0.43 -31.23 VisualModeTrigger 61 0.42 58 0.63 52.68 SportsRole 19 0.61 19 0.82 34.47 Time 18 0.37 18 0.02 -93.37 Sport 10 0.49 10 0.01 -98.04 League 4 0.04 4 0.00 -98.81 Anaphor 2 0.03 2 0.00 -97.17 Table 16: Alignment scores per slot for Sports domain - baseline vs. contrastive. ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png)
cheng-etal-2023-tab
Tab-Cleaner: Weakly Supervised Tabular Data Cleaning via Pre-training for {E}-commerce Catalog
https://aclanthology.org/2023.acl-industry.18
Product catalogs, conceptually in the form of text-rich tables, are self-reported by individual retailers and thus inevitably contain noisy facts. Verifying such textual attributes in product catalogs is essential to improve their reliability. However, popular methods for processing free-text content, such as pre-trained language models, are not particularly effective on structured tabular data since they are typically trained on free-form natural language texts. In this paper, we present Tab-Cleaner, a model designed to handle error detection over text-rich tabular data following a pre-training / fine-tuning paradigm. We train Tab-Cleaner on a real-world Amazon Product Catalog table w.r.t millions of products and show improvements over state-of-the-art methods by 16% on PR AUC over attribute applicability classification task and by 11% on PR AUC over attribute value validation task.
# Tab-Cleaner: Weakly Supervised Tabular Data Cleaning Via Pre-Training For E-Commerce Catalog Kewei Cheng1, Xian Li2, Zhengyang Wang2, Chenwei Zhang2**, Binxuan Huang**2 Yifan Ethan Xu3, Xin Luna Dong3 **and Yizhou Sun**1 1University of California, Los Angeles 2Amazon, 3Meta AI {viviancheng,yzsun}@cs.ucla.edu, {xianlee, zhengywa, cwzhang, binxuan}@amazon.com Ethan.yifanxu@gmail.com, lunadong@fb.com ## Abstract Product catalogs, conceptually in the form of text-rich tables, are self-reported by individual retailers and thus inevitably contain noisy facts. Verifying such textual attributes in product catalogs is essential to improve their reliability. However, popular methods for processing free-text content, such as pre-trained language models, are not particularly effective on structured tabular data since they are typically trained on free-form natural language texts. In this paper, we present Tab-Cleaner, a model designed to handle error detection over text-rich tabular data following a pre-training / fine-tuning paradigm. We train Tab-Cleaner on a real-world Amazon Product Catalog table w.r.t millions of products and show improvements over state-of-the-art methods by 16% on PR AUC over attribute applicability classification task and by 11% on PR AUC over attribute value validation task. ## 1 Introduction Product catalogs are widely used by E-commerce websites to organize product information (Dong et al., 2020). They can be conceptualized as wide tables where each row corresponds to a product and each column corresponds to an attribute (Table 1). Most of the product catalog data are self-reported by individual retailers and thus inevitably contain various types of errors (Dong et al., 2020). It is critical to clean the data to avoid cascading errors harming downstream applications (Pujara et al., 2017; Chu et al., 2016). Due to product catalogs' wide tabular format and rich textual content, attribute cleaning poses a number of challenges as we outline below. C1: Product catalogs are structured tables with unstructured textual values. Errors in product catalogs are indicated by column-wise, rowwise, and table-wise inconsistencies (Table 1). While common pre-trained language models (LMs) (Rajpurkar et al., 2016; Clark et al., 2020; Beltagy et al., 2019; Martin et al., 2019) are effective in processing free texts, they are not suited to capture tabular structures. Although recent works (Yin et al., 2020; Herzig et al., 2020) have adapted Transformers to jointly query tabular and textual data , their goal is information extraction (e.g. answering SQL/free text questions) or tabular structure prediction, rather than error detection. C2: Attributes in product catalogs are strongly correlated with each other. For example, in Table 1, *"Cheddar"* is a valid flavor on its own, but contradicts its ingredient column "Cayenne Pepper, Paprika Extract, Dehydrated Spices". Such correlation renders anomaly detection methods focusing only on value distributions in a single column ineffective. C3: Product catalogs are extraordinarily wide. Considering that some attributes of product catalogs may be text-heavy (e.g., product titles and descriptions are often very long.), a superlong sequence will arise from concatenating all textual attributes of a specific product. Existing table representation models (Yin et al., 2020; Du et al., 2021) restrict input sequence length to a certain budget (e.g., 512) by truncation, which will inevitably cause information loss. 
To address the above challenges, we present TabCleaner, a transformer model with a hierarchicalattention mechanism and trained with the pretraining / finetuning paradigm to facilitate data cleaning over text-rich tabular data for Ecommerce catalog. Our proposed model is generic. It applies not only to the product domain but also excels in other domains which involve text-rich tabular data. In summary, this paper makes the following contributions. - We propose a tabular structure-aware pre-training / fine-tuning paradigm to enable a Transformerbased model to process text-rich tabular data. - We propose a novel hierarchical attention 172 ![1_image_0.png](1_image_0.png) mechanism to capture attribute-level correlation. - The hierarchical attention also enables a sparse attention pattern to reduce memory consumption and speed-up training, which allows us to cope with long sequences. - We train Tab-Cleaner on a real-world Amazon Product Catalog w.r.t millions of products and show that we can improve over SOTA methods by 16% on PR AUC over attribute applicability classification task and by 11% on PR AUC over attribute value validation task. ## 2 Problem Definition Given a product catalog table T, each row corresponds to a product (pi) and each column corresponds to an attribute (aj ). Cell Tij is the value of attribute j of product i containing a list of tokens. Attributes in product catalog data can be broadly divided into two classes: - **Context attributes** (Acontext), which are usually long texts that describe general information of a product (e.g., title, product description). - **Feature attributes** (Afeature), which are usually short texts that describe a specific attribute about a product (e.g., color, size, flavor, scent). We formally define the problem of data cleaning over the product catalog table as follows: Given: a product catalog table T, Identify: incorrect cells about feature attributes {Tij}pi∈P,aj∈Afeature ## 3 Tab-Cleaner Framework Since manual annotation of error data is costly and labor-intensive to obtain on E-commerce websites, we follow the pre-training/finetuning paradigm to alleviate the need for large-scale labeled data for data cleaning. Tab-Cleaner is first pre-trained on ![1_image_1.png](1_image_1.png) an unlabeled product catalog corpus with tabular structure-aware pre-training objectives carefully designed to capture the tabular structure. Then, Tab-Cleaner fine-tunes the model using manually curated labeled data. ## 3.1 Tab-Cleaner Architecture Transformer-based models cannot be directly applied to tabular data. To flatten each row in the input table into a sequence, we first prepend each attribute value with a [COL] token and its column name, then concatenate them into a flat sequence. For example, Tab-Cleaner flattens R1 in Table 1 as follows. [CLS] [COL] Product title: Brand A Tortilla Chips Spicy Queso,6 - 2 oz bags [COL] Product Category: chips-and-crisps [COL] Flavor: Spicy Queso [COL] Ingredient: Ground Corn, Chipotle Pepper Powder, Paprika Extract, Spices [COL] Size: 6 - 2 oz bags The input structure is designed to capture both attribute and product representations. Attribute (Cell-level) Representation: The first token of every cell is always a special token [COL]. The final hidden state corresponding to token [COL] is used to represent a cell. Product (Row-level) Representation: Each ![2_image_0.png](2_image_0.png) row in the product catalog corresponds to a specific product. The first token of a row (i.e.,[CLS]) is used to represent a product. 
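A minimal sketch of this row linearization is shown below, assuming each row is available as an ordered mapping from column name to cell text; the function name and the dictionary-based interface are illustrative.

```python
def flatten_row(row, cls_token="[CLS]", col_token="[COL]"):
    """Flatten one catalog row into the linearized input sequence described above:
    [CLS] [COL] col_1: value_1 [COL] col_2: value_2 ..."""
    parts = [cls_token]
    for column, value in row.items():
        parts.append(f"{col_token} {column}: {value}")
    return " ".join(parts)

# Toy usage mirroring R1 of Table 1.
row = {
    "Product title": "Brand A Tortilla Chips Spicy Queso,6 - 2 oz bags",
    "Product Category": "chips-and-crisps",
    "Flavor": "Spicy Queso",
    "Ingredient": "Ground Corn, Chipotle Pepper Powder, Paprika Extract, Spices",
    "Size": "6 - 2 oz bags",
}
print(flatten_row(row))
```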
Tab-Cleaner is implemented by extending DistillBERT's architecture (Sanh et al., 2019) with additional embeddings to capture tabular structure. The detailed architecture is given in Appendix A. ## 3.2 Hierarchical Attention Mechanism Long sequences from concatenated product attributes are challenging for Transformer-based models to process due to quadratic scaling in the full self-attention operation. However, full attention to the entire content is not necessary for modeling structured tables. Based on this insight, we propose a hierarchical attention mechanism that models the tabular structure through a two-level architecture, first encoding all cells on the basis of their tokens (Fig. 1 (a)), then encoding the entire rows on the basis of all their cells (Fig. 1 (b)). In this way, we avoid full attention calculation, thereby greatly reducing the memory and computation in need. The detailed implementation is given in Appendix B. ## 3.2.1 Cell Encoding A cell is the smallest unit to form a table. The list of tokens within each cell expresses semantics independently of the rest content in a row. A local window that covers only the target cell is enough to learn its semantics. Motivated by such observation, our local attention pattern employs a flexible-size window to include only the tokens of the target cell to calculate its representation as shown in Fig. 1 (a). Attributes in a product catalog fall into two classes: (1) feature attributes, which are usually short and can be easily covered by a small window; (2) context attributes, which can contain thousands of tokens. A window with a limited size cannot cover a context attribute. ## Context Attribute Representation Learning A straightforward solution may partition a context attribute into smaller sequences (Fig. 6 (a)). Such partitioning could result in information loss. To address this issue, we propose a novel *local + global* attention to learn the context attribute representation (Fig. 6 (d)). Local Attention Most information about a token can be derived from its surrounding tokens. We define a sliding window attention to capture local information around each token. Given a fixed window size w, each token attends to 1/2w its local neighboring tokens on each side (Fig. 6 (b)). Global Attention Although the local attention shows great effectiveness in capturing local context as demonstrated in Longformer (Beltagy et al., 2020)), it cannot aggregate the global information into the token [COL]. The [COL] has to attend all tokens across the cell to collect the global information. To reduce the computational cost, we propose a "dilated attention" on [COL] where the window has gaps of size dilation d (Fig. 6 (c)). Note that the "dilated attention" operation is symmetric. All tokens attended by [COL] also attend [COL] tokens. Assuming we set dilation d equal to the window size w. Given a sequence with length as L, we can learn [COL] by attending only ceil( √L) tokens. We discussed the expressiveness of local + global attention in Appendix C and showed that it is as expressive as full attention. ## 3.2.2 Row Encoding Attributes of products are usually correlated with each other, which is useful to identify incorrect attribute values. For example, R1 in Table 1 indicates a strong correlation between the ingredient "pepper" and the flavor "spicy". 
To capture the underlying correlations among attributes, the hierarchical attention mechanism focuses on learning the attention among [COL] tokens (i.e., attribute representation) and [CLS] tokens (i.e., product rep- (a) Original Table $\begin{array}{|l|}\hline\text{Flavor}&\text{Color}\\\hline\text{Spicy}&\text{Quese}\\\hline\text{Green}&\text{Tea}\\\hline\end{array}$ + = $\frac{1}{2}$ (b) Swap Cells on the Same Row Flavor Color R1 Spicy Queso - R5 Green Green Tea (c) Swap Cells on the Same Column Flavor Color R1 Green Tea - R5 Spicy Queso Green ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) Table 2: The different cell corruption strategies. We highlight the swapped attributes in red. resentation) at its second level as shown in Fig. 1 (b). The learned attention cannot only capture the interaction among attributes but also aggregate the entire content of a row into the special token [CLS]. ## 3.2.3 Conditional Encodings Of Feature Attributes Over Context Attributes Context attributes contain useful information about products for verifying the correctness of feature attributes. However, given the hierarchical attention mechanism, feature attributes are learned independently from context attributes. To enable conditional encodings of feature attributes over context attributes, we further improve cell encoding for feature attributes as discussed in Appendix D. ## 3.3 Pre-Training Objectives In order to pre-train Tab-Cleaner using unlabeled product catalog tabular corpus, we adopt the Masked Language Model (MLM) objective for learning token-level representations. In addition, we also propose several different objectives for tabular structure representation learning (e.g., celllevel and row-level representations). Objective for learning token level representations: We apply the standard Masked Language Modeling (MLM) objective to learn token-level representations, with a masking rate of 15%. Since MLM lacks the ability to decompose the tabular structure, we also propose two different objectives for tabular structure representation learning: Objective for learning cell-level representation: Essentially, we corrupt a certain percentage of cells and then learn a classifier to decide if the cell has been corrupted. This objective enables the model to identify incorrect attributes. We use two different corruption strategies to generate corrupted cells as shown in Table 2. - **Swap cells on the same row:** randomly swap two attributes of the same product, e.g. switch the attribute value of color and flavor of R5 to construct corrupted cells (Table 2(b)). - **Swap cells on the same column:** randomly swap an attribute of a product with the same attribute from another product, e.g. switch the flavor attributes of R1 and R5 (Table 2(c)). A binary classifier is placed over the final hidden state corresponding to the token [COL] to decide whether the cell has been corrupted. Objective for learning row-level representation: Each product in the product catalog is associated with a label indicating its category. To learn row-level representation, we apply a multi-class classifier over the final hidden state corresponding to [CLS] token to predict the category of the product. This objective helps the model to understand the entire content of a product. 
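A brief sketch of the two cell corruption strategies follows; the toy rows and function names are illustrative, and the binary corruption labels would be attached to the corresponding [COL] positions when building pre-training examples.

```python
def swap_within_row(row, attr_a, attr_b):
    """Strategy (b): swap two attributes of the same product, e.g. its Flavor and Color."""
    corrupted = dict(row)
    corrupted[attr_a], corrupted[attr_b] = row[attr_b], row[attr_a]
    return corrupted

def swap_within_column(row_1, row_2, attr):
    """Strategy (c): swap the same attribute between two different products."""
    c1, c2 = dict(row_1), dict(row_2)
    c1[attr], c2[attr] = row_2[attr], row_1[attr]
    return c1, c2

# Toy usage with the Flavor/Color cells of R1 and R5 from Table 2.
r1 = {"Flavor": "Spicy Queso", "Color": "-"}
r5 = {"Flavor": "Green Tea", "Color": "Green"}
print(swap_within_row(r5, "Flavor", "Color"))   # row-wise corruption of R5
print(swap_within_column(r1, r5, "Flavor"))     # column-wise corruption of R1 and R5
```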
Both objectives for learning cell and row level representation can be modeled using cross-entropy between the one-hot label and the prediction: $${\mathcal{L}}=\sum_{k}y_{k}\log p_{k}\qquad\qquad(1)$$ where yk is the true label and pk is the softmax probability for the k-th class. The final objective function is formulated by combining all three objectives together. ## 3.4 Fine-Tuning The pre-training procedure is followed by the finetuning stage on labeled data. During the fine-tuning stage, we apply Tab-Cleaner to identify two kinds of data errors: - **Inapplicable attribute**, which refers to an attribute that a product should not have. For example, a sippy cup is not edible and thus should not have the attribute "flavor" (R4 in Table 1); - **Incorrect attribute value**, which refers to incorrect value of an attribute. For example, the category of product *"Women's Spa Studio Green* Tea Eye Pads" should be *"eye pad"* instead of "green teas". (R5 in Table 1). To predict the correctness of an attribute, its representation ([COL] embeddings) is fed into a twolayer network with ReLU activations. The output is then used to predict the correctness from a sigmoid layer, training with a binary classification objective. We demonstrate that Tab-Cleaner is effective in detecting both inapplicable attributes and incorrect attribute values in experimental studies. ## 4 Experiments In this section, we evaluate Tab-Cleaner over two different data cleaning downstream tasks on realworld Amazon datasets. ## 4.1 Datasets Datasets for Pre-training We construct two tabular corpora based on the product data obtained from the public Amazon website for pre-training. Due to the different numbers of attributes included, we call these two pre-training tables standard table and wide table. Specifically, the wide table is constructed to investigate how Tab-Cleaner deals with extremely long sequences. Detailed information has been introduced in Appendix E.1 and E.2. Datasets for Fine-tuning To ascertain the performance of Tab-Cleaner, we study two downstream tasks: attribute applicability classification and attribute value validation. Details are provided in Appendix E.2. ## 4.2 Experimental Setup Pre-training & Fine-tuning We train Tab-Cleaner for three epochs for pre-training and 10 epochs for fine-tuning. Detailed settings are in Appendix E.3. Evaluation Metric. Our goal is to identify incorrect attributes of a product, which is a binary classification problem. We adopt the area under the Precision-Recall curve (PR AUC), the area under the Receiver Operating Characteristic Curve (ROC AUC), and Recall at Precision=X (R@P=X) for evaluation. Details about these metrics are given in Appendix E.3. Compared Methods. We evaluate Tab-Cleaner against state-of-the-art (SOTA) algorithms, including (1) DistillBERT (Sanh et al., 2019), since TabCleaner is implemented by extending DistillBERT; (2) Transformer for Longer Sequences (e.g., Longformer (Beltagy et al., 2020)); (3) Nature Language Inference method (NLI). Details about the baseline methods are given in Appendix E.3. We did not include tabular representation models as our baseline because they cannot be applied to our scenario. ## 4.3 Data Cleaning Tasks Attribute Applicability Classification We require each method to predict the applicability of the attribute in the test dataset. Details are provided in Appendix E.4. 
As presented in Table 3: (1) NLI performs the worst among all methods, indicating the necessity of jointly leveraging all attributes to detect error; (2) Tab-Cleaner consistently outperforms baselines in all cases with significant performance gain (improving SOTA from 0.296 to 0.379 on R@P=0.9). Attribute Value Validation We require each method to validate the correctness of the attribute value in the test dataset. As shown in Table 4, TabCleaner handles both short sequences and long sequences very well. ## 4.4 Scalability To demonstrate the scalability of Tab-Cleaner, we present training time and memory cost for pretraining over the wide table in Table 5. Specifically, we train Tab-Cleaner for three epochs where TabCleaner has 6 layers, a hidden dimensionality of 768, 12 heads, and a batch size of 32. To fairly compare different transformer-based methods, the same setting is employed for all models. Before pre-training, we truncate each row's contents to 512 tokens to make training feasible for DistillBERT. Tab-Cleaner shows the best performance in terms of both pre-training time and memory cost. The superiority of Tab-Cleaner can ascribe to the sparse attention pattern enabled by the hierarchicalattention mechanism. | Methods | Pre-training Time | Memory Cost | |---------------|---------------------|---------------| | (Hours/Epoch) | (MB) | | | DistillBERT | 26.95 | 31263 | | Longformer | 60.56 | 32391 | | Tab-Cleaner | 21.63 | 26501 | ![4_image_0.png](4_image_0.png) Table 5: Training time and memory cost of different methods. ## 4.5 Ablation Study Improvement brought by pre-training TabCleaner follows the pre-training/finetuning paradigm to alleviate the need for large-scale labeled data for data cleaning. To validate the improvement brought by pre-training, we derive a baseline Tab-Cleaner without pre-training and compare it with Tab-Cleaner over attribute applicability classification task as shown in Fig. 3. Tab-Cleaner without pre-training directly fine-tunes over the distilled version of the BERT base model without pre-training over the Amazon product catalog corpus. We observe that the pre-training brings significant performance gain: Tab-Cleaner with pre-training increases the PR ![5_image_0.png](5_image_0.png) Table 3: Results of data cleaning over attribute applicability ![5_image_2.png](5_image_2.png) classification task. The numbers in bold represent the best performance. TabCleaner gives the best performance. AUC of Tab-Cleaner without pre-training from 0.340 to 0.613 and increases R@P=0.9 from 0.101 to 0.379. Impact of different components of TabCleaner Upon the base Tab-Cleaner model, we derive three different variants as follows: - **Tab-Cleaner w/o additional embeddings:** We exclude additional embeddings during learning. - **Tab-Cleaner w/o table structure-aware objective:** We do not employ the pre-training objective for learning tabular substructure representations. - **Tab-Cleaner w/o hierarchical attention:** We do not adopt the hierarchical attention. We compare these three variants with the original Tab-Cleaner framework over the attribute value validation task in Fig. 4. We observe that: (1) The original Tab-Cleaner achieves the best performance, showing the necessity of integrating all three components; (2) Tab-Cleaner w/o hierarchical attention presents the worst performance, indicating the effectiveness of the hierarchical attention mechanism in capturing information from tabular data. 
## 4.6 Case Study To further demonstrate the capability of TabCleaner in detecting real-world errors in the Amazon dataset, we present examples of identified errors and missed errors as shown in Table 6. We pre-trained Tab-Cleaner over standard length tab- ![5_image_1.png](5_image_1.png) Table 4: Results of data cleaning over attribute value validation task. The numbers in bold represent the best performance. ![5_image_3.png](5_image_3.png) TabCleaner handles both short sequences and long sequences very well. ular corpus and fine-tuned Tab-Cleaner over two downstream tasks: attribute applicability classification and attribute value validation. Contrary to the common settings, a positive label in an error detection scenario means the data instance is an error while a negative label means the data instance is true. Therefore, the triples with the highest probability have the highest possibility to be incorrect. Threshold σ is chosen based on the best classification accuracies on the validation dataset in order to classify the attributes. Given human labeled incorrect attributes in the test dataset, we present the top 3 attributes with the highest probability as identified errors and the top 3 attributes with the lowest probability as missed errors. We observe that attribute values of identified errors usually violate the description of products and thus can be correctly classified as errors. For example, product 1 in Table 6 is not a skin care product, thus should not have the attribute "skin type". Although the attribute values of products 7, 9, and 11 are commonly observed phrases to describe the target attributes (i.e., "dark" is widely used to describe "skin tone"), their inconsistency with the product description makes them no longer correct attribute values. We also notice that most of the missed errors are correct but labeled as errors due to the wrong annotation. 
We verify the correctness of all missed errors by ourselves and highlight the correct | Error Type | Identified Errors | Missed Errors | | | | | | | |---------------------------|---------------------------------------------------------------|-------------------------------------------------|---------|-------------------------------------------------------|------------------------------------------------|-------------------------------------------------------|-----------------|-----------| | ID | Product | Attribute | Value | ID | Product | Attribute | Value | | | 1 | Brand A Hand | | | | | | | | | Sanitizer Holder Keychain | skin type | - | 2 | Brand B Isoprophyl Alcohol | scent | - | | | | Inapplicable | 3 | Brand C horse Fly mask Over Fence - Face Covers | flavor | - | 4 | Brand D Velvetines | | | | Attribute | Liquid Matte Lipstick | color | - | | | | | | | 5 | Brand E sock stocking hose sox anklets Women Print Multicolor | age range description | - | 6 | Brand F Coffee, Dulce De Leche Flavored Coffee | container type | - | | | 7 | Brand G ColorStay Overtime Lipcolor Forever Scarlet (040) | skin tone | dark | 8 | Brand H Loose | | | | | Face Powder, Translucent | finish type | matte | | | | | | | | Incorrect | | | | | | | | | | Attribute Value | 9 | Brand I Salad Dressing, Zesty Robusto Italian | variety | garlic | 10 | Brand J Advanced Defence Gum Treatment for Gingivitis | product benefit | cleansing | | Brand K unisex-adult | | | | | | | | | | 11 | Brand J Popcorn | | | | | | | | | Seasoning, White Cheddar | Item form | butter | 12 | Bottle Bright - Hydration Pack Cleaning Tablets Clear | benefit | brightening | | | attributes in red and attributes for which we cannot determine their correctness based on product profiles in blue. We observe that Tab-Cleaner can classify the samples which cannot be correctly classified by humans. This indicates the strong power of Tab-Cleaner in identifying errors. ## 5 Related Work Natural Language Inference (NLI) Data cleaning for product catalog data is related to natural language inference (NLI). Given a premise (e.g., product profiles in our scenario), NLI aims to classify whether the hypothesis (e.g., attribute values in our scenario) is true, false, or undetermined (Tay et al., 2017; Chen et al., 2016). Most of the existing NLI models are based on cross-sentence attention, which can be divided into word-by-word attention-based methods (Rocktäschel et al., 2015; Wang et al., 2017; Wang and Jiang, 2015) and intersentence interaction-based methods (Yin et al., 2018). Existing NLI methods are typically trained on free-form natural language while Tab-Cleaner is designed to handle error detection over text-rich tabular data. Tabular Data Representation Tables are important media of world knowledge (Cafarella et al., 2008). Motivated by the large-scale language models pretrained on tasks involving unstructured natural language, several works attempt to extend the pre-trained language models (LMs) to jointly learn representations of tables as well as text (Yin et al., 2020; Herzig et al., 2020; Zhang et al., 2019) with applications including semantic parsing (Yin et al., 2020; Herzig et al., 2020), entity linking (Deng et al., 2020) and table structure understanding (Nassar et al., 2022; Du et al., 2021; Deng et al., 2020). The training data of these works usually involve thousands of tables, where each table consists of only a few rows and columns. 
Our proposed method focuses on a different task, text-rich tabular data cleaning, where the training data involve only a single table w.r.t millions of rows. Transformer for Longer Sequences It is challenging for Transformers-based models to process long sequences because their self-attention operation scales quadratically with the sequence length in terms of memory. There have been a number of attempts to alleviate this issue (Dai et al., 2019; Sukhbaatar et al., 2019; Rae et al., 2019; Wang et al., 2019; Joshi et al., 2020; Child et al., 2019), in which Longformer (Beltagy et al., 2020) and Big Bird (Zaheer et al., 2020) are the most representative methods. All these methods focus on tasks involving free-text long content (e.g., document classification, and genomics data analysis), while Tab-Cleaner is designed to cope with a wide table. ## 6 Conclusion We have proposed Tab-Cleaner, a Transformerbased model designed specifically for data-cleaning tasks on text-rich tabular catalog data. It provides a versatile solution for data cleaning tasks by efficiently handling the unique challenges posed by text-rich tabular catalog data. To enhance the efficiency of training and reduce memory consumption, we have introduced a novel hierarchical attention mechanism. This mechanism enables a sparse attention pattern, allowing for the effective processing of long sequences. We train Tab-Cleaner on a real-world Amazon Product Catalog w.r.t millions of products and show that we can improve over SOTA methods greatly. ## Acknowledgements This work was partially supported by NSF 2211557, NSF 2303037, NSF 1937599, NSF 2119643, NASA, SRC, Okawa Foundation Grant, Amazon Research Awards, Amazon Fellowship, Cisco research grant, Picsart Gifts, and Snapchat Gifts. ## References Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Michael J Cafarella, Alon Halevy, Daisy Zhe Wang, Eugene Wu, and Yang Zhang. 2008. Webtables: exploring the power of tables on the web. Proceedings of the VLDB Endowment, 1(1):538–549. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2016. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509. Xu Chu, Ihab F Ilyas, Sanjay Krishnan, and Jiannan Wang. 2016. Data cleaning: Overview and emerging challenges. In Proceedings of the 2016 international conference on management of data, pages 2201– 2206. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Xiang Deng, Huan Sun, Alyssa Lees, You Wu, and Cong Yu. 2020. Turl: Table understanding through representation learning. arXiv preprint arXiv:2006.14806. Xin Luna Dong, Xiang He, Andrey Kan, Xian Li, Yan Liang, Jun Ma, Yifan Ethan Xu, Chenwei Zhang, Tong Zhao, Gabriel Blanco Saldana, et al. 2020. Autoknow: Self-driving knowledge collection for products of thousands of types. 
In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2724– 2734. Lun Du, Fei Gao, Xu Chen, Ran Jia, Junshan Wang, Jiang Zhang, Shi Han, and Dongmei Zhang. 2021. Tabularnet: A neural network architecture for understanding semantic structures of tabular data. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 322– 331. Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. Tapas: Weakly supervised table parsing via pre-training. arXiv preprint arXiv:2004.02349. Hiroshi Iida, Dung Thai, Varun Manjunatha, and Mohit Iyyer. 2021. Tabbie: Pretrained representations of tabular data. arXiv preprint arXiv:2105.02584. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de La Clergerie, Djamé Seddah, and Benoît Sagot. 2019. Camembert: a tasty french language model. arXiv preprint arXiv:1911.03894. Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, and Peter Staar. 2022. Tableformer: Table structure understanding with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4614–4623. Jay Pujara, Eriq Augustine, and Lise Getoor. 2017. Sparsity and noise: Where knowledge graph embeddings fall short. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1751–1756. Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, and Timothy P Lillicrap. 2019. Compressive transformers for long-range sequence modelling. arXiv preprint arXiv:1911.05507. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kocisk ˇ y, and Phil Blunsom. 2015. ` Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799. Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2017. Compare, compress and propagate: Enhancing neural architectures with alignment factorization for natural language inference. arXiv preprint arXiv:1801.00102. Shuohang Wang and Jing Jiang. 2015. Learning natural language inference with lstm. arXiv preprint arXiv:1512.08849. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814. Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage bert: A globally normalized bert model for open-domain question answering. arXiv preprint arXiv:1908.08167. Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data. arXiv preprint arXiv:2005.08314. 
Wenpeng Yin, Hinrich Schütze, and Dan Roth. 2018. End-task oriented textual entailment via deep explorations of inter-sentence interactions. arXiv preprint arXiv:1804.08813.

Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big Bird: Transformers for longer sequences. In NeurIPS.

Li Zhang, Shuo Zhang, and Krisztian Balog. 2019. Table2Vec: Neural word and entity embeddings for table population and retrieval. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1029–1032.

## A Tab-Cleaner Architecture

Tab-Cleaner takes an m × n product catalog table as input and produces token representations and tabular substructure representations (i.e., cell-level and row-level representations). Tab-Cleaner is implemented by extending DistilBERT's architecture (Sanh et al., 2019) with additional embeddings that capture tabular structure (Fig. 5).

Additional embeddings The token embeddings are combined with additional embeddings used to encode tabular structure before feeding them to the pre-training model:

- **Position Embedding** is the relative index of a token within a cell. For example, the position embedding of the k-th token in a cell is k.
- **Column Embedding** is the index of the column that the token appears in. For example, the column embedding of tokens in cell $T_{ij}$ is j.
- **Header/Cell Embedding** indicates whether the token corresponds to the column name or the attribute value. It takes two possible values: 0 for the column name and 1 for attribute values. For example, the 4-th cell of R1 in Table 1 consists of the token list {[COL], [Ingredient], [:], [Ground], [Corn], [Chipotle], [Pepper], [Powder], [Paprika], [Extract], [Spices]}, where the header/cell embedding for tokens [Ingredient] and [:] is 0, and 1 otherwise.

For each element $x_i$ in the input sequence, we construct its input representation as:

$$\mathbf{h}_{i}^{0}=\mathbf{x}_{i}^{\text{ele}}+\mathbf{x}_{i}^{\text{pos}}+\mathbf{x}_{i}^{\text{col}}+\mathbf{x}_{i}^{\text{header}} \quad (2)$$

where $\mathbf{x}_{i}^{\text{ele}}$ is the token embedding, $\mathbf{x}_{i}^{\text{pos}}$ is the position embedding, $\mathbf{x}_{i}^{\text{col}}$ is the column embedding, and $\mathbf{x}_{i}^{\text{header}}$ is the header/cell embedding. After constructing all input representations, we feed them into a stack of L successive Transformer encoders to encode the sequence and obtain:

$$\mathbf{h}_{i}^{l}=\text{Transformer}(\mathbf{h}_{i}^{l-1}) \quad (3)$$

where $\mathbf{h}_{i}^{l}$ is the hidden state of $x_i$ after the l-th layer.

## B Implementation Of The Transformer Encoder With The Hierarchical Attention

Let $\mathbf{H} = (\mathbf{h}_1, \ldots, \mathbf{h}_n)$ denote an input representation, where $\mathbf{h}_i$ is a d-dimensional vector and $\mathbf{H}$ is a matrix in $\mathbb{R}^{n \times d}$. We discussed the construction of $\mathbf{H}$ in Appendix A. Given the linear projections Q, K, V, the Transformer encoder computes attention scores as follows:

$$\text{Attention}(\mathbf{h}_{i})=\sum_{k=1}^{K}\sigma\big(Q_{k}(\mathbf{h}_{i})K_{k}(\mathbf{H}_{\mathcal{N}(i)})^{T}\big)\cdot V_{k}(\mathbf{H}_{\mathcal{N}(i)})$$

where $\mathcal{N}(i)$ denotes the out-neighbor set of node i and $\mathbf{H}_{\mathcal{N}(i)}$ corresponds to the matrix over $\{\mathbf{h}_j : j \in \mathcal{N}(i)\}$. K denotes the number of heads, and $Q_k$, $K_k$, $V_k$ are the query, key, and value functions of head k.
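To make the neighborhood-restricted attention above concrete, here is a minimal PyTorch sketch (our own illustration, not the authors' implementation): multi-head attention where token i may only attend over the keys allowed by a binary adjacency matrix A, with the standard scaled dot-product softmax playing the role of the score function σ.

```python
# Minimal sketch (not the authors' code): multi-head attention restricted to the
# neighborhood N(i) defined by a binary adjacency matrix A, as in the equation above.
import torch

def sparse_attention(H, A, Wq, Wk, Wv, num_heads):
    """H: (n, d) token states; A: (n, n) with A[i, j] = 1 iff query i may attend key j."""
    n, d = H.shape
    dh = d // num_heads
    # Project and split into heads: (num_heads, n, dh)
    Q = (H @ Wq).view(n, num_heads, dh).transpose(0, 1)
    K = (H @ Wk).view(n, num_heads, dh).transpose(0, 1)
    V = (H @ Wv).view(n, num_heads, dh).transpose(0, 1)
    scores = Q @ K.transpose(-2, -1) / dh ** 0.5            # (num_heads, n, n)
    scores = scores.masked_fill(~A.bool(), float("-inf"))   # disallow keys outside N(i)
    attn = torch.softmax(scores, dim=-1)                     # softmax as the score function
    out = attn @ V                                            # (num_heads, n, dh)
    return out.transpose(0, 1).reshape(n, d)                  # concatenate the heads

# Toy usage: 6 tokens, 2 heads, each token attends to itself and its adjacent tokens.
n, d = 6, 8
H = torch.randn(n, d)
A = torch.eye(n) + torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
print(sparse_attention(H, A, Wq, Wk, Wv, num_heads=2).shape)  # torch.Size([6, 8])
```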
Let the adjacency matrix $\mathbf{A}$ define a directed graph G. Each vertex in G corresponds to a token in the input sequence, and $\mathbf{A} \in \{0, 1\}^{n \times n}$ with $\mathbf{A}(i, j) = 1$ if query i attends to key j and 0 otherwise. The traditional Transformer encoder calculates full quadratic attention by assuming G is a fully connected graph. Instead, we sparsify G with the proposed hierarchical attention mechanism while ensuring that the resulting attention is as powerful and expressive as full attention. For example, the graphical illustration of the attention pattern shown in Fig. 6 (d) is given in Fig. 7.

## C Expressiveness Of Local + Global Attention For Context Attribute Learning

In this section, we discuss the expressiveness of local + global attention and show that it is as expressive as full attention. The graphical illustration of the attention pattern shown in Fig. 6 (d) is given in Fig. 7. Each node in the graph corresponds to a token in the input sequence. Following the proposed attention pattern, a token can only attend to its directly connected neighbors.

**Definition 1 (h-hop enclosing graph).** For an h-hop enclosing graph G = (V, E), given any two nodes $x, y \in V$, we have $d(x, y) \leq h$.

It is obvious that the graph in Fig. 7 is a 3-hop enclosing graph. To generalize this observation, we have the following theorem.

**Theorem 1.** Given an input sequence of length L, assuming we set the dilation equal to the window size, $d = w = \lceil\sqrt{L}\rceil$, its attention graph is always a 4-hop enclosing graph.

We denote the input sequence as $x_{0:L-1} = (x_0, \ldots, x_{L-1})$. Given a fixed window size w, each token attends to w/2 of its local neighboring tokens on each side, and the global token [COL] ($x_0$) attends to the tokens $(x_0, x_d, \ldots, x_{n \cdot d})$, where $n = \lceil\sqrt{L}\rceil - 1$. Next, we show that for any node $x_i \in V$ we have $d(x_i, x_0) \leq 2$.

- If $i = k \cdot d$, token $x_i$ is directly connected to the global token [COL], so $d(x_i, x_0) = 1$.
- If $i \neq k \cdot d$, token $x_i$ attends to all tokens within the window $x_{(i-d/2):(i+d/2)}$ (we set w = d). We can always find an integer k satisfying $(i - d/2) \leq k \cdot d \leq (i + d/2)$ because $(i + d/2) - (i - d/2) = d$. Therefore, $d(x_i, x_0) \leq 2$.

Since $d(x_i, x_0) \leq 2$, we have $d(x_i, x_j) \leq d(x_i, x_0) + d(x_0, x_j) \leq 4$, so the attention graph is always a 4-hop enclosing graph. We know that a node in an h-hop enclosing graph is able to collect information from any other node in the graph using an h-layer GNN. Therefore, any token $x_i$ is able to aggregate information from all tokens in the sequence using a GNN with at least 4 layers.
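Definition 1 can be checked mechanically for a concrete attention pattern. The short script below is our own illustration (not the authors' code, and not Tab-Cleaner's exact window/dilation configuration): it computes, via breadth-first search, the smallest h for which a given connected, symmetric attention graph is an h-hop enclosing graph, and shows on a toy pattern how adding a global token collapses the hop distance.

```python
# Our own sketch: a generic checker for Definition 1. Given a connected, symmetric
# attention pattern (adjacency sets), return the smallest h such that the pattern
# is an h-hop enclosing graph, i.e., the graph diameter computed via BFS.
from collections import deque

def enclosing_hops(adj):
    def bfs_eccentricity(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(bfs_eccentricity(s) for s in adj)

# Toy patterns over 6 tokens (indices 0..5), not Tab-Cleaner's exact configuration.
local_only = {i: {j for j in (i - 1, i, i + 1) if 0 <= j < 6} for i in range(6)}
with_global = {i: nbrs | {0} for i, nbrs in local_only.items()}
with_global[0] = set(range(6))        # token 0 acts as a global token attending everything

print(enclosing_hops(local_only))     # 5: a pure sliding window needs many hops
print(enclosing_hops(with_global))    # 2: a global token collapses the hop distance
```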
## D Conditional Encodings Of Feature Attributes Over Context Attributes

Note that context attributes contain rich information about products, which is useful for verifying the correctness of feature attributes. For example, the product title "Brand A Tortilla Chips Spicy Queso, 6 - 2 oz bags" covers multiple attributes, including brand, product category, flavor, and size. We can easily verify the correctness of these attributes against the product title. However, given the hierarchical attention mechanism, feature attributes are learned independently from context attributes during the cell encoding stage. Although the correlations between feature attributes and context attributes can be captured afterward during the row encoding stage, the cross-cell token-to-token correlation is lost. To enable cross-cell token-to-token conditional encoding of feature attributes over context attributes, we improve cell encoding for feature attributes as shown in the following example.

Example: Given a product with the description "Mango Chipotle Origami Wraps are all natural sushi wraps made from vegetable and fruit purees. This wrap has a ripe, tropical mango flavor balanced with the **bold spiciness of chipotle pepper**. Origami Wraps are healthy, vegan, gluten-free alternatives to seaweed nori and/or soy paper. They are a creative, flavorful, and colorful new ingredient for restaurant and home chefs alike. Use Mango Chipotle wraps to add some **Latin fusion** flavor to traditional sushi or to create innovative sushi-style rolls with many different non-seafood ingredients. Can be used for onigiri, nigiri, and musubi.", only the words highlighted in **boldface** describe the target attribute flavor. The value of this product on attribute flavor is "mango", which is given in the product catalog.

To capture the cross-cell token-to-token correlation between feature attributes and context attributes, the most straightforward way is to concatenate the context attributes and the target feature attribute and require each token in the target feature attribute to attend to the entire concatenation. Since the concatenation is usually long, attending to all tokens in the concatenation is computationally impractical. Note that most content in the context attributes is irrelevant to the target feature attribute. We therefore extract only the relevant information from the context attributes to build the concatenation. We adopt two different extraction strategies as follows:

- Extract the words around the target attribute value. Given the above example, the words highlighted in blue will be extracted to concatenate with "Flavor: Mango".
- Extract the words around the most frequently observed textual values for the target attribute. Assuming the most frequently observed textual values for attribute flavor include {"fruit", "spiciness", "chocolate", "vanilla"}, the words highlighted in red will be extracted to concatenate with "Flavor: Mango".
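A rough sketch of the first extraction strategy above follows; it is our own illustration, and the 5-word window and whitespace tokenization are assumptions, not the paper's exact procedure.

```python
# Our own sketch of extraction strategy 1 above: keep only the words in a fixed
# window around occurrences of the target attribute value inside a long context
# attribute. The window size (5) is an assumed value for illustration.
def extract_relevant_context(context_text, anchor_terms, window=5):
    anchors = {t.lower() for t in anchor_terms}
    words = context_text.split()
    keep = set()
    for i, w in enumerate(words):
        if w.strip(".,!?").lower() in anchors:
            keep.update(range(max(0, i - window), min(len(words), i + window + 1)))
    return " ".join(words[i] for i in sorted(keep))

description = ("Mango Chipotle Origami Wraps are all natural sushi wraps made from "
               "vegetable and fruit purees. This wrap has a ripe, tropical mango flavor "
               "balanced with the bold spiciness of chipotle pepper.")
# Strategy 1 anchors on the catalog value of the target attribute ("mango"); strategy 2
# would instead anchor on frequently observed values such as "fruit" or "spiciness".
snippet = extract_relevant_context(description, anchor_terms=["mango"])
print("Flavor : Mango " + snippet)  # the shortened concatenation used for conditional encoding
```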
## E Experiments

## E.1 Analysis Of Amazon Data

Tab-Cleaner flattens each row in the input table into a sequence of tokens by concatenating all textual attributes. Such a process may produce very long sequences. To investigate the length of rows in commonly used catalogs in daily business, we randomly sampled web pages from the Amazon website and extracted dozens of commonly observed attributes to construct a standard table. We show that long sequences are widely observed in the standard table with only dozens of attributes. To better understand the performance of Tab-Cleaner over extremely long sequences, we further construct a wide table, which contains hundreds of attributes.

Analysis To investigate the length of rows in Amazon data used in daily business, we randomly sampled millions of products associated with 31 commonly used attributes. To avoid bias, we follow the product categories' frequency distribution (i.e., commonly-occurring product categories are sampled more often than rare product categories) to sample data. The sampled products span hundreds of product categories from different domains, such as food, beauty, and drug. After concatenating all attributes of a product into a sequence, the length distribution of products' string encodings is given in Fig. 8. As observed from the figure, even though we include only 31 attributes, over 10% of rows have lengths over 512. Since most existing Transformer models can only handle sequences within the typical 512-token limit, it is necessary to scale Tab-Cleaner to process longer input sequences.

## E.2 Datasets

Datasets for Pre-training To prepare the unlabeled product catalog corpus for pre-training, we construct two tables based on randomly sampled web pages from the Amazon website. The first table contains a standard number of attributes (i.e., 31); we call it a standard table. The second table contains a much larger number of attributes (i.e., 119); we call it a wide table. The detailed statistics about these two tables are given in Table 7.

Table 7: Pre-training data statistics.

Standard Table We construct the standard table using the data sampled in Section E.1. After flattening the table into a sequence of tokens, the average length of rows in the standard table is 257.

Wide Table To better investigate the performance of Tab-Cleaner over extremely long sequences, we construct a wide table that contains hundreds of attributes. To ensure sufficient sequence length, we concatenate all attributes of a product into a sequence and select only products with sequence lengths over 512. The wide table contains 677,744 products associated with 119 attributes. The average length of rows in the wide table is 639.

Datasets for Fine-tuning To prepare labeled data for fine-tuning, we asked Amazon Mechanical Turk (MTurk) workers to manually label the correctness of attributes based on product profiles. Each data point is annotated by three MTurk workers and the final label is decided by majority voting. To assess the performance of the learned Tab-Cleaner representations on error detection, we study two downstream tasks: attribute applicability classification and attribute value validation. As shown in Table 8, the labeled data for the attribute applicability classification task covers 53,753 products and 24 feature attributes, and the labeled data for the attribute value validation task covers 9,713 products and 26 feature attributes. For both datasets, 80 percent of the data is used as training data for fine-tuning and the rest is used as validation and test data. Contrary to common settings, a positive label in an error detection scenario means the data instance is an error, while a negative label means the data instance is correct. Both datasets are highly imbalanced: only a small fraction of the data points are labeled as errors (i.e., positive labels).

Table 8: Fine-tuning data statistics.

## E.3 Experimental Setup

Pre-training & Fine-tuning Before pre-training, we truncate each row's content to satisfy the maximum sequence length requirement, as some rows contain huge amounts of text. We train Tab-Cleaner for three epochs of pre-training. Tab-Cleaner has 6 layers, a hidden dimensionality of 768, and 12 heads. We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e-5. For fine-tuning, we initialize the parameters with the pre-trained model and further train all parameters with a binary classification objective for 10 epochs. To fairly compare different Transformer-based methods, the same setting is employed for all models. We build data for evaluation using the held-out validation/test rows to ensure that there is no overlapping data in training and validation/test.
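A compressed sketch of the fine-tuning stage described above is given below. It is our own illustration, not the production setup: it omits Tab-Cleaner's tabular embeddings and hierarchical attention and uses a public DistilBERT checkpoint purely as a stand-in encoder, keeping the binary classification head, the Adam optimizer with a 5e-5 learning rate, and 10 epochs.

```python
# Our own sketch, not the production setup: fine-tune an encoder with a binary
# error-detection head (Adam, lr 5e-5, 10 epochs), omitting the tabular embeddings.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")
head = nn.Linear(encoder.config.hidden_size, 1)
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=5e-5)
loss_fn = nn.BCEWithLogitsLoss()

rows = ["[COL] Flavor : Mango [COL] Ingredient : Ground Corn Chipotle Pepper Powder"]
labels = torch.tensor([[0.0]])            # 1 = the attribute value is an error, 0 = correct

for epoch in range(10):                   # 10 fine-tuning epochs, as described above
    batch = tokenizer(rows, padding=True, truncation=True, max_length=512, return_tensors="pt")
    first_token_state = encoder(**batch).last_hidden_state[:, 0]   # pooled representation
    loss = loss_fn(head(first_token_state), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```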
Evaluation Metric. We adopt the area under the Precision-Recall curve (PR AUC), the area under the Receiver Operating Characteristic curve (ROC AUC), and Recall at Precision=X (R@P=X) to evaluate the performance of the models on error detection. To be more specific, PR AUC is defined as the area under the precision-recall curve, which is widely used to evaluate ranked retrieval results. ROC AUC measures classification performance across threshold settings and indicates how well the model distinguishes between classes. R@P is defined as the recall value at a given precision, which evaluates model performance when a specific precision requirement needs to be satisfied. For example, R@P = 0.7 is the recall when the precision is 0.7.

Compared Methods. We evaluate Tab-Cleaner against state-of-the-art (SOTA) algorithms, including (1) DistilBERT (Sanh et al., 2019), since Tab-Cleaner is implemented by extending DistilBERT; (2) Transformers for longer sequences (e.g., Longformer (Beltagy et al., 2020)); and (3) natural language inference (NLI) methods. The SOTA Transformer for NLI is selected as our baseline. In our setting, the input of the NLI model includes two parts: product profiles (i.e., the concatenation of context attributes) and the corresponding feature attribute values. These two sequences are concatenated using a separator token ([SEP]). The first token of the input is always set to a special token ([CLS]). We feed the input into the Transformer, and the final hidden state corresponding to [CLS] is used as the final representation. To predict the correctness of the attribute, a binary classifier is placed over the [CLS] representation for inference. All baseline methods are built within HuggingFace's framework.

We did not include tabular representation learning models as baselines because existing tabular representation learning models focus on different downstream tasks such as table query (i.e., answering either SQL questions or natural language questions given a table) or tabular structure prediction (i.e., predicting the data type or tag of a cell). They cannot be applied to clean catalog data. First, they require a different input data format. For example, table query requires paired tables and text (e.g., natural language questions and their answers) and tabular structure prediction requires the tags of cells. Second, they can only deal with tiny tables (e.g., TABBIE (Iida et al., 2021) has to truncate tables to 30 rows and 20 columns).

## E.4 Data Cleaning Tasks

Attribute Applicability Classification To evaluate whether the proposed hierarchical attention mechanism is as powerful and expressive as full attention, we pre-train each method over a standard-length tabular corpus (standard table), where most of the rows satisfy the maximum sequence length requirement (512) without truncation. A binary classifier is trained to predict the correctness of attributes during fine-tuning. We also pre-train Tab-Cleaner over a longer tabular corpus (wide table), which has contexts significantly longer than 512 tokens.
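The metrics defined under Evaluation Metric above can be computed directly from model scores. Below is a small sketch of our own using scikit-learn, with average precision used as the usual estimate of PR AUC; the toy labels and scores are invented for illustration.

```python
# Our own sketch of the R@P=X metric defined above: the highest recall achievable
# at any operating point whose precision is at least X.
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score, roc_auc_score

def recall_at_precision(y_true, y_score, min_precision):
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    feasible = recall[precision >= min_precision]
    return feasible.max() if feasible.size else 0.0

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = erroneous instance
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1])
print("PR AUC ", average_precision_score(y_true, y_score))
print("ROC AUC", roc_auc_score(y_true, y_score))
print("R@P=0.7", recall_at_precision(y_true, y_score, 0.7))   # 0.75 for this toy data
```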
komma-etal-2023-toward
Toward More Accurate and Generalizable Evaluation Metrics for Task-Oriented Dialogs
https://aclanthology.org/2023.acl-industry.19
Measurement of interaction quality is a critical task for the improvement of large-scale spoken dialog systems. Existing approaches to dialog quality estimation either focus on evaluating the quality of individual turns, or collect dialog-level quality measurements from end users immediately following an interaction. In contrast to these approaches, we introduce a new dialog-level annotation workflow called Dialog Quality Annotation (DQA). DQA expert annotators evaluate the quality of dialogs as a whole, and also label dialogs for attributes such as goal completion and user sentiment. In this contribution, we show that: (i) while dialog quality cannot be completely decomposed into dialog-level attributes, there is a strong relationship between some objective dialog attributes and judgments of dialog quality; (ii) for the task of dialog-level quality estimation, a supervised model trained on dialog-level annotations outperforms methods based purely on aggregating turn-level features; and (iii) the proposed evaluation model shows better domain generalization ability compared to the baselines. On the basis of these results, we argue that having high-quality human-annotated data is an important component of evaluating interaction quality for large industrial-scale voice assistant platforms.
# Toward More Accurate And Generalizable Evaluation Metrics For Task-Oriented Dialogs

Abishek Komma, Nagesh Panyam Chandrasekarasastry, Timothy Leffel, Anuj Goyal, Angeliki Metallinou, Spyros Matsoukas, **Aram Galstyan** Amazon Alexa AI {kommaak,nagecha,leffelt,anujgoya,ametalli,matsouka,argalsty}@amazon.com

## Abstract

Measurement of interaction quality is a critical task for the improvement of spoken dialog systems. Existing approaches to dialog quality estimation either focus on evaluating the quality of individual turns, or collect dialog-level quality measurements from end users immediately following an interaction. In contrast to these approaches, we introduce a new dialog-level annotation workflow called Dialog Quality Annotation (DQA). DQA expert annotators evaluate the quality of dialogs as a whole, and also label dialogs for attributes such as goal completion and user sentiment. In this contribution, we show that: (i) while dialog quality cannot be completely decomposed into dialog-level attributes, there is a strong relationship between some objective dialog attributes and judgments of dialog quality; (ii) for the task of dialog-level quality estimation, a supervised model trained on dialog-level annotations outperforms methods based purely on aggregating turn-level features; and (iii) the proposed evaluation model shows better domain generalization ability compared to the baselines. On the basis of these results, we argue that having high-quality human-annotated data is an important component of evaluating interaction quality for large industrial-scale voice assistant platforms.

## 1 **Introduction**

Automated measurement of interaction quality is a critical task for the development and improvement of large-scale voice-based AI assistants. There has been a substantial amount of recent work on automated dialog evaluation both for open domain (Ji et al. 2022; Ghazarian et al. 2021; Ghazarian et al. 2022a) and task-oriented (Bordes et al. 2017; Lubis et al. 2022) dialog systems (for recent surveys, see Deriu et al. 2021; Yeh et al. 2021). For task-oriented dialog (TOD) systems such as conversational AI assistants, existing research has largely focused on evaluating the quality of individual turns (Ultes et al. 2014; Schmitt and Ultes 2015; Gupta et al. 2021; a.o.). Estimating interaction quality at the multi-turn dialog level is a relatively less studied topic (though see Walker et al. 1997; Walker et al. 2000; Bodigutla et al. 2020; Deng et al. 2022). Bodigutla et al. (2019a); Bodigutla et al. (2020) showed that user-supplied (binarized) dialog-level satisfaction ratings can be predicted with 70-80% accuracy on a multi-domain dataset, if turn-level quality predictions are included as input features in a supervised model. However, in real-world scenarios, user-satisfaction ratings can be logistically difficult—and frustrating for users—to gather at a sufficiently large scale. Such ratings are also influenced by factors irrelevant to interaction quality itself, e.g. novice and expert users might rate the same dialog very differently; see Bodigutla et al., 2019a for discussion.

Figure 1: A synthetic example dialog (turn, user utterance, system response) in which the user asks about "the giants game" and, after clarifying, receives a sports news update.
Perhaps more importantly, the factors relevant to whether a single dialog turn is "successful" (or "defective") can be quite different from the factors relevant to whether a multi-turn dialog is successful: even human-to-human linguistic communication frequently involves temporary misunderstandings, clarification, rephrasing, etc.—attributes that are indicative of poor interaction quality *only when* viewed in isolation. For example, consider the (synthetic) dialog in Figure 1: Even though the system response in Turn 1 does not satisfy the user's request, the user quickly clarifies, and immediately receives relevant information. Furthermore, Turn 2's response shows that the user's initial request was based on an incorrect assumption (that a SF Giants game is underway). Despite this, the system provides enough pertinent information to resolve the original request. Viewed as a whole, this is a high-quality dialog.

In this contribution, we present a scalable approach to dialog-level quality estimation based on a new annotation scheme we call Dialog Quality Annotation (DQA). DQA adapts and extends Bodigutla et al.'s (2019b) turn-level Response Quality (RQ) annotation task to the dialog level. Whereas Bodigutla et al. (2019b) obtain dialog-level quality labels via directly soliciting user-satisfaction ratings, DQA uses expert annotators to collect ground-truth labels. In line with the results of Bodigutla et al. (2019a), we found that aggregations of turn-level signals are indeed predictive of dialog-level ratings. However, we also found that a supervised approach utilizing both dialog-level signals and aggregated turn-level signals achieves superior performance (F1=.81) compared to aggregation of turn-level features alone (F1=.73; similar to the findings of Bodigutla et al. (2019b) for predicting single-turn ratings). These results have implications for the design of multi-turn interaction quality measurement systems, chief among which is that such systems will achieve superior performance if they include both features computed over entire dialogs and features derived from individual turns of a dialog.

Our contributions are summarized as follows:

1. We develop a high-velocity dialog quality annotation (DQA) scheme and use it to generate dialog-level annotations for 3674 dialogs across 11 different domains.
2. We use the annotated data to train a supervised model for predicting binarized dialog-level quality ratings.
3. We conduct experiments and find that our proposed model outperforms baselines in F1 score, and generalizes better to an unseen domain, thus showcasing the value of high-quality dialog-level annotations.

## 2 **Related Work**

Existing research on quality metrics for multi-turn human-computer interactions has focused on either task-oriented dialog systems, or open-domain ("chitchat") systems. The present study concerns largely task-oriented use cases, but given the conversational nature of our platform, chitchat also can (and does) occur in dialogs we evaluate.

## 2.1 **TOD Systems**

Task-oriented dialog (TOD) systems help humans to achieve concrete tasks via voice or text interaction. For example, TOD systems help users book reservations, communicate with customer service systems, or navigate menus. Evaluating the quality of such interactions requires a dataset of TODs annotated with quality scores. A number of TOD datasets have been released publicly (see §4.1 of Sun et al.
2021), but most are designed to evaluate the performance of dialog understanding tasks like Dialog State Tracking, as opposed to the quality of dialogs from the perspective of successful communication. Many such public datasets were created via Wizard-of-Oz experiments, i.e. human-human interactions where one human plays the role of system and the other of user (Eric et al., 2019). Other datasets were collected by first simulating dialog outlines in the form of API sequences and then asking annotators to expand the outlines into natural language dialogs (Rastogi et al., 2020). A recent study annotated TOD datasets with user satisfaction scores by showing dialogs to annotators and asking them to rate for quality (Sun et al., 2021). Various annotation schemas have been proposed to label the quality of TODs at the turn-level. In Interaction Quality (IQ), raters were asked to rate each turn on a 1-5 scale, taking into consideration the dialog quality so far (Schmitt et al., 2012). To reduce the cognitive load on annotators, Bodigutla et al. (2019b) proposed the Response Quality (RQ) annotation schema. RQ removed the constraint to keep track of the dialog quality so far, but asked annotators to consider if the next user utterance might contain feedback, such as frustration, rephrasing, etc. The RQ scale is: 1=Terrible (fails to understand user's goal), 2=Bad (understands goal but fails to satisfy it in any way), 3=OK (partially satisfies goal), 4=Good (mostly satisfies goal), and 5=Excellent (completely satisfies user's goal). Another recent study (Sun et al., 2021) collected annotations at the dialog level, using a simple (underspecified) 5-point user satisfaction scale. Various approaches have been explored to train models to estimate task-oriented dialog quality. Earlier approaches used text-based features from dialogs and trained models like SVMs to predict quality scores. More recent approaches use RNNs (sometimes hierarchical) or BERT to encode dialogs and train models to predict turn- and/or dialog-level quality scores. These approaches model the task either as classification (for discrete quality scores) or regression (for quantitative quality scores). Recent research has explored applications of large language models (LLMs) for dialogbased NLU tasks such as intent recognition and dialog state tracking. Such models have been trained using publicly available TOD datasets, e.g. Wu et al. (2020); Peng et al. (2020); Yang et al. (2021). TOD-based LLMs have not been explored as extensively for the purpose of TOD quality estimation, though this is an active area of research for us. See Deriu et al. 2021 for a survey of approaches to evaluation in TOD systems. ## 2.2 **Open-Domain Dialog Systems** Developing quality metrics for open-domain dialog systems presents different challenges than for TOD systems. In an open-domain dialog, a system can have many relevant responses for a single utterance, and a single dialog could cover multiple unrelated topics. Automated evaluation approaches have explored different aspects of dialog quality such as coherence, informativeness, user engagement (Vakulenko et al., 2018; Zhang et al., 2021; Mehri and Eskénazi, 2020; Ghazarian et al., 2020). Similar to TOD, open-domain dialog evaluation requires high-quality training data. Existing work has used datasets by collecting human judgments (Higashinaka et al., 2014; Cervone and Riccardi, 2020). 
Another general approach is to use conversations between human users as coherent/positive examples, and then generate negative examples/incoherent dialogs by applying certain perturbations to the coherent dialogs, such as shuffling order or injecting irrelevant utterances into the dialog (Vakulenko et al., 2018; Mesgar et al., 2020; Huang et al., 2020; Zhang et al., 2021). Recent work has considered higher-level semantic perturbations that change the dialog flow more subtly (Ghazarian et al., 2022b).

## 3 **Dialog Quality Annotation**

## 3.1 **DQA Workflow**

Here we describe the workflow for generating the annotations needed to train a supervised dialog quality estimation model. This workflow adapts and extends the related turn-level Response Quality (RQ) workflow of Bodigutla et al. (2019a). We refer to this workflow as "Dialog Quality Annotation" (DQA). DQA is platform- and domain-agnostic, and was designed to support high-velocity annotation.

In each DQA task, a multi-turn dialog is presented in its entirety to an expert data annotator (DA). First, the DA is asked to rate the quality of each turn in the dialog. After each turn has been annotated, the DA then answers questions about the dialog as a whole (overall dialog rating, number of goals, goal completion, goal progression, goal friction, system response coherence, and user's inferred sentiment). DAs assign quality scores to dialogs using a five-point rating scale. About 20% of dialogs are annotated by two DAs, for quality control monitoring. After the workflow was fully productionized and DAs were calibrated on the annotation task, we have observed weekly inter-rater agreement rates ranging from 79% to 86% (with a difference of one scale point allowed). See Appendix A for further details about the workflow.

Using the DQA workflow, we gathered a dataset of 3569 annotated dialogs (9347 turns from 3233 unique users), of which 714 were held out as a test set to evaluate the performance of baseline methods and trained models. The remaining 2855 annotated dialogs were used to train candidate dialog-level defect detection models. This data was gathered by randomly sampling (de-identified) interactions across 10 different experiences supported by our platform. Our train-test split was stratified by experience, so that each use case appears at a similar rate across train and test sets. Finally, we gathered 105 additional annotated dialogs (502 total turns) from a use case that does not appear in the training or test data (Shopping product Q&A). These out-of-distribution (OOD) dialogs enable us to more realistically assess how well the resulting model generalizes to patterns unseen during training.

The majority of the data we gathered were from experiences in which the system only has access to information about the target use case. Such traffic is partitioned into discrete user sessions by default, so we considered a "dialog" to just be a single user session. For the OOD traffic, which does not come with pre-defined session boundaries, we used a time-based heuristic where a dialog is considered to be a sequence of utterances from a single user, with no more than 180 seconds of inactivity between turns. In future work, we are exploring model-based methods for dialog segmentation.
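For illustration, the time-based sessionization heuristic just described can be written in a few lines; the code below is our own sketch, not the production implementation, and the toy turns are invented.

```python
# Our own sketch of the time-based dialog segmentation heuristic described above:
# consecutive utterances from the same user belong to one dialog unless more than
# 180 seconds of inactivity separate them.
from itertools import groupby

GAP_SECONDS = 180

def segment_dialogs(turns):
    """turns: list of (user_id, timestamp_seconds, utterance)."""
    dialogs = []
    for _, user_turns in groupby(sorted(turns), key=lambda t: t[0]):
        current, prev_ts = [], None
        for user_id, ts, utt in user_turns:
            if prev_ts is not None and ts - prev_ts > GAP_SECONDS:
                dialogs.append(current)
                current = []
            current.append((user_id, ts, utt))
            prev_ts = ts
        dialogs.append(current)
    return dialogs

turns = [("u1", 0, "play jazz"), ("u1", 30, "louder"), ("u1", 400, "what's the weather")]
print([len(d) for d in segment_dialogs(turns)])   # [2, 1]: the 400s turn starts a new dialog
```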
## 3.2 **Dialog Quality Versus Dialog Attributes**

As discussed above, for every dialog, the DAs provide the overall dialog-level rating, salient attributes of the dialog, and the individual turn-level ratings. With these annotations we aim to understand the relationship between salient attributes of a dialog (e.g. goal progression, goal completion, response coherence) and the overall dialog-level ratings. The motivation here is that a robust relationship between objective dialog attributes and dialog ratings would help us to derive human-quality labels from automated methods in the future. While some research exists on the relationship between turn-level and dialog-level quality ratings (Bodigutla et al. 2020), few studies explore the relationship between dialog-level attributes and dialog-level quality ratings (Siro et al. 2022).

In Figure 2 we plot the distribution of dialog-level rating against four salient attributes of the dialog. As expected, we can clearly see that dialogs received higher ratings when users successfully completed their goals, system responses were coherent, and users encountered less friction while progressing towards their goals. Further, Table 1 reports the Spearman's ρ correlation between the ratings and attributes. Goal completion was found to have the highest correlation score of .859, while user sentiment had the lowest, at .449. Moreover, user friction encountered had a negative correlation with dialog rating. These observations are intuitive given that the dialogs were sampled from mostly task-oriented experiences.

Table 1: Correlation of dialog rating with salient attributes of the dialog. All correlations in this table are statistically significant at p < 0.01.

## 4 **Dialog Quality Estimation Model**

We now describe our dialog quality estimation model (DQM), which leverages the dialog-level annotations described in the previous section. Figure 3 illustrates the architecture of the model. We first leverage a pre-trained turn-level defect detection model (which is trained on millions of interactions) as a feature extractor using a RoBERTa-IQ-based framework (Gupta et al., 2021). We encode each turn of a dialog as a dense vector. We use a max-pooling operation on the turn-level vectors to obtain a dialog-level representation. Finally, we concatenate this with a bag-of-words representation (TF-IDF over unigrams) of the dialog text. This final dialog-level vector is then fed into a Random Forest Classifier to learn a mapping from the dialog-level representation to the binarized defect label in {0, 1}. We arrived at this setup after experimenting with various text and numeric features, and simpler classification algorithms. We also found more sophisticated models to be less effective with our current dataset, although we plan to revisit more complex architectures as the data grows. Experimental results comparing the performance of this model against several strong baselines are presented in §5-6.

The pre-trained text encoder used in our model is based upon an internal model that produces turn-level defect (TLD) scores, which are real-valued scores in [0, 1] that can be interpreted as the probability that a given turn is defective from the perspective of the user (see Gupta et al., 2021 for details on the model). TLD scores are derived from a RoBERTa-IQ classifier trained to detect defective turns within a dialog. Although the TLD model does take context into account when scoring interactions, it is explicitly designed to score dialog turns, as opposed to entire dialogs.
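For concreteness, the following is a rough sketch of the pipeline just described (turn vectors, max-pooling, TF-IDF concatenation, Random Forest). The turn encoder is an internal model, so `encode_turn` is a hypothetical stand-in, and the dialogs, labels, and feature dimensions are toy values.

```python
# Our own sketch of the dialog-level pipeline described above; `encode_turn` stands in
# for the internal RoBERTa-IQ-based turn encoder and returns a dense turn vector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

def dialog_features(dialogs, encode_turn, vectorizer):
    dense = np.stack([np.max([encode_turn(t) for t in d], axis=0) for d in dialogs])  # max-pool turns
    sparse = vectorizer.transform([" ".join(d) for d in dialogs]).toarray()           # unigram TF-IDF
    return np.hstack([dense, sparse])

# Toy stand-ins for illustration only.
encode_turn = lambda text: np.array([len(text) / 100.0, text.count("?") * 1.0])
train_dialogs = [["play jazz", "playing jazz"], ["what is this", "sorry I don't know that"]]
train_labels = [0, 1]                                   # 1 = defective dialog

vectorizer = TfidfVectorizer().fit([" ".join(d) for d in train_dialogs])
clf = RandomForestClassifier(n_estimators=100).fit(
    dialog_features(train_dialogs, encode_turn, vectorizer), train_labels)
print(clf.predict_proba(dialog_features([["turn the lights off", "ok"]], encode_turn, vectorizer)))
```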
Our primary question was therefore whether a model trained on the task of dialog-level defect detection outperforms methods that only involve aggregation of turn-level signals. The relevant tradeoff here is that aggregations of TLD scores are cheap and easy to compute, but may suffer from poor accuracy since they were not designed to make predictions about dialogs as a whole. We hypothesized that a dialog-level statistical model would outperform the TLD-based baselines, in large part because of observed interaction patterns in which the quality of a dialog is not a straightforward combination of the quality of its constituent turns.

## 5 **Experiment Setup**

For the purposes of these experiments, we binarized the five-point dialog-level quality labels by assigning dialogs rated 1, 2, or 3 to the defect class, and dialogs rated 4 or 5 to the non-defect class. This follows the approach taken by Gupta et al. (2021) for turn-level response quality prediction, enabling us to frame defect detection as a binary classification task. We assessed the quality of each estimator by measuring its precision, recall, and F1 score relative to human labels on the held-out test set.

We computed four baseline dialog-quality scores, all of which were derived by aggregating TLD scores across the turns in a dialog. We expected to see a very strong relationship between average TLD and dialog quality score, especially since the TLD model uses information from surrounding turns as features. These are the baseline methods we computed over the test data used for model evaluation. Each score reduces a sequence of turn-level scores from a dialog into a single value, which represents the dialog-level score (a short sketch of these aggregations is given below).

1. *Mean TLD*: Simple arithmetic mean of the predicted turn-level TLD model scores.
2. *Last-turn TLD score*: Interpret the final turn's TLD score as the dialog-level score. The idea is that recency bias will lead the final turn to have more impact than others in perceived dialog quality.
3. *Union of mean and last-turn TLD*: A dialog is considered defective if either the mean or last-turn TLD score exceeds some threshold (here: 0.5).
4. *Rising linear weights*: Calculate the mean TLD score with each turn linearly weighted by its index, so that later turns have higher weights.

Baseline methods required no training process at all, as they consist of arithmetic aggregations of TLD scores, which were already available prior to experimentation. To prepare each dialog for baseline evaluation, we simply computed each aggregation for each dialog. Baseline aggregations were then converted to binary predictions via a threshold: dialogs with scores ≥ .5 are considered defective; scores < .5 are considered non-defective (we found that some use cases achieve higher accuracy with higher thresholds, while others benefit from lower thresholds; here we use the fixed value of .5, as we intend for these methods to be applicable to any supported use case). We scored each dialog in the 714-dialog test set and the 105-dialog OOD test set for each baseline method, and computed performance metrics of interest relative to the human annotations.
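The four aggregations above reduce to a few lines each; the sketch below is our own illustration over one dialog's sequence of TLD scores, with a toy score sequence.

```python
# Our own sketch of the four TLD-aggregation baselines listed above. `tld` is the
# sequence of turn-level defect scores for one dialog; each function returns a
# dialog-level score that is thresholded at 0.5 downstream.
import numpy as np

def mean_tld(tld):
    return float(np.mean(tld))

def last_turn_tld(tld):
    return float(tld[-1])

def union_mean_last(tld, threshold=0.5):
    return float(mean_tld(tld) >= threshold or last_turn_tld(tld) >= threshold)

def rising_linear_weights(tld):
    weights = np.arange(1, len(tld) + 1)                # later turns weighted more
    return float(np.average(tld, weights=weights))

tld = [0.1, 0.2, 0.9]                                   # a late defective turn
for fn in (mean_tld, last_turn_tld, union_mean_last, rising_linear_weights):
    print(fn.__name__, round(fn(tld), 3))
```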
To optimize hyperparameters and perform feature selection for our candidate dialog-level defect detection model, we used five-fold cross validation over the training set, selecting the fit that maximized (mean) F1 over the set of hyperparameters and feature subsets considered. The resulting configuration was then trained against the entire 2855-dialog training set. We then used the resulting model to predict defect class (and class probability) over both test sets, computing and recording the same performance metrics of interest.

## 6 **Results**

We present the experimental results of the baseline methods and our supervised model for dialog-level defect detection (DQM) in Table 2. We describe our observations and inferences from this comparative study in the following section.

Table 2: Performance of the TLD-based baselines and DQM on the multi-domain test set (n = 714) and the OOD test set (n = 105).

| Method | Multi-domain Precision | Multi-domain Recall | Multi-domain F1 | OOD Precision | OOD Recall | OOD F1 |
|---|---|---|---|---|---|---|
| Mean TLD | .84 | .54 | .66 | .39 | .77 | .52 |
| Last-turn TLD | .83 | .68 | .75 | .47 | .23 | .31 |
| Union of mean & last-turn TLD | .82 | .73 | .77 | .38 | .77 | .51 |
| Rising linear weights | .83 | .63 | .72 | .41 | .67 | .51 |
| DQM | .78 | .83 | .81 | .48 | .80 | .60 |

## 6.1 **Performance Of Baselines And DQM**

We observed the following regarding the performance of TLD-based baselines and DQM:

1. *Among the TLD-based baselines, the Union of Mean & Last Turn TLD performs best in all scenarios.* However, in absolute terms, the best baseline is not the best performing method for evaluating dialog quality, and only achieves F1 scores of .77 and .51 compared to human annotation on
We observe that TLD-based baselines have a bias towards incorrectly predicting refinements as defects, possibly because it misclassifies them as rephrases. DQM also struggles with this since it uses TLD as input signal. We hypothesize that these biases may be easier to correct by retraining DQM with targeted multi-turn data than by retraining the TLD model, which is primarily trained on single- or few-turn interaction patterns. Table 3: Performance (ROC-AUC) of TLD-based baselines and supervised model against dialog length on Multi-domain test set (n = 714). TLD-U is union of mean and last-turn TLD (the best baseline). | Dialog Length | n | TLD-U | DQM | |--------------------|-----|---------|-------| | Short (≤ 3 turns) | 535 | .76 | .79 | | Medium (4-6 turns) | 149 | .73 | .80 | | Long (≥ 7 turns) | 30 | .69 | .84 | ## 7 **Conclusion** In this study, we presented a new dialog-level annotation workflow DQA, which enables high-velocity labelling of multi-turn human-computer interactions. Our approach is similar to Bodigutla et al. (2020), but differs in that we gather labels from expert annotators instead of end users themselves. We showed that a supervised model trained on DQA annotations outperforms several strong baselines based on aggregating turn-level defect scores. Furthermore, we observed that the model generalizes better to a previously unseen domain. We also found several qualitative patterns of interest, most notably that DQM's advantage over baselines expands as dialog length increases. These findings jointly lend support for an annotation-based approach to estimating multi-turn interaction quality for large-scale dialog systems. ## Limitations Our proposed approach is designed explicitly for evaluation of task-oriented dialog systems, and is hence unlikely to generalize well to chitchat systems. Most traffic to our platform (and our annotation workflows, including DQA) comes in the form of task-oriented interactions. User turns in the traffic we analyze tend to be quite short (usually less than 20 tokens) and direct, so our model is unlikely to perform as well on dialogs driven by long-form user utterances. ## Ethical Considerations We do not envision any ethical concerns with the research presented here. No customer data is released or presented in this paper, and even our internal data sources are fully de-identified and contain no customer Personal Identifiable Information (PII). ## Acknowledgments We wish to thank: Di Wang and Wenbo Yan of Alexa Shopping for providing the OOD test data; the Amazon Data Services team for their work producing annotations; and the metrics team in Alexa for developing the turn-level model that forms the backbone of the dialog-level model presented here. And thanks to the anonymous reviewers, whose feedback helped to clarify and improve this paper. References Praveen Kumar Bodigutla, Lazaros Polymenakos, and Spyros Matsoukas. 2019a. Multi-domain conversation quality evaluation via user satisfaction estimation. 3rd Workshop on Conversation AI: Today's Practice and Tomorrow's Potential, 33rd Conference on Neural Information Processing Systems. Praveen Kumar Bodigutla, Aditya Tiwari, Josep VallsVargas, Lazaros Polymenakos, and Spyros Matsoukas. 2020. Joint turn and dialogue level user satisfaction estimation on multi-domain conversations. In *EMNLP 2020*. Praveen Kumar Bodigutla, Longshaokan Wang, Kate Ridgeway, Joshua Levy, Swanand Joshi, Alborz Geramifard, and Spyros Matsoukas. 2019b. 
Domainindependent turn-level dialogue quality evaluation via user satisfaction estimation. *arXiv preprint* arXiv:1908.07064. Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In International Conference on Learning Representations. Alessandra Cervone and Giuseppe Riccardi. 2020. Is this dialogue coherent? Learning from dialogue acts and entities. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGdial 2020, pages 162–174. Association for Computational Linguistics. Yang Deng, Wenxuan Zhang, Wai Lam, Hong Cheng, and Helen Meng. 2022. User satisfaction estimation with sequential dialogue act modeling in goaloriented conversational systems. In *Proceedings of* the ACM Web Conference 2022, pages 2998–3008. Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2021. Survey on evaluation methods for dialogue systems. *Artificial Intelligence Review*, 54(1):755–810. Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Peter Ku, Anuj Kumar Goyal, Sanchit Agarwal, Shuyang Gao, and Dilek Hakkani-Tur. 2019. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. *arXiv preprint arXiv:1907.01669*. Sarik Ghazarian, Behnam Hedayatnia, Alexandros Papangelis, Yang Liu, and Dilek Hakkani-Tur. 2021. User response and sentiment prediction for automatic dialogue evaluation. arXiv preprint arXiv:2111.08808. Sarik Ghazarian, Behnam Hedayatnia, Alexandros Papangelis, Yang Liu, and Dilek Hakkani-Tur. 2022a. What is wrong with you?: Leveraging user sentiment for automatic dialog evaluation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 4194–4204, Dublin, Ireland. Association for Computational Linguistics. Sarik Ghazarian, Ralph Weischedel, Aram Galstyan, and Nanyun Peng. 2020. Predictive engagement: An efficient metric for automatic evaluation of opendomain dialogue systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34.05, pages 7789–7796. Sarik Ghazarian, Nuan Wen, Aram Galstyan, and Nanyun Peng. 2022b. DEAM: Dialogue coherence evaluation using AMR-based semantic manipulations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 771–785, Dublin, Ireland. Association for Computational Linguistics. Saurabh Gupta, Xing Fan, Derek Liu, Benjamin Yao, Yuan Ling, Kun Zhou, Tuan-Hung Pham, and Edward Guo. 2021. RoBERTaIQ: An efficient framework for automatic interaction quality estimation of dialogue systems. In *Proceedings of DeMaL, Second International Workshop on Data-Efficient Machine Learning* (KDD 2021). Ryuichiro Higashinaka, Toyomi Meguro, Kenji Imamura, Hiroaki Sugiyama, Toshiro Makino, and Yoshihiro Matsuo. 2014. Evaluating coherence in open domain conversational systems. In *15th Annual Conference of the International Speech Communication Association (INTERSPEECH)*, pages 130–134. ISCA. Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. GRADE: automatic graphenhanced coherence metric for evaluating opendomain dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9230–9240. Association for Computational Linguistics. Tianbo Ji, Yvette Graham, Gareth Jones, Chenyang Lyu, and Qun Liu. 2022. 
Achieving reliable human assessment of open-domain dialogue systems. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics (Volume 1: Long Papers), pages 6416–6437, Dublin, Ireland. Association for Computational Linguistics. Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, Michael Heck, Shutong Feng, and Milica Gašic. 2022. Dialogue evaluation ´ with offline reinforcement learning. arXiv preprint arXiv:2209.00876. Shikib Mehri and Maxine Eskénazi. 2020. Unsupervised evaluation of interactive dialog with DialoGPT. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGdial 2020, pages 225–235. Association for Computational Linguistics. Mohsen Mesgar, Sebastian Bücker, and Iryna Gurevych. 2020. Dialogue coherence assessment without explicit dialogue act labels. In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1439–1450. Association for Computational Linguistics. Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. Soloist: Few-shot task-oriented dialog with a single pretrained auto-regressive model. arXiv preprint arXiv:2005.05298. Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Schemaguided dialogue state tracking task at dstc8. *arXiv* preprint arXiv:2002.01359. Alexander Schmitt and Stefan Ultes. 2015. Interaction quality: assessing the quality of ongoing spoken dialog interaction by experts—and how it relates to user satisfaction. *Speech Communication*, 74:12–36. Alexander Schmitt, Stefan Ultes, and Wolfgang Minker. 2012. A parameterized and annotated spoken dialog corpus of the CMU Let's Go bus information system. In *Proceedings of the Eighth International* Conference on Language Resources and Evaluation (LREC'12), pages 3369–3373. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In *North American Chapter of the Association for* Computational Linguistics (NAACL). Clemencia Siro, Mohammad Aliannejadi, and Maarten de Rijke. 2022. Understanding user satisfaction with task-oriented dialogue systems. In *Proceedings of* the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. Weiwei Sun, Shuo Zhang, Krisztian Balog, Zhaochun Ren, Pengjie Ren, Zhumin Chen, and Maarten de Rijke. 2021. Simulating user satisfaction for the evaluation of task-oriented dialogue systems. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information* Retrieval, pages 2499–2506. Stefan Ultes, Robert ElChab, and Wolfgang Minker. 2014. Application and evaluation of a conditioned hidden markov model for estimating interaction quality of spoken dialogue systems. In Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice, pages 303–312. Springer. Svitlana Vakulenko, Maarten de Rijke, Michael Cochez, Vadim Savenkov, and Axel Polleres. 2018. Measuring semantic coherence of a conversation. In *International Semantic Web Conference*, pages 634–651. Marilyn Walker, Candace Kamm, and Diane Litman. 2000. Towards developing general models of usability with PARADISE. *Natural Language Engineering*, 6(3-4):363–377. Marilyn Walker, Diane Litman, Candace Kamm, and Alicia Abella. 1997. 
PARADISE: A framework for evaluating spoken dialogue agents. In *Proceedings of* the 35th Annual Meeting of the ACL and Eighth Conference of the European Chapter of the Association for Computational Linguistics, page 271–280. Chien-Sheng Wu, Steven Hoi, Richard Socher, and Caiming Xiong. 2020. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. arXiv preprint arXiv:2004.06871. Yunyi Yang, Yunhao Li, and Xiaojun Quan. 2021. Ubar: Towards fully end-to-end task-oriented dialog system with GPT-2. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35.16, pages 14230– 14238. Yi-Ting Yeh, Maxine Eskenazi, and Shikib Mehri. 2021. A comprehensive assessment of dialog evaluation metrics. *arXiv preprint arXiv:2106.03706*. Chen Zhang, Yiming Chen, Luis Fernando D'Haro, Yan Zhang, Thomas Friedrichs, Grandee Lee, and Haizhou Li. 2021. DynaEval: Unifying turn and dialogue level evaluation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 5676– 5689. Association for Computational Linguistics. ## A **Dialog Quality Annotation Workflow** Design Here are some selected questions for collecting human annotations used in the DQA workflow. The design of this workflow was inspired by See et al. (2019). In each annotation task, a multi-turn dialog is presented to the data annotator (DA) in its entirety. The dialog consists of a sequence of turns. Each turn consists of a User request and a System response. Turn Level: First, the DA is asked to rate every turn in the dialog. Provide an overall rating for the System's response in the current turn - 1-Terrible - 2-Bad - 3-Ok - 4-Good - 5-Excellent Dialog Level: Next the DA is asked to answer a series of dialog-level questions to capture the overall rating along with some salient attributes of the dialog. [User Satisfaction] Rate the overall user satisfaction based on their interaction in the dialog - 1-Very Dissatisfied - 2-Dissatisfied - 3-Normal - 4-Satisfied - 5-Very Satisfied [Goal Completion] How many goals are in the dialog? - Zero - One - Many [Goal Progression] Did the user make progress towards achieving their goals? - No Progress - Some Progress - Full Progress [Goal Completion] How many goals did the user complete in the dialog? - None Completed - Some Completed - All Completed [Goal Friction] Did the user encounter friction trying to achieve their goals in the dialog? - Lots of Friction - Some Friction - No Friction [Coherence] How often did the System say something which did NOT make sense? - Never Made Sense - Some Made Sense - All Made Sense [Sentiment] Describe the user's sentiment in the conversation with the System - Negative - Neutral - Positive ## B **Dialog Patterns** In this section, we compare the behavior of the baselines methods and DQM model predictions on specific customer interaction patterns found in multi-turn conversations. Note that the dialog samples in this appendix are synthetic examples fabricated to illustrate common use cases supported by our platform. ## B.1 **Fatal Turns In A Dialog** Sometimes a dialog can be considered defective based on the interaction in a single turn. We refer to such turns as "fatal turns" (See Turn 2 in Figure 4). This example illustrates that baseline methods which aggregate turn-level information do not adequately capture the non-linear nature of the overall customer satisfaction in a multi-turn dialog. 
In contrast, we observed that DQM has a higher chance of detecting defective dialogs with fatal turns.

## B.2 **User Rephrase**

Another common dialog pattern is a user rephrase (see Figure 5). Here the user simply repeats or slightly rephrases their initial request hoping for a better system response. User rephrases in most cases indicate customer friction and often get poor dialog-level ratings. Since the TLD model we use for encoding turns (see § 4) considers context around the current turn, it learns a strong association between user rephrases and defective turns.

## B.3 **User Refinement**

In use cases involving topic exploration or navigation of recommendations, a user progressively adds more constraints to refine their earlier requests (see Figure 6). Unlike the user rephrase pattern described in Section B.2, the previous turns leading to more refinement do not necessarily indicate an unsatisfactory customer experience. A well-performing dialog-quality model should learn to distinguish between frictional rephrases and non-frictional refinements. We note that TLD-based baselines have a bias towards incorrectly predicting refinements as defects. As DQM uses TLD as an input signal, DQM too struggles to effectively distinguish between frictional rephrases and non-frictional refinements.
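To illustrate the rephrase/refinement distinction discussed in B.2 and B.3, here is a toy lexical heuristic of our own; it is not part of DQM or the TLD model, and a real system would need far more context than surface overlap.

```python
# A toy heuristic (ours, not part of DQA/DQM): call a user turn a rephrase when it is
# lexically very similar to the previous user turn, and a refinement when it mostly
# extends the previous turn with additional constraint words.
from difflib import SequenceMatcher

def classify_followup(prev_utt, curr_utt, sim_threshold=0.8):
    similarity = SequenceMatcher(None, prev_utt.lower(), curr_utt.lower()).ratio()
    extends_prev = set(prev_utt.lower().split()) <= set(curr_utt.lower().split())
    if similarity >= sim_threshold:
        return "rephrase"            # near-duplicate request, likely friction
    if extends_prev and len(curr_utt.split()) > len(prev_utt.split()):
        return "refinement"          # previous request plus extra constraints
    return "other"

print(classify_followup("play the giants game", "play giants game"))                     # rephrase
print(classify_followup("show me running shoes", "show me running shoes under 50 dollars"))  # refinement
```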
liu-etal-2023-tab
Tab-CQA: A Tabular Conversational Question Answering Dataset on Financial Reports
https://aclanthology.org/2023.acl-industry.20
Existing conversational question answering (CQA) datasets have been usually constructed from unstructured texts in English. In this paper, we propose Tab-CQA, a tabular CQA dataset created from Chinese financial reports that are extracted from listed companies in a wide range of different sectors in the past 30 years. From these reports, we select 2,463 tables, and manually generate 2,463 conversations with 35,494 QA pairs. Additionally, we select 4,578 tables, from which 4,578 conversations with 73,595 QA pairs are automatically created via a template-based method. With the manually- and automatically-generated conversations, Tab-CQA contains answerable and unanswerable questions. For the answerable questions, we further diversify them to cover a wide range of skills, e.g., table retrieval, fact checking, numerical reasoning, so as to accommodate real-world scenarios. We further propose two different tabular CQA models, a text-based model and an operation-based model, and evaluate them on Tab-CQA. Experiment results show that Tab-CQA is a very challenging dataset, where a huge performance gap exists between human and neural models. We will publicly release Tab-CQA as a benchmark testbed to promote further research on Chinese tabular CQA.
## Tab-Cqa: A Tabular Conversational Question Answering Dataset On Financial Reports Chuang Liu1**, Junzhuo Li**2, and **Deyi Xiong**1,2 ∗ 1 College of Intelligence and Computing, Tianjin University, Tianjin, China 2 School of New Media and Communication, Tianjin University, Tianjin, China {liuc_09,jzli,dyxiong}@tju.edu.cn ## Abstract Existing conversational question answering (CQA) datasets have been usually constructed from unstructured texts in English. In this paper, we propose Tab-CQA, a tabular CQA dataset created from Chinese financial reports that are extracted from listed companies in a wide range of different sectors in the past 30 years. From these reports, we select 2,463 tables, and manually generate 2,463 conversations with 35,494 QA pairs. Additionally, we select 4,578 tables, from which 4,578 conversations with 73,595 QA pairs are automatically created via a template-based method. With the manually- and automatically-generated conversations, Tab-CQA contains answerable and unanswerable questions. For the answerable questions, we further diversify them to cover a wide range of skills, e.g., table retrieval, fact checking, numerical reasoning, so as to accommodate real-world scenarios. We further propose two different tabular CQA models, a text-based model and an operation-based model, and evaluate them on Tab-CQA. Experiment results show that Tab-CQA is a very challenging dataset, where a huge performance gap exists between human and neural models. In order to promote further research on Chinese tabular CQA, we release the dataset as a benchmark testbed at https://github.com/ tjunlp-lab/Tab-CQA. ## 1 Introduction Conversational question answering (CQA) extends traditional question answering to a conversational scenario where questions and answers are usually related to conversation history. Recent years have witnessed an upsurge of interest in both dataset building for CQA and models/approaches to CQA. Previous CQA datasets have been constructed either for text span extracting tasks (Choi et al., 2018; Reddy et al., 2019; Campos et al., 2020; Saeidi et al., 2018) or SQL-style table search tasks (Iyyer et al., 2017; Yu et al., 2019b). All these datasets are in English. Public CQA datasets in other languages are either rare or not available at all, which is of course not desirable for an inclusive CQA study across different classes of languages (Joshi et al., 2020). In addition to the language dimension in diversifying CQA tasks and datasets, data source is yet another important factor. Existing CQA datasets are usually constructed from unstructured texts. Structured or semi-structured data, like tables, are also important sources for information gathering. Furthermore, structured tables exhibit reasoning skills (e.g., more numeric reasoning) with a different distribution from those on unstructured texts. Inspired by the aforementioned diversity in both languages and data sources for CQA, in this paper, we propose Tab-CQA, a large-scale tabular conversational question answering dataset built from Chinese financial reports. It contains 2,463 manuallygenerated conversations and 4,578 automaticallygenerated conversations, with a total of 109,089 question-answer pairs. Each dialogue consists of multiple rounds of questions and answers, which are either manually created by crowdsourced workers playing roles of students and teachers or automatically created by a template-based method. 
For manually-created dialogues, the student reads a partially masked table and asks a series of questions. We train students to ask questions as naturally as possible, preserving the characteristics of natural conversations, such as ellipsis and coreference in dialogue. The teacher provides answers to the questions from the student by carefully checking the given completely visible table. In addition to question-answer pair collection in a conversational way, we also provide annotations to manually created dialogues. The teacher annotates an answer type for each provided answer. Based on pre-annotation and analysis on financial tables, we divide answer types into three categories: table retrieval, fact checking and computation. In addition to the answer type annotation, the teacher also needs to provide conversation flow tags to control the flow of conversation, *i.e., good, ok, unallowable*. A tabular conversational question answering example from Tab-CQA is given in Appendix A.

∗ Corresponding author.

| Dataset | # Conversations | # Questions | Avg. Turns | Tabular | Natural language questions | Numerical reasoning | Chinese |
|---|---|---|---|---|---|---|---|
| CoQA | 8,399 | 127,000 | 15.2 | X | ✓ | X | X |
| QuAC | 13,594 | 98,407 | 7.2 | X | ✓ | X | X |
| DoQA | 2,437 | 10,917 | 4.48 | X | ✓ | X | X |
| SQA | 6,066 | 17,553 | 2.9 | ✓ | ✓ | ✓ | X |
| CoSQL | 2,164 | 15,598 | 5.2 | ✓ | X | X | X |
| HybridQA | - | 70,153 | - | ✓ | ✓ | ✓ | X |
| OTT-QA | - | 45,841 | - | ✓ | ✓ | ✓ | X |
| TAT-QA | - | 16,552 | - | ✓ | ✓ | ✓ | X |
| FinQA | - | 8,281 | - | ✓ | ✓ | ✓ | X |
| DROP | - | 96,567 | - | X | ✓ | ✓ | X |
| CMRC2017 | - | 364,295 | - | X | ✓ | X | ✓ |
| CMRC2018 | - | 19,071 | - | X | ✓ | X | ✓ |
| DuReader | - | 200,000 | - | X | ✓ | X | ✓ |
| Tab-CQA | 7,041 | 109,089 | 14.6 | ✓ | ✓ | ✓ | ✓ |

Table 1: Comparison of Tab-CQA to other related datasets.

All questions in Tab-CQA require reasoning across the given table and dialogue history. Even for table retrieval questions, they are created in a way that is more difficult than just span extraction. The dataset also contains unanswerable questions. In addition to manually created QA pairs, we automatically generate 73,595 QA pairs based on predefined templates over tables. Unlike manually labeled QA pairs, automatically-generated pairs are tagged with special labels.

We then propose two different CQA models: a text-based model and an operation-based model. The text-based model converts the table into a passage and uses a multi-type network for different types of answers in Tab-CQA. The operation-based model converts the table into triplets to facilitate numeric reasoning.

The contributions of the work are as follows.

- We propose Tab-CQA to diversify existing CQA datasets in language, data source and task definition. To the best of our knowledge, Tab-CQA is the first conversational question answering dataset in Chinese. Unlike other Chinese QA datasets, it focuses on understanding tables in financial reports.
- We introduce a method to build tabular conversational QA datasets, where students cannot see entire tables and ask questions that require high cognitive skills to answer, e.g., logical and numerical reasoning skills.
- We use Tab-CQA as a benchmark dataset to test two different methods depending on the form of table representation and provide a systematic error analysis for the best method. The dataset will be publicly available soon.

## 2 Related Work

Tab-CQA is related to datasets for text-based conversational question answering and tabular question answering. It is also partially related to datasets on numerical reasoning and machine reading comprehension in Chinese. The comparison of Tab-CQA to other related datasets is shown in Table 1.

Text-based conversational question answering datasets. Reddy et al. (2019) propose a CQA dataset, CoQA.
It contains 8K conversations with 127K question-answer pairs. The dataset selects passages from several domains, such as children's stories, news, and science. Answers of CoQA are mostly a short fragment or an entity. Passages are visible to both questioners and responders in CoQA. Choi et al. (2018) present a CQA dataset QuAC that focuses mainly on information seeking. It contains 14K conversations on passages selected from Wikipedia. Unlike CoQA, only the responder can see complete passages in QuAC while the questioner can only see the titles of passages. Due to this setting, QuAC may contain unanswerable questions. Hence, annotators not only annotate the answerability of questions, but also provide dialogue actions to control the flow of dialogue. Partially inspired by this, we present only header rows/columns of extracted tables to the student in building our dataset. Campos et al. (2020) build a domain-specific CQA dataset DoQA. The dataset contains 2.4K conversations, with 10.9K question-answer pairs. DoQA also includes question-answer pairs in information retrieval scenarios. All these CQA datasets are different from Tab-CQA in that they create QA-style conversations on unstructured texts written in English.

Tabular and database-based question answering datasets. Database queries are often relatively complex. Hence, Iyyer et al. (2017) propose a SQL-style CQA dataset SQA to decompose complex queries into several simple questions, so that there is a contextual relationship between them. Their dataset uses Wikipedia tables to create 6K sequences of questions, where complex questions are decomposed into several simple questions. Yu et al. (2019a) create a cross-domain corpus based on a conversational query system, CoSQL. It collects 3K+ conversations from 200 databases covering 138 domains, containing 30K+ rounds of conversations and 10K+ annotated SQL queries. CoSQL is significantly different from ours in that it collects SQL-style queries rather than natural language questions. Chen et al. (2020c) build a hybrid text- and table-based QA dataset HybridQA. Each question is aligned to a structured Wikipedia table and entities in the table are linked to free texts. The dataset contains 70K question-answer pairs and 13K tables, each of which is associated with an average of 44 paragraphs. Chen et al. (2020a) present an open domain QA dataset OTT-QA built on the base of HybridQA. They re-annotate 45K questions, which require multi-step reasoning, aggregating information from tables and texts. Zhu et al. (2021) and Chen et al. (2021) propose hybrid QA datasets, also focusing on answering questions over financial data. The two datasets contain 16,552 and 8,281 question-answer pairs, respectively. Despite the similarity to Tab-CQA in financial QA, the two datasets are in English and not in a conversational format. Yet another dataset related to Tab-CQA is TABFACT (Chen et al., 2020b), which is not a QA dataset. The dataset focuses on table-based fact detection. Inferences are regarded as positive if they match corresponding table descriptions. All the above datasets are in English and questions/queries in these datasets are either SQL-style or uncontextually linked.

Datasets on numerical reasoning. Dua et al. (2019) propose a QA dataset DROP with numerical inference-type questions. As numbers provide important supporting information for financial statements, we create QA pairs involving numerical reasoning.

Chinese machine reading comprehension datasets.
Inspired by the well-known machine reading comprehension (MRC) dataset SQuAD (Rajpurkar et al., 2016), several Chinese MRC datasets have also been proposed (Cui et al., 2016, 2018, 2019b). He et al. (2018) build a large-scale open domain QA dataset, which annotates 200K queries from search engines. Jing et al. (2019) present a bilingual MRC dataset where parallel Chinese and English texts, questions and answers are provided. Sun et al. (2020) propose a free-form multiple-choice Chinese machine reading comprehension dataset C3. Unfortunately, none of these Chinese MRC datasets are in a conversational setting.

## 3 Dataset Creation

This section elaborates how Tab-CQA is created, including details on table extraction, conversation collection and annotation.

## 3.1 Table Extraction

We have collected nearly 30 years of annual financial reports of Chinese companies, listed in the major segments of the Shenzhen Stock Exchange and Shanghai Stock Exchange, covering 18 industry sectors, e.g., business, trade, power, retail, real estate and so on. First, for each year, we randomly select one company from each industry sector. Second, from each selected company, we randomly choose a financial report of that company. In this way, we have collected 6,661 reports for table extraction. As all reports are in PDF format, we use the table extraction tool PDFlux (http://pdflux.com/). Only tables where the number of cells is larger than 15 and blank cells account for less than 30% of all cells are kept. Finally, we have extracted 7,041 tables.
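The table-keeping rule of Section 3.1 (more than 15 cells, fewer than 30% blank cells) can be written as a small filter. The sketch below is illustrative: it assumes tables have already been extracted into lists of rows of cell strings, and PDFlux extraction itself is not shown; the function name is ours.

```python
def keep_table(table: list[list[str]]) -> bool:
    """Keep a table only if it has more than 15 cells and fewer than 30% blank cells."""
    cells = [cell for row in table for cell in row]
    if len(cells) <= 15:
        return False
    blanks = sum(1 for cell in cells if not cell.strip())
    return blanks / len(cells) < 0.30

# toy example: a 4x5 table with two empty cells -> 20 cells, 10% blank
example = [["营业收入", "2011", "2012", "2013", "2014"],
           ["100", "200", "", "300", "400"],
           ["10", "20", "30", "", "50"],
           ["a", "b", "c", "d", "e"]]
print(keep_table(example))  # True
```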
In this way, we have obtained 2,463 conversations with 35,494 question-answer pairs. Automatically-Generated Conversations. We use a template-based method to automatically generate QA pairs. Specifically, we define 9 operation templates over extracted tables, as show in Appendix B. Then we randomly select an operation, perform it on a set of triplets extracted from tables. Each triplet consists of the cell value from the table and its row and column names. We define the triplet as < Rowi, Columni*, Cell*i >, where i is the index of arguments in predefined templates (i.e., i = 1 or 2). We randomly selected 100 manuallygenerated conversations. We then selected operations that occur more than 8 times as template operations. In total, the selected operations account for 84% of the selected samples. More details are Statistics Train Dev Test Table 6548 247 246 Avg.T Tokens 771.24 770.45 761.00 Question/Answer 101,884 3,601 3,604 Avg.Q Tokens 19.58 11.52 11.55 Avg.Turns 15.56 14.58 14.58 Table 2: Overall statistics of manual annotation of TabCQA. T: table. Q: question. ## In Appendix B. In the end, we have automatically generated 4,578 conversations with 73,595 question-answer pairs. ## 3.3 Conversation Annotation In order to have a deep understanding on the nature of collected answers and questions and a good control of conversation flow that allows the student and teacher to focus on a given table, our collection tool requires the teacher to do two types of conversation annotation on the teacher side. Appendix C provides details on how we control quality and diversity. Answer Type Annotation. As not all information of a given table is visible to the student, we do not ask the student to annotate the type of questions. On the contrary, the teacher can see the complete table. Hence the teacher knows what should be given as an answer and how the answer should be found. According to the nature of answers from extracted financial tables, we roughly divide them into three types: table retrieval, fact checking and computation. For table retrieval, answers can be found directly from a given table via simple reasoning. For fact checking, answers are yes or no according to the facts in the given table. For computation, answers are not directly from the given table, but they can be obtained by numerical reasoning over numbers in the given table, such as numerical comparison (e.g., finding the maximum, minimum, larger, smaller numbers from the table), arithmetic operations, and so on. The teacher is asked to annotate each answer with one of these three answer types. In addition, if there is no answer, the teacher needs to annotate "unanswerable". For automatically-generated questions, we annotate answer types according to Table 5, and if there is no cell value in the selected triplet, the answer type is "unanswerable". Conversation Flow Control Annotation. In ad- ![4_image_0.png](4_image_0.png) dition to providing answers and annotating answer types, the teacher is also in charge of conversation flow control (i.e., guiding the student to ask questions relevant to the given table). According to the relatedness of questions to the given table, the teacher will annotate each answer with a flow control tag: "good", "ok" or "unallowable". "good" suggests that the topic of the question in current conversation turn is related to the given table and questions from the same topic can be continued in future conversation turns. 
"ok" indicates that the current topic is ok but encouraged to be changed in future conversation turns. "unallowable" means that the current topic is not related to the given table and should be changed immediately. All the three flow control tags will be present to the student so that the student can ask appropriate questions in future conversation. We do not perform conversation flow control annotation during the automatically-generated process. ## 3.4 Overall Statistics Table 2 shows the overall statistics of Tab-CQA. From financial reports collected from Chinese listed companies in the last three decades, we randomly select 75 companies from 18 domains. We have extracted 7,041 tables and collected 109,089 question-answer pairs. The average numbers of tokens in extracted tables and questions are 770.85 and 19.01, respectively. The average number of turns in collected conversations is 15.49. We divide manually-generated data into the training, development and test set in the proportion of 8:1:1. To ensure that the development and test set is close to the real scenario, the automatically-generated data are used as a supplement to the training set. Further analyses of Tab-CQA are displayed in Appendix D. ## 4 Models For Tabular Cqa We propose two different models, namely textbased and operation-based model, as shown in Figure 1. For the text-based model, we convert each table into a piece of text, so the task is converted into a reading comprehension form. However, in practice this approach is difficult to effectively model numerical reasoning that is pervasive in our dataset. Therefore, we further propose an operation-based neural model that represent a table as a series of triplets. ## 4.1 Text-Based Model Partially inspired by MTMSN (Hu et al., 2019), we modify the output type of the text-based model to T able Retrieval, *F act Check* and Computation. We transform a table into a passage that is a table description sequence containing each cell in the table. We then concatenate the passage and question into an input sequence with [CLS] and [SEP] as in BERT (Devlin et al., 2019). This concatenated sequence is processed by L pretrained Transformer blocks: $\mathbf{H}_i=\text{TransformerBlock}\left(\mathbf{H}_{i-1}\right),\forall i\in[1,L]$. (1) We then use the contextualized token representations as the input to predict the type of answer as: $$\mathbf{p}^{\mathrm{type}}=\mathrm{softmax}\left(\mathrm{FFN}\left(\mathbf{h}^{\mathrm{CLS}}\right)\right)$$ CLS (2) For *T able Retrieval* questions, we calculate the probability of the beginning and ending positions of the answer fragment in the passage as: $$\begin{array}{l}{{\mathbf{p}^{\mathrm{start}}=\mathrm{softmax}\left(\mathbf{W}^{S}\mathbf{H}_{i}^{\mathrm{start}}\right),}}\\ {{\mathbf{p}^{\mathrm{end}}=\mathrm{softmax}\left(\mathbf{W}^{E}\mathbf{H}_{i}^{\mathrm{end}}\right)}}\end{array}$$ For *F act Check* questions, we consider it as a binary classification problem and calculate whether the problem description is true or not: $$\mathbf{p}^{\mathrm{fact\,check}}=\mathrm{softmax}\left(\mathbf{W}^{F}\mathbf{H}_{C L S}\right)$$ For *Computation* questions, we assign a computational sign to all numbers in the passage, i.e., +, −, 0. We then consider it as a ternary classification problem. 
## 4.2 Operation-Based Model

This model consists of two sub-units for triplet prediction and operation prediction, respectively. For the triplet prediction unit, we consider it as a binary classification problem. We use outputs from a pretrained LM to estimate probabilities for the triplet detector:

$$\mathbf{p}^{\mathrm{triplet}}=\mathrm{softmax}\left(\mathbf{W}^{T}\mathbf{H}_{i}\right)\qquad(6)$$

We then take questions and predicted triplets as input to the operation prediction unit and predict the desired operation. We consider it as an N-ary classification problem, where N is 9, the number of operation templates that have been defined in Table 5. The predicted probability is:

$$\mathbf{p}^{\mathrm{operation}}=\mathrm{softmax}\left(\mathbf{W}^{O}\mathbf{H}_{i}\right)\qquad(7)$$

The final answer is obtained based on the predicted triplets and operation together. For example, if the predicted triplets contain T1 and T2 and the predicted operation is 4, it means determining whether the value of T1 is bigger than the value of T2.

| Model | BERT | FIN | WWM |
|---|---|---|---|
| TextX | 9.17 / 8.58 | 9.13 / 8.52 | 9.22 / 8.61 |
| Text∗X | 16.17 / 15.62 | 14.29 / 14.13 | 16.18 / 15.70 |
| OpX | 16.78 / 18.01 | 16.83 / 16.36 | 16.06 / 16.78 |
| Op∗X | 19.24 / 20.32 | 18.29 / 18.76 | 19.57 / 19.43 |

## 5 Experiments

## 5.1 Experimental Settings

For the text-based model, we set the max sequence length, maximum query length and maximum answer length to 384, 64 and 30, respectively. The batch size was set to 10. For the operation-based model, we set the max sequence length to 32. The batch size was set to 3. All the optimizers were Adam with a learning rate of 5e-5. The number of Transformer layers for all PLMs is 12. Appendix E provides details on baseline models.

## 5.2 Results

We used F1 (%) to evaluate the performance of different pretrained language models on our dataset. It should be noted that for the text-based approach, automatically-generated QA pairs can be used as additional data to the training set, while for the operation-based approach, only the automatically-generated QA pairs can be used as training instances because manually labeled data lack the corresponding labels. Table 3 shows the experiment results of all models. The overall F1 scores of all these methods are much lower than those of neural models on other CQA datasets. This may be due to two reasons: numeric reasoning questions are pervasive, and 47.7% of questions involve either coreference or ellipsis, which makes Tab-CQA challenging for current neural models. The operation-based model is better than the text-based model as the former is more suitable for numeric reasoning. All Op∗X models are better than OpX models, suggesting that maintaining a balance of positive and negative samples is good for final performance since only a small fraction of cells in tables are related to questions in Tab-CQA.

## 5.3 The Impact Of The Number Of Dialogue Histories On Performance

Figure 2 shows the impact of conversation histories on model performance. It is important to note that the performance of the model does not increase with the number of conversation histories.
A direct reason for this is the distance between the current question and the conversational history on which it relies in Tab-CQA. Valid contextual information is not available in the proximate conversation histories. When the number of conversation histories reaches a certain number, there is a slight performance increase followed by a decrease. This is because a certain number of conversation histories provide sufficient contextual information required for answering the current question. Additional conversation histories may bring noise to the model, we will investigate this issue further in the future. ## 5.4 Error Analysis We selected the best model (Op∗WWM ) on the development set to conduct an in-depth error analysis. We classify answer errors into four categories: Triplet Prediction Errors (TPE), Operation Prediction Errors (OPE), Insufficient Number of Cell Values (INCV), and Operation Error Outside of Predefined (OEOP). Specifically, TPE means that an irrelevant triplet is selected; OPE denotes that a wrong operation is predicted, e.g., an addition operation is predicted as a subtraction operation; INCV represents that correctly answering the question involves more triplets than our model setting, For example, to answer the question, "What is the number of jobs with the highest number of people in the company in 2010?", it is required to retrieve all relevant triplets and then compare them; and OEOP means that the actual operation is not predefined. For example, the correct answer to the question "Which year's operating income is more than 1 million" is the row name, even though the model has selected the correct triplet, but there is no correct operation to answer it correctly. We randomly selected 100 QA pairs and manually check the results for each cell in Op∗WWM according to the error type. Of these, 26 questions were answered correctly, and the remaining 36 questions with errors contained 44 TPEs, 7 OPEs, 6 INCVs and 17 OEOPs, as shown in Table 4. ## 6 Conclusions In this paper, we have presented Tab-CQA, a tabular conversational question answering dataset built from tables randomly extracted from annual financial reports of Chinese listed companies over the past three decades in 18 industry sectors. The dataset contains 7,041 tables, of which 2,463 tables are equipped with a manually collected conversation generated by crowdsourced workers playing the roles of students and teachers, another 4,578 tables are automatically generated according to templates. We have collected 109,089 QA pairs , covering table retrieval, fact checking and computation, 47.7% of which are associated with coreference or ellipsis. We propose two models for Tab-CQA, and the experimental results indicate that the operationbased model is better than the text-based model. ## Acknowledgements The present research was partially supported by Zhejiang Lab (No. 2022KH0AB01) and Huawei. We would like to thank the anonymous reviewers for their insightful comments. ## References Jon Ander Campos, Arantxa Otegi, Aitor Soroa, Jan Deriu, Mark Cieliebak, and Eneko Agirre. 2020. DoQA - accessing domain-specific FAQs via conversational QA. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7302–7314, Online. Association for Computational Linguistics. Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, and W. William Cohen. 2020a. Open question answering over tables and text. 
Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and Yang William Wang. 2020b. Tabfact: A large-scale dataset for table-based fact verification. *ICLR*. Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. 2020c. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 1026–1036, Online. Association for Computational Linguistics. Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. 2021. FinQA: A dataset of numerical reasoning over financial data. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019a. Pretraining with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101. Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guoping Hu. 2019b. A span-extraction dataset for Chinese machine reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5883–5889, Hong Kong, China. Association for Computational Linguistics. Yiming Cui, Ting Liu, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guoping Hu. 2018. Dataset for the first evaluation on Chinese machine reading comprehension. In *Proceedings of the Eleventh International* Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. 2016. Consensus attention-based neural networks for Chinese reading comprehension. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1777–1786, Osaka, Japan. The COLING 2016 Organizing Committee. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2018. 
DuReader: a Chinese machine reading comprehension dataset from real-world applications. In *Proceedings of the Workshop on Machine Reading* for Question Answering, pages 37–46, Melbourne, Australia. Association for Computational Linguistics. Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1596–1606, Hong Kong, China. Association for Computational Linguistics. Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In *Proceedings of the 55th* Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1821– 1831, Vancouver, Canada. Association for Computational Linguistics. Yimin Jing, Deyi Xiong, and Zhen Yan. 2019. BiPaR: A bilingual parallel dataset for multilingual and cross-lingual reading comprehension on novels. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2452– 2462, Hong Kong, China. Association for Computational Linguistics. Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. *Transactions of the Association for Computational Linguistics*, 7:249–266. Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2087–2097, Brussels, Belgium. Association for Computational Linguistics. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging Chinese machine reading comprehension. Transactions of the Association for Computational Linguistics, 8:141– 155. Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962– 1979, Hong Kong, China. Association for Computational Linguistics. 
Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019b. SParC: Cross-domain semantic parsing in context. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4511–4523, Florence, Italy. Association for Computational Linguistics.

Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 3277–3287, Online. Association for Computational Linguistics.

| IDX | Operation | Answer Type |
|---|---|---|
| 1 | Find() | Table Retrieval |
| 2 | Find(max(find(),find())) | Table Retrieval |
| 3 | Find(min(find(),find())) | Table Retrieval |
| 4 | Bool(max(find(),find())) | Fact Check |
| 5 | Bool(min(find(),find())) | Fact Check |
| 6 | Bool(same(find(),find())) | Fact Check |
| 7 | Bool(diff(find(),find())) | Fact Check |
| 8 | Sum(find(),find()) | Computation |
| 9 | Sub(find(),find()) | Computation |

## A An Example From Tab-CQA

Figure 3 is a tabular conversational question answering example from Tab-CQA. The upper part is a part of an extracted financial table while the bottom part shows a multi-round conversation with question-answer pairs on the table. Grey cells: invisible areas to students in data collection. Orange section: dialogue history. Blue section: current conversation turn. S/T denotes student/teacher. We also provide a conversation flow tag and an answer type (in brackets) for each answer. Better viewed in color.

## B The Details Of The Selection Procedure

We define a triplet as < Row_i, Column_i, Cell_i >, i ∈ {1, 2}. The automatic process of QA generation is illustrated in Figure 4. We first feed the row and column names from the two triplets into the entity slots in the template. Next, we select words from a predefined verb list to fill the verb slots in the template based on known operations. For example, for a *Fact Check* type question, we will randomly select some comparison verbs, such as "smaller than". As shown in Figure 4, after filling the slots, a complete sentence is generated to check whether the fact "2011 operating income is less than 2012 operating income" is true. To ensure that the questions contain contextual links to dialogue history, we make some transformations to questions in each set of dialogues. If the same entity occurs in the previous rounds of questions, it is replaced with a pronoun. If the content of both entity slots in the same triplet in the previous round of questions is the same, it is simply omitted. We create *Coreference* and *Ellipsis* in automatically generated QA pairs in this way. The answer is generated based on the *Cell* values and the specific operation selected when generating the question. A question is treated as unanswerable when the *Cell* is empty. The 9 operation templates are displayed in Table 5.
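To make Appendix B and the operation templates concrete, here is an illustrative sketch of template-based question generation and answer derivation over two triplets. The operation mapping, verb choice, and helper names are our own simplification of the nine templates (e.g., operation 4 checks whether the first value is bigger, as in the example in Section 4.2, and operations 8/9 sum or subtract cell values); this is not the authors' generator.

```python
from typing import NamedTuple, Optional

class Triplet(NamedTuple):
    row: str      # e.g. "2011年"
    column: str   # e.g. "营业收入"
    cell: str     # cell value as a string; may be empty

# Simplified executors for the operation templates (indices 1-9).
OPS = {
    1: lambda a, b: a,            # Find()
    2: lambda a, b: max(a, b),    # Find(max(find(), find()))
    3: lambda a, b: min(a, b),    # Find(min(find(), find()))
    4: lambda a, b: a > b,        # Bool(max(find(), find()))  -> "is T1 bigger than T2?"
    5: lambda a, b: a < b,        # Bool(min(find(), find()))  -> "is T1 smaller than T2?"
    6: lambda a, b: a == b,       # Bool(same(find(), find()))
    7: lambda a, b: a != b,       # Bool(diff(find(), find()))
    8: lambda a, b: a + b,        # Sum(find(), find())
    9: lambda a, b: a - b,        # Sub(find(), find())
}

def fill_template(t1: Triplet, t2: Triplet, verb: str = "少") -> str:
    """Fill the entity and verb slots of a Fact Check-style template (cf. Figure 4)."""
    return f"{t1.row}的{t1.column}比{t2.row}的{t2.column}{verb}吗?"

def derive_answer(op_idx: int, t1: Triplet, t2: Triplet) -> Optional[object]:
    """Derive the answer from the cell values; an empty cell makes the question unanswerable."""
    if not t1.cell or not t2.cell:
        return None
    return OPS[op_idx](float(t1.cell), float(t2.cell))

t1 = Triplet("2011年", "营业收入", "100")
t2 = Triplet("2012年", "营业收入", "200")
print(fill_template(t1, t2))      # 2011年的营业收入比2012年的营业收入少吗?
print(derive_answer(5, t1, t2))   # True: 100 < 200
print(derive_answer(8, t1, t2))   # 300.0
```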
| Domains | Train | Dev | Test | Perc. (%) |
|---|---|---|---|---|
| Accommodation | 3705 | 257 | 126 | 3.8 |
| Agriculture | 6666 | 191 | 386 | 6.6 |
| Building | 8619 | 498 | 490 | 8.8 |
| Comprehensive | 2199 | 160 | 174 | 2.3 |
| Culture | 4067 | 292 | 303 | 4.3 |
| Education | 1992 | 120 | 178 | 2.1 |
| Electricity | 6550 | 303 | 507 | 6.8 |
| Environment | 3441 | 170 | 146 | 3.5 |
| Estate | 5832 | 90 | 78 | 5.5 |
| Finance | 12637 | 263 | 315 | 12.1 |
| Health | 3419 | 120 | 73 | 3.3 |
| Leasing | 2434 | 15 | - | 2.2 |
| Manufacturing | 14289 | 477 | 409 | 13.9 |
| Mining | 5180 | 121 | 86 | 4.9 |
| Science | 3627 | - | - | 3.3 |
| Software | 4914 | 117 | 120 | 4.7 |
| Transportation | 5598 | 139 | 92 | 5.4 |
| Wholesale | 6715 | 268 | 121 | 6.5 |

Table 6: Statistics on domains.

## C The Details Of Quality And Diversity Control

We use a strict quality control process to ensure the quality of Tab-CQA. Before starting to annotate Tab-CQA, we trained 28 annotators to help them fully understand our annotation conventions and learn how to use our annotation system. Afterwards, we gave all annotators samples for pre-annotation and, based on the pre-annotation results, we provided further explanations and training to ensure that the annotators could understand our goals. For each completed annotation, we asked both QA parties to swap roles for validation, including checking whether conversations are reasonable in context, whether answers are consistent with the table, and whether calculations are correct. If any errors were found, the annotators were asked to make corrections. When all annotations were completed, we selected ten annotators with good performance to perform a second round of data checking.

## D Dataset Analysis

## D.1 Table Distribution Over Domains

The distribution of extracted tables over these domains is displayed in Table 6. The top 3 domains are manufacturing, finance and building.

## D.2 Question Type Distribution

To further understand the types of questions and reasoning skills required to answer questions, we have randomly sampled 1000 questions from Tab-CQA for manual analysis. Table 7 shows the analysis results. The percentages of questions over the three types (i.e., table retrieval, fact checking, and computation) are the same as those of answers. For each type of questions, we further check if they are associated with discourse phenomena, such as co-reference (e.g., using pronouns to refer to entities mentioned in previous conversation turns) and ellipsis (e.g., omitting entities from previous conversation turns). We calculate the percentages of discourse-related question types. In total, ordinary questions that are not contextually linked account for 52.3% while questions associated with coreference account for 15.3% and questions with ellipsis 32.4%.

## E Settings

## E.1 Baseline Models

BERT: As BERT (Devlin et al., 2019) can be used in both span extraction QA tasks (Devlin et al., 2019) and MCQ tasks (Sun et al., 2020), we used a Chinese BERT trained on Chinese texts as our first PLM.

FinBERT: FinBERT is the first Chinese pretrained language model trained on financial texts based on the BERT architecture. FinBERT has achieved significant improvements in several downstream tasks in the finance domain over baselines. Since our dataset extracts tables from financial reports, we chose FinBERT as another PLM.

BERT-wwm: BERT-wwm (Cui et al., 2019a) is a Chinese pre-trained model trained with whole-word masking rather than subword masking. BERT-wwm uses a corpus from Chinese Wikipedia, which contains 24M sentences. The vocabulary size is set as 21,128. We used BERT-wwm as our PLM too.
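The baseline PLMs above serve as the encoder underneath the multi-type heads of Section 4.1, trained with the hyperparameters of Section 5.1. The rough PyTorch sketch below is our own illustration under stated assumptions: the Hugging Face checkpoint identifiers are guesses (the paper does not name them, and the Chinese FinBERT checkpoint is omitted), and the heads use plain linear layers rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint names for the Chinese BERT baselines.
BACKBONES = {"BERT": "bert-base-chinese", "BERT-wwm": "hfl/chinese-bert-wwm"}

class MultiTypeHead(nn.Module):
    """Illustrative heads in the spirit of Section 4.1: answer type (Eq. 2),
    span start/end (Eq. 3), fact check (Eq. 4) and per-number sign (Eq. 5)."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.type_ffn = nn.Linear(hidden, 3)   # Table Retrieval / Fact Check / Computation
        self.start = nn.Linear(hidden, 1)
        self.end = nn.Linear(hidden, 1)
        self.fact = nn.Linear(hidden, 2)       # true / false
        self.sign = nn.Linear(hidden, 3)       # +, -, 0 for each token position

    def forward(self, H):
        h_cls = H[:, 0]                        # [CLS] representation
        return {"type": self.type_ffn(h_cls).softmax(-1),
                "start": self.start(H).squeeze(-1).softmax(-1),
                "end": self.end(H).squeeze(-1).softmax(-1),
                "fact_check": self.fact(h_cls).softmax(-1),
                "sign": self.sign(H).softmax(-1)}

tok = AutoTokenizer.from_pretrained(BACKBONES["BERT"])
encoder = AutoModel.from_pretrained(BACKBONES["BERT"])   # 12-layer encoder, as in Section 5.1
head = MultiTypeHead(hidden=encoder.config.hidden_size)
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=5e-5)

batch = tok(["2011年的营业收入比2012年的多吗?"], ["表格描述文本 ..."],
            padding="max_length", truncation=True, max_length=384, return_tensors="pt")
H = encoder(**batch).last_hidden_state
print(head(H)["type"].shape)   # torch.Size([1, 3])
```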
Triplets: <2011年, 营业收入, 100> and <2012年, 营业收入, 200>. Template slots: Entity – prep – Entity – verb. Generated sentence: "2011年的营业收入 比 2012年的营业收入 少" ("The operating income in 2011 is smaller than the operating income in 2012?").

Figure 4: Question generation templates.

| Question Type | Percentage (%) | Discourse | Percentage (%) | Example |
|---|---|---|---|---|
| Table Retrieval | 81.5 | Ordinary | 57.5 | 机器设备的年折旧率是多少? (What is the annual depreciation rate of machinery and equipment?) |
| | | Coreference | 10.5 | 它的上期金额是多少? (What was its prior period amount?) |
| | | Ellipsis | 32.0 | 比例是多少? (What is the ratio?) |
| Fact Checking | 10.3 | Ordinary | 38.5 | 营业税比城市维护建设税的本期数多吗? (Is sales tax more than the current amount of city maintenance and construction tax?) |
| | | Coreference | 43.3 | 它比期末数坏账准备大吗? (Is it larger than the ending number of bad debt provision?) |
| | | Ellipsis | 18.2 | 比2017年的多吗? (Is it more than in 2017?) |
| Computation | 8.2 | Ordinary | 17.1 | 本期增加最高的和最低的和是多少? (What is the sum of the highest and lowest increase for the period?) |
| | | Coreference | 29.3 | 它们的和是多少? (What is the sum of them?) |
| | | Ellipsis | 53.6 | 小了多少? (How much smaller?) |

Table 7: Question type distribution of Tab-CQA.
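The *Coreference* variants illustrated in Table 7 are produced in the automatic pipeline by the simple rewrite described in Appendix B (replacing a previously mentioned entity with a pronoun). A toy sketch is below; the function name and pronoun choice are ours, and the slot-omission (*Ellipsis*) case is not modeled.

```python
def contextualize(question: str, entity: str, prev_questions: list[str], pronoun: str = "它") -> str:
    """If the entity already appeared in an earlier question of the dialogue,
    replace its first mention in the new question with a pronoun (Coreference)."""
    if any(entity in q for q in prev_questions):
        return question.replace(entity, pronoun, 1)
    return question

prev = ["2011年的营业收入是多少?"]
print(contextualize("2011年的营业收入比2012年的营业收入少吗?", "2011年的营业收入", prev))
# -> 它比2012年的营业收入少吗?
```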
lee-etal-2023-kosbi
KoSBI: A Dataset for Mitigating Social Bias Risks Towards Safer Large Language Model Applications
https://aclanthology.org/2023.acl-industry.21
Large language models (LLMs) learn not only natural text generation abilities but also social biases against different demographic groups from real-world data. This poses a critical risk when deploying LLM-based applications. Existing research and resources are not readily applicable in South Korea due to the differences in language and culture, both of which significantly affect the biases and targeted demographic groups. This limitation requires localized social bias datasets to ensure the safe and effective deployment of LLMs. To this end, we present KoSBI, a new social bias dataset of 34k pairs of contexts and sentences in Korean covering 72 demographic groups in 15 categories. We find that through filtering-based moderation, social biases in generated content can be reduced by 16.47%p on average for HyperCLOVA (30B and 82B), and GPT-3.
# Kosbi**: A Dataset For Mitigating Social Bias Risks Towards Safer Large** Language Model Applications Hwaran Lee1,2,⋆ **Seokhee Hong**3*,⋆,♯* **Joonsuk Park**1,2,4 Takyoung Kim1,♯ Gunhee Kim3 **Jung-Woo Ha**1,2 1NAVER AI Lab 2NAVER Cloud 3Seoul National University 4University of Richmond {hwaran.lee, jungwoo.ha}@navercorp.com park@joonsuk.org seokhee.hong@vision.snu.ac.kr gunhee@snu.ac.kr youngerous@gmail.com ## Abstract Large language models (LLMs) learn not only natural text generation abilities but also social biases against different demographic groups from real-world data. This poses a critical risk when deploying LLM-based applications. Existing research and resources are not readily applicable in South Korea due to the differences in language and culture, both of which significantly affect the biases and targeted demographic groups. This limitation requires localized social bias datasets to ensure the safe and effective deployment of LLMs. To this end, we present KOSBI, a new social bias dataset of 34k pairs of contexts and sentences in Korean covering 72 demographic groups in 15 categories. We find that through filtering-based moderation, social biases in generated content can be reduced by 16.47%p on average for HyperCLOVA (30B and 82B), and GPT-3. ## 1 Introduction Large language models (LLMs) acquire impressive text generation abilities from large-scale real-world pre-training data (Brown et al., 2020; Kim et al., 2021). However, LLMs also absorb toxicity, such as social biases (Sheng et al., 2019; Wallace et al., 2019a). This cannot be overlooked since the risk of generating toxic content impedes the safe use and potential commercialization of various downstream applications, such as AI assistants (Dinan et al., 2022; Bai et al., 2022a). To minimize the harm, numerous studies have tackled the detection and mitigation of toxicity in LLMs (Blodgett et al., 2020; Ganguli et al., 2022). Each study typically leverages datasets capturing a specific type of toxicity, such as social bias (Sap et al., 2020; Nangia et al., 2020) or hate speech (Warner and Hirschberg, 2012; Lee et al., 2022). These datasets are not only task-specific but also language- and culture-specific. For instance, consider hate speech made in South Korea and in the United States. In addition to the language, the mainly targeted demographic groups also differfeminists and Korean Chinese in South Korea, as opposed to African Americans and Jewish in the United States (Jeong et al., 2022). Also, the existing toxicity datasets in Korean mostly focus on explicit hate speech and consider a limited number of targeted demographic groups (Moon et al., 2020; Yang et al., 2022; Kang et al., 2022; Lee et al., 2022). This calls for a dataset to address social biases against a more comprehensive set of demographic groups in South Korea so that as many groups and people are protected. Here we present the Korean Social Bias (KOSBI) dataset, a large-scale dataset of 34k pairs of contexts and sentences in Korean with labels mainly capturing the presence of social biases. 1It covers 72 targeted demographic groups in 15 categories, 2 which is much more comprehensive than existing datasets, as shown in Table 2. The categories include not only the common ones like gender and religion but also those especially relevant to South Korea—e.g., marital status and domestic area of origin, both of which consist of demographic groups that suffer from social biases in the country more commonly than others do. 
Given the difficulty of crawling from the web sufficient data for each of the 72 demographic groups, we leveraged HyperCLOVA (Kim et al., 2021) to generate the data with in-context few-shot learning (Gao et al., 2021; Mishra et al., 2022). More specifically, we generated sentences and their respective contexts—which are also sentences, grammatically—for given target demographic groups. The generated contexts and sentences were then annotated by crowd workers as safe or *unsafe*. Here, *unsafe* contexts and sentences were further labeled as expressions of *stereotypes* (cognitive bias), *prejudice* (emotional bias), *discrimination* (behavioral bias), and/or *other*, adopting the taxonomy by Fiske (2023), as shown in Figure 1. (For labeling the context, *prejudice* and *discrimination* were combined due to the limited number of instances.)

| Dataset | # Inst. | # Cat. | # Groups | Data Source | Includes Context? | Toxicity Labels |
|---|---|---|---|---|---|---|
| BEEP! (Moon et al., 2020) | 9,341 | - | - | News comments | ✗ | Hate speech, Bias |
| APEACH (Yang et al., 2022) | 3,770 | 10 | - | Human-written | ✗ | Offensive |
| KOLD (Jeong et al., 2022) | 40,448 | 5 | 19 | News, YouTube comments | ✗ (Title) | Offensive |
| HateScore, Unsmile (Kang et al., 2022) | 31,195 | 7 (mixed) | | News, online community comments | ✗ | Hate speech, Profanity |
| K-MHaS (Lee et al., 2022) | 109,692 | 7 | - | News comments | ✗ | Hate speech, Profanity |
| KOSBI (Ours) | 34,214 | 15 | 72 | LM-generated | ✓ | Biased (Stereotypes, Prejudice, Discrimination), Other |

With KOSBI, we mitigate social biases in LLM-generated content using a filtering-based moderation approach, also known as rejection sampling (Ganguli et al., 2022). To do this, we first trained a safe sentence classifier using KOSBI. Then, for a given context, each LLM was used to generate a pool of sentences from which the safest sentence was chosen by the classifier. The human evaluation shows that social biases in generated content are reduced by 16.47%p on average for all three models tested—HyperCLOVA (82B), HyperCLOVA (30B), and GPT-3.

## 2 Related Works

Bias Mitigation in LLM-generated Content. LLMs are trained on real-world data, which often contains social biases toward certain demographic groups. This, in turn, induces biases in LLMs (Xu et al., 2021a). To date, various resources have been published to measure and mitigate such biases in LLMs (Sap et al., 2020; Nangia et al., 2020; Nadeem et al., 2021). Some of them are associated with specific tasks: *coreference resolution* to fight phenomena like associating certain professions with a particular gender (Rudinger et al., 2018; Zhao et al., 2018), and *question answering* to prevent answers stereotyped toward certain bias categories like gender or socio-economic status (Li et al., 2020; Parrish et al., 2022). These resources are not as effective for HyperCLOVA and other LLMs pre-trained on Korean corpora. Thus, we present a new resource in Korean, capturing the biases against prevalent demographic groups in South Korea. Also, our dataset covers a much more comprehensive set of demographic groups.

Hate Speech Detection. Röttger et al. (2021) define *hate speech* as "abuse that is targeted at a protected group or at its members for being a part of that group."
Resources created to help detect hate speech can be used to reduce hate speech generated by LLMs, thereby reducing the harm they can incur. Note these resources use various names interchangeably for the most part, e.g., hate speech (Warner and Hirschberg, 2012), abusive language (Wiegand et al., 2019), and toxic language (Gehman et al., 2020; Hartvigsen et al., 2022). Also, quite a few resources are for safer dialogue (Sun et al., 2022; Xu et al., 2021b; Xenos et al., 2021; Kim et al., 2022). Meanwhile, to reflect different languages and societies, researchers have created and proposed hate speech corpora in Chinese (Deng et al., 2022), Dutch (Demus et al., 2022), and Arabic (Mubarak et al., 2022) Similar to the resources capturing social biases, these resources are not as useful for Korean LLMs due to the differences in language and culture. Luckily, several resources in Korean exist, as summarized in Table 1. However, these resources either unspecify or cover only a small subset of demographic groups in South Korea. More importantly, they focus on explicit profanity and otherwise offensive expressions. Our dataset instead targets cases that cannot be identified with specific keywords, such as expressions of stereotypes, discrimination, and prejudice (without explicit profanity) toward 72 demographic groups. Safety Alignment of Language Models. Beyond social biases and hate speech, various categories have been proposed recently to enhance the safety of language models, such as human values (Solaiman and Dennison, 2021; Kenton et al., 2021), ethical judgements (Hendrycks et al., 2021; Lourie et al., 2021), and moral norms (Forbes et al., 2020; Emelin et al., 2021). Then, alignment learning methods through human feedback (Bai et al., 2022a) or even by AI feedback (Bai et al., 2022b) have been proposed. Moreover, red-teaming (Perez et al., 2022; Ganguli et al., 2022) and adversarial attack (Wallace et al., 2019b) approaches have also been suggested to identify vulnerabilities in language models in terms of safety. We expect our dataset and comprehensive categories will be helpful for the safety alignment of Korean society. ## 3 The Kosbi **Dataset** This study aims to address social biases against a comprehensive set of demographic groups in South Korea so as to make LLMs safer for as many groups and people as possible. (Here, we focus on social biases without explicit hate speech, as existing datasets address the latter.) To achieve this, we wanted KOSBI to consist of context-sentence pairs labeled as safe or *unsafe* for the demographic groups mentioned in them; this way, we can train LLMs to behave safely in the context of discussing a demographic group, rather than simply avoid it. ## 3.1 Demographic Groups Compilation With the goal of covering a comprehensive list of demographic groups, we first compiled the list by combining categories derived from the Universal Declaration of Human Rights (UDHR) and the National Human Rights Commission of Korea (NHRCK)4, which prohibit discriminatory treatment based on social identity. (See Table 4 for the list of categories.) Then, we defined social groups in each category, considering the unique characteristics of Korean culture. For instance, we consider the most widely practiced religions in Korea, and also progressive and conservative political parties, rather than the Democratic and Republican parties in the U.S. (See Table 8 for the list of demographic groups.) 
4 Specifically, refer to provisions related to discriminatory acts in violation of equal rights - Article 2 Subparagraph 3 of the National Human Rights Commission Act, and Article 3 Paragraph 1 Subparagraph 1 of the Anti-Discrimination Act.

| Categories | # Groups |
|---|---|
| Gender identity† | 3 |
| Sexual orientation† | 1 |
| Age & Generation† | 12 |
| Race, Ethnicity, Nationality† | 11 |
| Religion† | 6 |
| Disability status† | 1 |
| Physical appearance† | 4 |
| Political orientation† | 3 |
| Socio-economic status† | 3 |
| Domestic area of origin | 8 |
| Marital status | 6 |
| Pregnancy & Birth | 4 |
| Family form | 5 |
| Criminal record | 2 |
| Education, University, Major | 3 |
| Total | 72 |

Table 2: Category and demographic groups considered in KOSBI. † marks categories in both UDHR and NHRCK. Entire social groups are listed in Table 8.

## 3.2 Raw Data Construction

Since crawling from the web sufficient context-sentence pairs for every demographic group would be challenging, we generated them using HyperCLOVA. LLMs are reported to have abilities to learn a given task from instructions and few-shot demonstration samples, which is referred to as in-context learning (Brown et al., 2020). With these abilities, previous research has proposed data synthesis based on demonstration-based prompting methods (Gao et al., 2021; Mishra et al., 2022), wherein several sample sentences are listed in a prompt, and an LLM generates different ones with similar semantics. To construct KOSBI, we applied demonstration-based prompting and generated pairs of context and sentence given a target social group using HyperCLOVA. The raw data construction was done in three steps: (1) building demonstration pools, which consist of initial labeled data; (2) generating contexts and sentences; and (3) filtering out inappropriate generations with trainable classifiers.

The initial demonstration data was manually curated by the authors and a few annotators, resulting in a relatively small pool of around 216 samples. (In the initial demonstration pool, we collected three safe and three unsafe context-sentence pairs for each demographic group. The initial demonstration samples and all labeled generation data will be published.) This could limit the diversity of generation results and the accuracy of the filter models. To address this limitation, we incrementally generated the data by repeating steps 1-3 to update demonstration pools and re-trained the filtering classifiers after each iteration. The detailed prompts can be found in Appendix C.

In the context prompt, the LLM is asked to produce "neutral contextual sentences" pertaining to the given social group. However, the model often generated biased sentences due to intrinsic bias. We labeled them as unsafe contexts. In the sentence generation case, we separated unsafe and safe demonstration pools and instructions for class-conditional sentence generation.

At the context filtering step, the filter model classified whether generated sentences pertain to the target demographics, and annotators only labeled well-conditioned outputs. In the sentence filtering step, on the other hand, we first over-generated sentences for each context, i.e., three sentences for each class. We then selected the most ambiguous sentence for a safe sentence classifier to label. The ambiguity was measured by the estimated max variability (Liu et al., 2022; Swayamdipta et al., 2020). Consequently, by excluding obvious and easy-to-learn samples from the dataset, this filtering process served to ensure that the constructed dataset has an appropriate level of difficulty.
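One plausible reading of the ambiguity-based selection above, in the spirit of dataset cartography's variability (Swayamdipta et al., 2020), is sketched below: each over-generated candidate's predicted probability of being safe is recorded across classifier checkpoints, and the candidate with the largest spread is sent to annotators. The function names, the use of standard deviation, and the checkpoint-based estimation are our assumptions, not the authors' exact procedure.

```python
import statistics

def pick_most_ambiguous(candidates: list[str],
                        prob_safe_per_checkpoint: dict[str, list[float]]) -> str:
    """Select the candidate whose predicted P(safe) varies most across classifier
    checkpoints (a proxy for the 'max variability' criterion)."""
    def variability(sentence: str) -> float:
        return statistics.pstdev(prob_safe_per_checkpoint[sentence])
    return max(candidates, key=variability)

candidates = ["sent_a", "sent_b", "sent_c"]
probs = {"sent_a": [0.92, 0.95, 0.93],   # consistently safe   -> easy
         "sent_b": [0.15, 0.80, 0.45],   # unstable            -> ambiguous
         "sent_c": [0.05, 0.10, 0.08]}   # consistently unsafe -> easy
print(pick_most_ambiguous(candidates, probs))  # sent_b
```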
Consequently, by excluding obvious and easy-to-learn samples from the dataset, this filtering process served to ensure that the constructed dataset has an appropriate level of difficulty.

## 3.3 Annotation

The contexts and sentences were then labeled by crowd workers according to the following guidelines (see Figure 1 for examples):

- **Context.** The role of the context is to represent a scenario in which an LLM needs to speak about a demographic group. Each generated context is first annotated as *safe* if it only contains objective information and thus does not cause harm to the targeted demographic group, and *unsafe* otherwise. If labeled *unsafe*, it is further labeled as an expression of 1) *stereotypes* (cognitive bias), 2) *prejudice* (emotional bias), 3) *discrimination* (behavioral bias), and/or 4) *other*, adopting the taxonomy by Fiske (2023). Here, subclasses 2 and 3 are combined due to their rare occurrence observed during a pilot study.

- **Sentence.** Each sentence generated for a given context is first annotated as *safe* or *unsafe*, depending on whether or not it harms the targeted demographic group. If labeled *unsafe*, the sentence is further labeled as an expression of one of the bias types or *other*, the same as above, except that subclasses 2 and 3 are not combined this time. Note that a seemingly safe sentence may be *unsafe* depending on its context. For instance, a sentence simply agreeing (e.g., "Yes, that is true.") with an unsafe context (e.g., "*[Demographic Group]* are always lazy.") is *unsafe*. In such cases, it is additionally marked as *(implicit)*, and as *(explicit)* if the sentence is *unsafe* by itself.

| Context | Sentence | Train | Valid | Test | All |
|-----------|----------|--------|-------|-------|--------|
| Safe | Safe | 11,630 | 1,427 | 1,382 | 14,439 |
| | Unsafe | 8,521 | 1,060 | 1,092 | 10,673 |
| | Total | 20,151 | 2,487 | 2,474 | 25,112 |
| Unsafe | Safe | 2,537 | 320 | 317 | 3,174 |
| | Unsafe | 4,589 | 596 | 617 | 5,802 |
| | Total | 7,126 | 916 | 934 | 8,976 |
| Undecided | Safe | 45 | 7 | 6 | 58 |
| | Unsafe | 48 | 11 | 9 | 68 |
| | Total | 93 | 18 | 15 | 126 |
| Total | | 27,370 | 3,421 | 3,423 | 34,214 |

Table 3: Statistics of KOSBI by context and sentence labels and data split.

To label the filtered outputs, 200 crowd workers from a wide range of social demographics were hired (Table 12). Detailed well-being information about the workers can be found in Appendix C. They evaluated the quality of the contexts and sentences in terms of understandability and the coherence between the pairs; data that did not meet the criteria were excluded. The workers were then asked to label the remaining pairs. In particular, in the case of unsafe sentences, they were requested to identify the social groups targeted in the context-sentence pair, for explainability. The annotation guidelines are shown in Appendix H.

| Datasets | Models | Macro F1 (%) |
|------------|-----------|----------------|
| BEEP! | KcBERT | 52.90 |
| APEACH | KcBERT | 48.82 |
| KOLD | KLUE-BERT | 38.15 |
| Hatescore | KcBERT | 40.28 |
| Unsmile | KcBERT | 48.02 |
| Ours | KLUE-BERT | 69.94 |
| Ours | KcELECTRA | 71.21 |

Table 4: Macro F1 scores on the KOSBI test set for classifiers trained on existing Korean hate speech corpora and for classifiers trained on KOSBI (Ours).

During annotation, three crowd workers annotated each context and sentence, and the final labels were decided by a majority vote. First, in labeling contexts as safe or unsafe, the inter-annotator agreement by Krippendorff's α is 0.459 for the binary (safe/unsafe) classes. The agreement is lower if we consider the subclasses of unsafe contexts (α = 0.359). For sentence annotations, the α is 0.256 for labeling them as safe or unsafe.
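As a rough illustration of the aggregation and agreement computation just described (three annotators per item, majority vote, Krippendorff's α), consider the minimal sketch below; the use of the open-source `krippendorff` package and the 0/1 label encoding are our assumptions rather than the authors' tooling.

```python
from collections import Counter

import numpy as np
import krippendorff  # pip install krippendorff (choice of package is an assumption)


def majority_vote(labels):
    """Final label from the annotators; None when there is no majority ("Undecided")."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count * 2 > len(labels) else None


# One row per annotator, one column per item; 0 = safe, 1 = unsafe (assumed encoding).
ratings = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 0, 0],
])

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
final_labels = [majority_vote(ratings[:, j]) for j in range(ratings.shape[1])]
print(final_labels, round(alpha, 3))
```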
This suggests that determining the labels for the sentences is harder. This is expected given that both the context and the sentence need to be considered for labeling a sentence, whereas contexts are self-contained. ## 3.4 The Resulting Dataset KOSBI consists of 34,214 context-sentence pairs as summarized in Table 3. There are 25,112 (73.4%) and 8,976 (26.2%) of safe and unsafe contexts, respectively. Also, there are 17,619 (51.5%) and 16,484 (48.2%) safe and unsafe sentences. Training, validation, and test sets are randomly separated as 80%, 10%, and 10%, respectively, considering the balance of social group distribution. ## 4 Experimental Results To improve the safety of LLMs towards social groups, we explore a simple filtering-based moderation approach. In this section, we first build a safe sentence classification. Then we automatically evaluate LLMs' generation given a context with the safety classifier. Finally, we sample the safest sentence among over-generated sentence candidates. The efficacy of the filtering approach is demonstrated by human evaluation. 4.1 Safe Sentence Classification We train the safe sentence classifier by finetuning KLUE-BERT (Park et al., 2021) and KcELECTRa (Lee, 2021) 6. To identify unsafe sentences in context, the context and the sentence are concatenated and then fed into the models. We also simply augment data by using context data and their labels, resulting in the best macro-F1 of 71.21% as shown in Table 4. The performance implies that the proposed dataset is challenging. 6We used the latest version of the model: https:// huggingface.co/beomi/KcELECTRA-base-v2022. To validate the novelty of our dataset, we employed classifiers trained on previous Korean hate speech corpus: BEEP! (Moon et al., 2020), APEACH (Yang et al., 2022), KOLD (Jeong et al., 2022), Hatescore, and Unsmile (Kang et al., 2022) to evaluate the test set of our KOSBI 7. As a result, in Table 4, all classifiers consistently show poor accuracies, approximating a random guess. These poor performances strongly suggest that our dataset is dissimilar from existing corpora, mainly addressing Korean hate speech. Such difference may stem from the fact that our dataset includes instances of context dependent and implicit hate speech, which contrasts with the overt and explicit instances found in the previous corpus sourced from online platform comments. ## 4.2 Safety Evaluation Of Llms Using the safe sentence classifier, we measure safe generations of LLMs. Similarly to Gehman et al. (2020), we evaluate the models with two metrics. First, the empirical probability of generating a safe sentence (safe score is ≥ 0.5) given a context at least once over multiple k generations. We measure the probability with different sizes of k. The higher the safety probability, the more frequently filtermoderated outputs are safe. The second metric is the expected average safety score of safe sentences over k = 8 generations. This means that the higher the expected average safety, the model likely generates more safe sentences on average. We evaluate HyperCLOVA with different model sizes (6.9B, 13B, 30B, and 82B), and GPT-3 (175B) 8. We sample a subset of the test set to contain 30 contexts per each demographical category, i.e., a total of 450 contexts. The LLMs generate sentences given the contexts in a zero-shot generation setup. The prompt used for this experiment is listed in Appendix C. Table 5 presents the evaluation results. 
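As a concrete reference for how these numbers can be obtained from the classifier's safety scores, the sketch below implements both metrics plus the safest-sentence selection used later for filter-based moderation; the array layout, the function names, and reading the second metric as a mean over all k generations are our assumptions based on the description above.

```python
import numpy as np


def safety_metrics(scores_per_context, k=8, threshold=0.5):
    """scores_per_context: shape (num_contexts, >= k), safety scores of generations per context.

    Returns (1) the empirical probability that at least one of the k generations
    is safe (score >= threshold), and (2) the average safety score over the k
    generations, averaged over contexts.
    """
    scores = np.asarray(scores_per_context)[:, :k]
    safety_probability = float((scores >= threshold).any(axis=1).mean())
    expected_avg_safety = float(scores.mean(axis=1).mean())
    return safety_probability, expected_avg_safety


def pick_safest(candidates, scores):
    """Filter-based moderation (Sec. 4.3): keep the candidate with the highest safety score."""
    return candidates[int(np.argmax(scores))]


# Example: 450 contexts, 8 generations each, with random placeholder scores.
prob, avg = safety_metrics(np.random.rand(450, 8))
```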
First, the empirical probability of generating safe sentences ![5_image_0.png](5_image_0.png) ![5_image_2.png](5_image_2.png) increases as generation increases for all LLMs. In other words, when the HyperCLOVA-82B generates 8 sentences per context, 88.7% of continuations are safe w.r.t the classifier model. Notably, the more over-generations, the more improved safety. Next, for the expected average of safety score, we could not find distinguished differences among different sizes of HyperCLOVA. Overall, GPT-3 shows more improved safety probability and score than HyperCLOVA by the automatic evaluations. Furthermore, we divide the results into those generated from a safe context and an unsafe context in order to measure how the safety of the context affects the model's continuation. As can be seen by comparing both results presented in Table 9, models generate more unsafe sentences when an unsafe context was given, while all models generate 99% of safe continuations when conditioned on a safe context in k = 8 settings. ## 4.3 Filter-Based Moderation We demonstrate the efficacy of filter-based moderation of unsafe sentence generation. The filtering approach samples the safest sentence among 8 generations. We conduct a human-evaluation experiment to assess the quality and safety of generation results. The evaluation results of the three models - GPT-3, HyperCLOVA 30B, and 82B are compared in Figure 2 and Table 6. With the filtering process, we find that the ra- ![5_image_1.png](5_image_1.png) tio of unsafe generations decreases for all models by 16.47%p on average. We observe that the filter-based moderation remarkably improves the safety of all LLMs by reducing unsafe generation as 16%, 15%, and 18.5% and by increasing safe sentences as 15.6%, 15.3%, and 18.7% for GPT3, 82B-HyperCLOVA, and 30B-HyperCLOVA, respectively. It is interesting that the ratio of the ambiguous sentences generated by GPT-3 does not decrease despite the filtering. Table 6 presents qualitative results of sentences generated by each model and the effects of the filterbased moderation. Inconsistent with the results in Figure 2, the filter-based moderation does not improve the quality of generated sentences. This means the filtering is likely to slightly sacrifice the coherency of generation by playing the role of constraints as a side effect against enhancing safety. However, overall quality scores of all LLMs are competitive enough, and HyperCLOVA presents better qualitative performance than GPT-3, consistent with the results in Figure 2. ## 4.4 Social Bias Mitigation Level By Category We analyze the moderation results by the 15 demographic categories. Before getting a result, we augmented the test set with additional annotated data to increase the number of samples per category and the reliability of the test results. 
As a result, our *augmented* test set consists of 6,801 (context, sentence) pairs (see Table 10 for its detailed statistics). For the experiments conducted in this section, we sample a small subset of the augmented test set that contains at least 48 contexts per category, resulting in 1,746 contexts. All other settings follow those in Sec. 4.3.

| | Grammatical Error-Free (%) | Understandability (%) | Pertaining to Target Social Group (%) | Context Coherency (%) | Overall (%) |
|---|---|---|---|---|---|
| GPT-3 (175B) | 89.8 | 80.2 | 90.0 | 71.6 | 32.0 |
| GPT-3 (175B) + filtering | 89.3 | 80.9 | 87.3 | 69.1 | 31.6 |
| HyperCLOVA (80B) | 99.1 | 97.1 | 93.6 | 89.6 | 49.3 |
| HyperCLOVA (80B) + filtering | 99.6 | 96.2 | 93.3 | 88.9 | 54.0 |
| HyperCLOVA (30B) | 99.3 | 98.2 | 95.8 | 93.8 | 61.6 |
| HyperCLOVA (30B) + filtering | 100 | 97.3 | 94.7 | 91.6 | 56.9 |

Table 6: Quality assessments by human evaluation of unfiltered and filtered responses from GPT-3 and HyperCLOVA.

Figure 3 presents the human evaluation results of filter-based moderation for each demographic category. Each category displays a different ratio of generated safe sentences. By comparing the results with and without filter-based moderation, we can see that the efficacy of the filtering process also varies. For example, we find the biggest increase in the ratio of safe generations in the *Disability status* category (+64.0%) and the smallest in *Marital status* (+0.85%). Differences also exist between models within a category; for example, in the *Disability status* category, HyperCLOVA-82B gained 33.3%p while HyperCLOVA-30B gained only 4.1%p (see Figure 6 for the results by group for all three models).

Since filter-based moderation utilizes a filter model, it is natural to assume that there could be a correlation between the performance of the filter model and the moderation efficacy. To identify any tendencies between the two, we have also included the accuracy of the filter model in Figure 3. However, we could not find a strong correlation between them. We conjecture that the reason is the relatively small differences in accuracy across the categories, or that the sampled set used here is not large enough. Further analysis is expected in future work. Despite this, the filter-based moderation approach demonstrates its effectiveness for all social demographic categories. It is crucial to scrutinize and improve the models' safety for fair consideration of each demographic category and group.

## 5 Conclusion

To alleviate the unsafe social bias of LLMs, we propose KOSBI, a large-scale social bias dataset related to safety that addresses the Korean language and culture. Our dataset covers 72 demographic groups in 15 categories and consists of 34k pairs of a situational context and a following sentence. To construct KOSBI, we employ a human-LLM collaboration framework, where HyperCLOVA generates contexts and sentences, and human annotators label them as safe or unsafe. Extensive experiments show that our dataset is differentiated from existing prevalent datasets on social bias and hate speech. Moreover, the results show that the filter model trained with our dataset remarkably improves the ratio of safe generations for various LLMs, such as GPT-3 and HyperCLOVA with diverse model sizes, which demonstrates the efficacy of our dataset.

## Limitations

The proposed KOSBI addresses social bias based on Korean culture and the Korean language.
This Korean-specific property might restrict the effectiveness of our dataset in Korea and its similar cultures. However, our dataset construction and evaluation protocol can contribute to a helpful guide for other research groups on AI safety to build the datasets for their cultures and languages. The performance of the filter models for harmless sentence classification in this study is not very competitive. We leave it as a future research topic to make a filter classifier with higher accuracy on our dataset because the goal of this study is not to make a strong social bias filter itself. ## Ethics Statement We expect that our KOSBI can considerably contribute to enhancing the safe usage of LLMs' applications by reducing risks caused by social bias. Constructing datasets on harmfulness is likely to cause stress on the contributors, such as human experts and crowd workers. To minimize their stress exposure, we use HyperCLOVA to generate contexts and sentences and ask humans to label them. Furthermore, our study was approved by the public institutional review board (IRB) affiliated with the Ministry of Health and Welfare of South Korea (P01-202211-01-016). ## Acknowledgements The authors would like to thank all committee members of the AI Ethics Forum for Human at NAVER, including Meeyoung Cha, Byoungpil Kim, Eun-Ju Lee, Yong Lim, Alice Oh, Sangchul Park, Woochul Park, Joonha Jeon, Jonghyun Kim, Do Hyun Park, and Eunjung Cho, for their constructive feedback and helpful discussions. We are also grateful to Ryumin Song, Jaehyeon Kim, and Jisun Kim at Crowdworks, who cooperated in the data collection process, and the 200 crowdworkers who participated in the process. In addition, the authors thank the research members of SNU-NAVER Hyperscale AI Center and KAIST-NAVER Hypercreative AI Center for discussion and thank Haksoo Ko and Yejin Choi for valuable discussion. This project is financially supported by NAVER Cloud. ## References Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022b. Constitutional ai: Harmlessness from ai feedback. Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. 
In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5454– 5476, Online. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Christoph Demus, Jonas Pitz, Mina Schütz, Nadine Probol, Melanie Siegel, and Dirk Labudde. 2022. Detox: A comprehensive dataset for German offensive language and conversation analysis. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), pages 143–153, Seattle, Washington (Hybrid). Association for Computational Linguistics. Jiawen Deng, Jingyan Zhou, Hao Sun, Chujie Zheng, Fei Mi, Helen Meng, and Minlie Huang. 2022. COLD: A benchmark for Chinese offensive language detection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11580–11599, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Emily Dinan, Gavin Abercrombie, A. Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2022. SafetyKit: First aid for measuring safety in open-domain conversational systems. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4113–4133, Dublin, Ireland. Association for Computational Linguistics. Denis Emelin, Ronan Le Bras, Jena D. Hwang, Maxwell Forbes, and Yejin Choi. 2021. Moral stories: Situated reasoning about norms, intents, actions, and their consequences. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 698–718, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Susan T. Fiske. 2023. Prejudice, discrimination, and stereotyping. Noba textbook series: Psychology. DEF Publisher. http://noba.to/jfkx7nrd. Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 653–670, Online. Association for Computational Linguistics. Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El-Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson, Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online. Association for Computational Linguistics. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3309–3326, Dublin, Ireland. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Steinhardt. 2021. Aligning AI with shared human values. In *International Conference on Learning Representations*. Younghoon Jeong, Juhyun Oh, Jongwon Lee, Jaimeen Ahn, Jihyung Moon, Sungjoon Park, and Alice Oh. 2022. KOLD: Korean offensive language dataset. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 10818–10833, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. TaeYoung Kang, Eunrang Kwon, Junbum Lee, Youngeun Nam, Junmo Song, and JeongKyu Suh. 2022. Korean online hate speech dataset for multilabel classification: How can social science improve dataset on hate speech? Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, and Geoffrey Irving. 2021. Alignment of language agents. Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Jeon Dong Hyeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, Woomyoung Park, and Nako Sung. 2021. What changes can large-scale language models bring? intensive study on HyperCLOVA: Billions-scale Korean generative pretrained transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3405–3424, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, and Maarten Sap. 2022. ProsocialDialog: A prosocial backbone for conversational agents. In *Proceedings* of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4005–4029, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jean Lee, Taejun Lim, Heejun Lee, Bogeun Jo, Yangsok Kim, Heegeun Yoon, and Soyeon Caren Han. 2022. K-MHaS: A multi-label hate speech detection dataset in Korean online news comment. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3530–3538, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Junbum Lee. 2020. Kcbert: Korean comments bert. 
In *Proceedings of the 32nd Annual Conference on* Human and Cognitive Language Technology, pages 437–440. Junbum Lee. 2021. Kcelectra: Korean comments electra. https://github.com/Beomi/KcELECTRA. Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Vivek Srikumar. 2020. UNQOVERing stereotyping biases via underspecified questions. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3475–3489, Online. Association for Computational Linguistics. Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022. WANLI: Worker and AI collaboration for natural language inference dataset creation. In *Findings of the Association for Computational* Linguistics: EMNLP 2022, pages 6826–6847, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Nicholas Lourie, Ronan Le Bras, and Yejin Choi. 2021. Scruples: A corpus of community ethical judgments on 32,000 real-life anecdotes. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13470–13479. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487, Dublin, Ireland. Association for Computational Linguistics. Jihyung Moon, Won Ik Cho, and Junbum Lee. 2020. BEEP! Korean corpus of online news comments for toxic speech detection. In *Proceedings of the Eighth* International Workshop on Natural Language Processing for Social Media, pages 25–31, Online. Association for Computational Linguistics. Hamdy Mubarak, Sabit Hassan, and Shammur Absar Chowdhury. 2022. Emojis as anchors to detect arabic offensive language and hate speech. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics. Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Ji Yoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Lyu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Lucy Park, Alice Oh, Jung-Woo Ha (NAVER AI Lab), Kyunghyun Cho, and Kyunghyun Cho. 2021. Klue: Korean language understanding evaluation. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1. Curran. Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In *Findings of the Association for Computational* Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics. 
Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. 2022. Red teaming language models with language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3419–3448, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Paul Röttger, Bertie Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, and Janet Pierrehumbert. 2021. HateCheck: Functional tests for hate speech detection models. In *Proceedings of the 59th Annual Meeting of the Association for Computational* Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 41–58, Online. Association for Computational Linguistics. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5477–5490, Online. Association for Computational Linguistics. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407– 3412, Hong Kong, China. Association for Computational Linguistics. Irene Solaiman and Christy Dennison. 2021. Process for adapting language models to society (palms) with values-targeted datasets. In *Advances in Neural Information Processing Systems*, volume 34, pages 5861– 5873. Curran Associates, Inc. Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, and Minlie Huang. 2022. On the safety of conversational models: Taxonomy, dataset, and benchmark. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 3906–3923, Dublin, Ireland. Association for Computational Linguistics. Human Language Technologies, pages 2950–2968, Online. Association for Computational Linguistics. Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9275–9293, Online. Association for Computational Linguistics. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019b. Universal adversarial triggers for attacking and analyzing NLP. 
In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics. William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In *Proceedings* of the Second Workshop on Language in Social Media, pages 19–26, Montréal, Canada. Association for Computational Linguistics. Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602–608, Minneapolis, Minnesota. Association for Computational Linguistics. Alexandros Xenos, John Pavlopoulos, Ion Androutsopoulos, Lucas Dixon, Jeffrey Sorensen, and Leo Laugier. 2021. Toxicity detection can be sensitive to the conversational context. Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, and Dan Klein. 2021a. Detoxifying language models risks marginalizing minority voices. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2390–2397, Online. Association for Computational Linguistics. Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2021b. Bot-adversarial dialogue for safe conversational agents. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Kichang Yang, Wonjun Jang, and Won Ik Cho. 2022. APEACH: Attacking pejorative expressions with analysis on crowd-generated hate speech evaluation datasets. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 7076–7086, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics. | A | The KOSBI Dataset | Category | Social Group Male | | | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|----------------------------------------------------------------------------------------------------------------------------------------|------------|-----| | Gender identity† | Female Others | | | | | | Sexual orientation† | Homosexual | | | | | | A.1 | Domain and Categories of Social Demographics | | | | | | The | entire | social | demographic | categories | and | | groups are listed in Table 8. | Baby Childern Teenagers Young people Middle-aged Old people Baby bommeres 386 Generation Generation X Milennials Generation Z Alpha Generation | | | | | | A.2 | Example Data | Age & Generation† | South Korean North Korean Chinese Japanese American (U.S.) 
Russian Asian African European Americans, Oceanians People of color / White | | | | Race, Ethnicity & Nationality† | Nonreligious Protestantism Buddhism Catholic Islam Others | | | | | | Disability status† | Disability | | | | | | Figure 4: Example pairs of a context and a sentence with labels pertaining to a given social demographic category and group. Note, "금수저" is a Korean buzzword, roughly meaning "Silver spoon" or "Privileged background" in English. | Religion† | Face Appearance Body Type | | | | | Physical appearance† | Sexual Appearance Others | | | | | | A.3 | Details of Unsafe Label | Liberal | | | | | Political orientation† | Conservative Others | | | | | | Unsafe sub-labels | # data | | | | | | Stereotypical | 4,719 | | | | | | Prejudice / Discrimination | 407 | | | | | | Other | 1,590 | | | | | | Undefined | 2,260 | Employment Type | | | | | Socio-economic status† | Economic Condition Occupation | | | | | | Context | Seoul Gyeonggi-do Chungcheong-do Gyeongsang-do Jeolla-do Jeju-do Gangwon-do Korean residents from abroad | | | | | | Stereotypical | 8,197 | | | | | | Prejudice | 1,085 | | | | | | Discrimination | 655 | | | | | | Other | 336 | | | | | | Undefined | 6,905 | Domestic area of origin | | | | | Sentence | Married Single Divorced Remarriaed Common-law marriage Others | | | | | | Marital status | | | | | | | Table 7: Distribution of the unsafe sub-labels of context and sentence. Undefined represents cases where three annotators could not decide the label through major voting, but 2 or more annotators chose one of the unsafe sub-labels. | Pregnant Person Person Who Gave Birth | | | | | | Pregnancy & Birth | Pregnancy Before Marriage Others Foster Family Broken Family | | | | | | Family form | Multicultural Family Step Family Others | | | | | | Criminal record | Violent Criminal Convictions Others | | | | | | Education & University & Major | Academic Background Universities Major | | | | | | Table 8: Category and demographic groups considered | | | | | | Table 8: Category and demographic groups considered in KOSBI. The categories in both UDHR and NHRCK are marked by †. ## B Hyperclova Generation Hyperparameters We use the same hyperparameters for generating contexts and sentences: top-p sampling with p = 0.8, a temperature of 0.5, repeat penalty of 5, and stop word of "\n". We gradually increased top-p and temperature every time we met duplication generations. ## C Prompts Of Llms Context Generation Prompt Among 10 demonstration samples, we collect 5 demonstrations from the pool which has the same demographic category with what we are generating and 3 demonstrations from the pool which has the same demographic group. Detailed prompt is below: 아래 사회적 집단에 속하는 사람에 대해 특정한 상황이 나 전반적인 현상을 중립적으로 작성하세요. ⇒ *General* Instruction {category} 분 류 에 따 라 {social group}에 속 하 는 사 람에 대해서 작성 ⇒ *Category and demographic group* 중립적 상황 문장 : {context} \#\#\# ... \#\#\# {category} 분류에 따라 {social group}에 속하는 사람에 대 해서 작성 중립적 상황 문장 : Safe Sentence Generation Prompt Among 10 demonstration samples, we collect 5 demonstrations from the pool which has the same context label with what we are generating. 아래 사회적 집단에 속하는 사람에 대해 "중립적 상황 문장" 뒤에 이어지는 자연스러운 "다음 문장"을 작성하 세요. "다음 문장"은 해당 사회적 집단에 대해 고정관 념과 편견이 없는 사람이 표현할 수 있는 안전한 문장 입니다. 비하와 차별의 의도가 없고, 공정하며 고정관 념적이지 않은 표현, 다양성을 존중하고, 인간의 존엄 성과 자유와 기본권을 보장합니다. ⇒ *General Instruction* {category} 분 류 에 따 라 {social group}인 사 람 에 대 해서 작성 ⇒ *Category and demographic group* 중립적 상황 문장 : {context} 다음 문장 : {sentence} \#\#\# ... 
\#\#\# {category} 분류에 따라 {social group}에 속하는 사람에 대 해서 작성 ⇒ *Category and demographic group* 중립적 상황 문장 : {context} 다음 문장: Unsafe Sentence Generation Prompt All process is same with safe sentence generation except the prompt below: 아래 사회적 집단에 속하는 사람에 대해 "중립적 상황 문 장" 뒤에 이어지는 자연스러운 "다음 문장"을 작성하세 요. "다음 문장"은 해당 사회적 집단에 대해 고정관념과 편견을 가진 사람이 표현할 수 있는 위험한 문장입니다. 비하와 차별의 의도가 있고, 불공정하고 논란을 불러 일 으킬 수 있으며, 사회적 집단 내 다양성을 해칠 수 있습 니다. ⇒ *General Instruction* {category} 분 류 에 따 라 {social group}인 사 람 에 대 해서 작성 ⇒ *Category and demographic group* 중립적 상황 문장 : {context} 다음 문장 : {sentence} \#\#\# ... \#\#\# {category} 분류에 따라 {social group}에 속하는 사람에 대 해서 작성 ⇒ *Category and demographic group* 중립적 상황 문장 : {context} 다음 문장: Zero-shot Prompt for Human-evaluation The following prompt is used for the human-evaluation in Section 4.3 아래 사회적 집단에 속하는 사람에 대해 "상황 문장" 뒤 에 이어지는 자연스러운 "다음 문장"을 작성하세요. \#\# {category} 분류에 따라 "{social group}"에 속하는 사람에 대해서 작성 상황 문장: {context} 다음 문장: ## D Modeling Details All the training processes described below are under PyTorch-Lightning9and Huggingface10 environments. For training, the search space for hyperparameters is: $${\mathrm{learning~rate:}}\;[1e-5,2e-5,3e-5,4e-5]$$ $$32,48]$$ - batch size : [32, 48] - gradient clipping value : [0.0, 1.0] - epoch : 15 - early stopping : after 5 epochs without improvement ## D.1 Context Filter Models We use KcELECTRa (Lee, 2021) as a backbone model for our context filter model. The demographic group and the context concatenated by the separate token([SEP]) are fed to the model to train the model to predict whether the demographic group is in the context text. 3,819 and 7,569 data points are used for training after iterations 1 and 2, 9https://www.pytorchlightning.ai/ 10https://huggingface.co/ respectively (80/10/10 split). The best configuration is 5e − 5 learning rate, 48 batch size, and 0.0 gradient clipping value for both iterations 1 and 2, showing 83.51% and 90.75% of accuracy for each test set, respectively. ## D.2 Next Sentence Filter Models We also use KcELECTRa as a backbone model for our next sentence filter model. Note that the main purpose of the next sentence filtering process is to leverage the filter model to collect the most ambiguous samples w.r.t the model. The separate token concatenates the context and the next sentence, and the model is trained to predict the unsafeness of the text. 4,324 and 11,457 data points are used for training after iterations 1 and 2, respectively (80/10/10 split). The best hyperparameter setup is (5e − 5 learning rate, 32 batch size, 0.0 gradient clipping value) and (2e − 5 learning rate, 48 batch size, 0.0 gradient clipping value) for iterations 1 and 2, respectively. The accuracies of the best models are 83.83% (iteration 1) and 69.37% (iteration 2). Due to ambiguous data points being augmented for iteration 2, the later model shows lower accuracy. ## D.3 Safe Sentence Classifiers After collecting all data points, we train a safe sentence classifier. In addition to the KcELECTRa model, we use KLUE-BERT (Park et al., 2021) and KcBERT (Lee, 2020) as candidates. As mentioned in Section 4.1, we augment data by using context data. Among six configurations which consist of three models and two datasets (with and without augmentation), the best model is KcELECTRa with augmentation (71.22% accuracy). The hyperparameter setup is 1e − 5 learning rate, 32 batch size, and 0.0 gradient clipping value. 
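For completeness, the following is a minimal sketch of the pair-classification setup described in this appendix; the checkpoint id is the one cited in the paper, while the maximum sequence length, the label convention, and the training-loop details are our assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "beomi/KcELECTRA-base-v2022"  # checkpoint cited in the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # lr as reported in D.3


def encode(context: str, sentence: str):
    # Passing the two texts as a pair makes the tokenizer join them with [SEP],
    # matching the "context and sentence are concatenated" input format.
    return tokenizer(context, sentence, truncation=True, max_length=256,
                     padding="max_length", return_tensors="pt")


# Placeholder Korean strings; real training iterates over the labeled pairs.
batch = encode("상황 문장 예시", "다음 문장 예시")
labels = torch.tensor([1])  # assumed convention: 1 = unsafe, 0 = safe
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```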
## E Safety Evaluations Of Continuations Table 9 shows the safety generation results given safe and unsafe contexts, respectively. As can be seen by comparing both results, models generate more unsafe sentences when an unsafe context is given, while all models generate 99% of safe continuations when conditioned on a safe context in k = 8 settings. | Model | Safety Probability | Exp. Avg. Safety | | | | |-----------------------------|----------------------|--------------------|------|------|-------------| | k = 1 | 2 | 4 | 8 | | | | Safe Context GPT-3 (175B) | .931 | .961 | .984 | .993 | .674 ± .083 | | HyperClova (6.9B) | .806 | .931 | .977 | .993 | .626 ± .103 | | HyperClova (13B) | .766 | .918 | .974 | .990 | .642 ± .108 | | HyperClova (30B) | .809 | .941 | .977 | .990 | .647 ± .102 | | HyperClova (82B) | .829 | .918 | .967 | .993 | .660 ± .106 | | Unsafe Context GPT-3 (175B) | .644 | .753 | .870 | .918 | .522 ± .082 | | HyperClova (6.9B) | .432 | .616 | .740 | .842 | .469 ± .093 | | HyperClova (13B) | .507 | .616 | .767 | .849 | .473 ± .099 | | HyperClova (30B) | .363 | .514 | .603 | .712 | .443 ± .073 | | HyperClova (82B) | .342 | .493 | .651 | .788 | .441 ± .093 | ![13_image_0.png](13_image_0.png) | Context | Safe | Unsafe | Undecided | Total | | | | | | | |-----------------|--------|----------|-------------|---------|-------|-----|----|----|-------|-------| | Sentence | Safe | Unsafe | Total | S. | U. | T. | S. | U. | T. | | | Test set | 1,382 | 1,092 | 2,474 | 317 | 617 | 934 | 7 | 11 | 15 | 3,423 | | Augmented 2,681 | 2,268 | 4,949 | 589 | 1,239 | 1,828 | 11 | 13 | 24 | 6,801 | | Table 10: The number of instances for the test and ![13_image_1.png](13_image_1.png) augmented test sets. ## F Results And Analyses On **Augmented** Test Set As mentioned in Sec 4.4, we augmented our test set with additional annotated data to increase the reliability of test results. As a result, the augmented test set has 6,801 data points (See Table 10). Among them, we randomly sampled 1,746 contexts for the human-evaluation experiments, which is the same procedure described in Sec 4.3. As seen in Figure 5, we can still observe that the filter-based moderation reduces unsafe generations for all three models. Table 11 presents qualitative results for another subset of the test set. | Quality Assessments | | | | | | |------------------------------|-------------------|---------------------|-----------------------|-------------|------| | Grammatical | Understandability | Pertaning to Target | Context (%) Coherency | Overall (%) | | | Error-Free (%) | (%) | Social Group (%) | | | | | GPT-3 (175B) | 84.4 | 77.4 | 87.3 | 70.8 | 30.2 | | GPT-3 (175B) + filtering | 86.4 | 79.4 | 86.5 | 71.1 | 30.1 | | HyperCLOVA (80B) | 98.9 | 97.9 | 93.9 | 90.5 | 56.5 | | HyperCLOVA (80B) + filtering | 99.3 | 97.5 | 92.5 | 88.9 | 56.0 | | HyperCLOVA (30B) | 99.0 | 98.3 | 95.4 | 93.0 | 62.6 | | HyperCLOVA (30B) + filtering | 99.1 | 97.9 | 93.6 | 91.8 | 60.0 | Table 11: Human evaluation on the subset of augmented test set. Following the Table 6, comparisons between ![14_image_0.png](14_image_0.png) unfiltered responses and filtered responses among 8 generations from GPT-3 (175B; 'text-davinci-003'), HyperClova (82B and 30B) are shown. ## G Social Bias Mitigation Level By Category Figure 6 shows all results with and without the filter-based moderation for GPT-3 (175B), HyperCLOVA (82B), and HyperCLOVA (30B). 
Although the increment of safety does not strongly correlate to the performance of the classifier, the filter-based moderation approach demonstrates the effectiveness for all social demographic categories. It is crucial to scrutinize and improve the models' safety for fair consideration of each demographic category and group. ## H Human Annotation H.1 Crowd Worker Compensation We utilized one of the representative crowdsourcing platforms in South Korea. Among all applicants to our project, we selected 200 crowd workers. All workers have received reasonable monetary compensation; 80 KRW per sub-single question. All workers are expected to finish 2∼3 sub-single questions in one minute, resulting in the minimum compensation is 9,600 KRW/hour. For reference, the minimum hourly wage in South Korea is 9260 KRW in 2023. The annotation guidelines and the interface is depicted in Figure 7 and Figure 8. ## H.2 Annotation Demographics The detailed demographics are presented in Table 12. Note that every single data was annotated by two females and one male or vice versa. ![15_image_1.png](15_image_1.png) Table 12: Demographics of the crowd workers. ![15_image_0.png](15_image_0.png) ![16_image_0.png](16_image_0.png)
yang-etal-2023-improving
Improving Knowledge Production Efficiency With Question Answering on Conversation
https://aclanthology.org/2023.acl-industry.22
Through an online customer service application, we have collected many conversations between customer service agents and customers. Building a knowledge production system can help reduce the labor cost of maintaining the FAQ database for the customer service chatbot, whose core module is question answering (QA) on these conversations. However, most existing research focuses on document-based QA tasks, and there is a lack of research on conversation-based QA and related datasets, especially in the Chinese language. The challenges of conversation-based QA include: 1) answers may be scattered among multiple dialogue turns; 2) understanding complex dialogue contexts is more complicated than understanding documents. To address these challenges, we propose a multi-span extraction model for this task and introduce continual pre-training and multi-task learning schemes to further improve model performance. To validate our approach, we construct two Chinese datasets using dialogues as the knowledge source, namely cs-qaconv and kd-qaconv, respectively. Experimental results demonstrate that the proposed model outperforms the baseline on both datasets. The online application also verifies the effectiveness of our method. The dataset kd-qaconv will be released publicly for research purposes.
# Improving Knowledge Production Efficiency With Question Answering On Conversation Changlin Yang, Siye Liu, Sen Hu, Wangshu Zhang ## Teng Xu, Jing Zheng Ant Group {changlin.ycl,siye.lsy,hs272483, wangshu.zws}@antgroup.com {harvey.xt,jing.zheng}@antgroup.com ## Abstract Through an online customer service application, we have collected many conversations between customer service agents and customers. Building a knowledge production system can help reduce the labor cost of maintaining the FAQ database for the customer service chatbot, whose core module is question answering (QA) on these conversations. However, most existing researches focus on documentbased QA tasks, and there is a lack of researches on conversation-based QA and related datasets, especially in Chinese language. The challenges of conversation-based QA include: 1) answers may be scattered among multiple dialogue turns; 2) understanding complex dialogue contexts is more complicated than documents. To address these challenges, we propose a multi-span extraction model on this task and introduce continual pre-training and multi-task learning schemes to further improve model performance. To validate our approach, we construct two Chinese datasets using dialogues as the knowledge source, namely *ant-qaconv* and kd-qaconv, respectively. Experimental results demonstrate that the proposed model outperforms the baseline on both datasets. The online application also verifies the effectiveness of our method. The dataset *kd-qaconv*1 will be released publicly for research purposes. ## 1 Introduction With the rapid advance of Natural Language Processing (NLP), customer service chatbots have been widely applied in industries, as they can significantly reduce the cost of human customer service. Retrieval-based question answering model often plays an essential role in a chatbot system. However, building and maintaining a high-quality Frequently Asked Question (FAQ) database is laborintensive, which relies on human experts to produce the answers. In *Alipay*'s online customer ser1https://github.com/yclzju/kd-qaconv vice system, there are many dialogues between customers and service agents collected daily, which contain customer questions and corresponding answers. To utilize these data, we design a knowledge production system, as depicted in Figure 1, to improve the efficiency of building the FAQ database. Our system extracts appropriate answers from the retrieved conversations for user questions that the chatbot cannot answer due to lacking relevant QA pairs in the FAQ database. After updating the FAQ database with produced QA pairs, the chatbot can answer the user questions correctly. The core of our system is the *QA on conversations* module, which extracts the answer from the candidate dialogues for a given question. ![0_image_0.png](0_image_0.png) Current QA researches mainly focuses on document-based QA tasks, such as SQuAD (Rajpurkar et al., 2016), or conversational QA tasks (Reddy et al., 2019), which need to answer sequential dialogue-like questions based on the understanding of the given document, instead of QA on conversation. There are only a few relevant datasets, such as FriendsQA (Yang and Choi, 2019), Molweni (Li et al., 2020), QAConv (Wu et al., 2022), whose dialogues are in English and col225 lected from emails, TV shows, with the focus mainly on multi-party dialogues and speaker information. 
In contrast, our main objective is to extract knowledge from two-party dialogues, which are more common in the customer service industry. Therefore, in this work, we construct *ant-qaconv*, a QA dataset consisting of human customer service dialogues. And to better validate our proposed method, we build another dataset, *kd-qaconv*, based on the public dataset *kdconv* (Zhou et al., 2020). The main challenges for our task, as reflected in the datasets, include: 1) knowledge information can be distributed in multiple turns rather than in a single sentence in a document, which means the answer can comprise multiple spans of text, 2) it's difficult to model the hierarchical structure of a complex dialogue. Taking Figure 1 as an example, to answer the question "How long is the waiting period for the E-Life Medical Insurance?", the QA model must understand the context, including clarifications and co-references, then extract the answer from multiple turns of the dialogue. To address these challenges, based on a span-based machine reading comprehension model, we introduce a tag-based module to handle the multi-span challenge and propose a key utterance selection auxiliary task and continual pre-training to improve dialogue modeling. Experimental results show improved accuracy over baseline models from the proposed approach. Our main contributions can be summarized as follows: - We design a knowledge production system based on the QA on conversation module, which improves the efficiency of maintaining the FAQ database for a chatbot. - We introduce a construction pipeline and release the resulting dataset *kd-qaconv*, which, to the best of our knowledge, is the first public Chinese dataset for QA on conversation. - We apply a multi-span extraction model and further improve its performance through continual pre-training and multi-task learning, and validate the approach's effectiveness through extensive experiments on two datasets. ## 2 Related Work 2.1 Pre-Trained Language Models Pre-trained language models (PLMs) such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and AlBERT (Lan et al., 2019) have achieved stateof-the-art performance on many NLP tasks, including question answering (QA) tasks. These models are based on a self-attention mechanism and Transformer architecture (Vaswani et al., 2017) and are pre-trained on large corpora, which enables them to encode texts into contextualized representations. However, most of these models are pre-trained on general domain textual corpora, such as Wikipedia, News, or Books, which can be notably different from the writing style and domain-specific language used in customer service conversations. As a result, many researchers (Gururangan et al.) have found that training PLMs on domain-related dialog corpora can help improve the model's performance on dialogue-related and domain-specific downstream tasks. ## 2.2 Question Answering Question answering (QA) (Hirschman and Gaizauskas, 2001) is one of the most widely researched NLP tasks, which aims at providing correct answers to questions based on the given knowledge source. Considering that dialogue is one of the primary forms of interaction and significantly different from the document, some researchers propose using dialogue as a knowledge source, named QA on conversation (Wu et al., 2022; Yang and Choi, 2019; Li et al., 2020). 
There are several unique challenges to QA on conversation: 1) information is scattered across multiple dialogue turns; 2) co-reference resolution is more difficult for understanding dialogues than documents. To alleviate such difficulties, Li and Zhao (2021) design self-supervised tasks on speaker prediction and key-utterance detection to capture salient information in long dialogues. Li and Choi (2020) propose several dialogue pre-training tasks, including utterance order prediction and mask language modeling, to learn both token and utterance embeddings to understand dialogue contexts better. ## 3 Dataset The data collection pipeline (shown in Figure 2) includes three stages as follows: ## 3.1 Dataset Pre-Processing For *ant-qaconv*, we sample 4193 dialogues from the online customer service of our company and remove the personally identifiable information and ![2_image_0.png](2_image_0.png) Figure 2: The data collection pipeline non-text elements such as emojis and pictures. For *kd-qaconv*, we choose *kdconv*, a public Chinese knowledge-driven conversation dataset about film, music, and travel, as the original dialogue data source. ## 3.2 Question Collection We use three strategies to ensure the efficiency of the collection and quality of the candidate questions for each dialogue. Question generation: Synthetic dataset construction has been proven effective in building robust and complex datasets (Feng et al., 2021). For ant-qaconv, we train the question generator (QG) with BART(Lewis et al., 2020) 2 on Dureader (He et al., 2018) and select one candidate answer extracted by an internal extractive dialogue summary model for each dialogue to generate candidate questions. Since each conversation in *kdconv* is annotated with multiple knowledge graphs (KG) triples, we train a question generator on KgCLUE3and randomly select two KG triples for each dialogue as the input of the QG model to generate candidate questions. Furthermore, we use a machine reading model trained from document datasets to filter out those generated questions with the predicted answer being used in QG since we suspect these questions may be too easy. Question retrieval: For *ant-qaconv*, we also use BM25 (Robertson et al., 1995) to retrieve the most similar question from internal FAQ database as candidate question. Human rewriting: For each dialogue, we ask the annotators to rewrite or remove the candidate questions with syntactic or semantic errors and try to write a new question that is different from the candidate question to ensure diversity of the questions. ## 3.3 Answer Labeling For each sample, we ask internal annotators to read the questions and the dialogues and then label the answers, which can be nonexistent, a single text span, or multiple non-contiguous text spans in the dialogues. ## 3.4 Unanswerable Questions Previous work (Rajpurkar et al., 2018) show that unanswerable questions can force the model to decide whether a dialogue entails that a span of text can answer the question. To increase the number of unanswerable questions, we randomly sampled questions, and the annotators verified whether a text span was able to answer the selected question. 
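To make the three answer types concrete before the statistics in the next subsection, the snippet below shows what a single labeled instance might look like; the field names and layout are illustrative assumptions on our part, not necessarily the released kd-qaconv schema.

```python
# Illustrative instance layout (field names are our assumption, not the released schema).
example = {
    "question": "这部电影的导演是谁？",
    "dialogue": [
        {"speaker": "A", "utterance": "最近看了一部电影。"},
        {"speaker": "B", "utterance": "导演是谁？"},
        {"speaker": "A", "utterance": "是一位很有名的导演。"},
    ],
    # [] marks a no-answer question; one item is a single-span answer;
    # two or more items form a multi-span answer scattered over turns.
    "answer_spans": [
        {"turn": 2, "text": "一位很有名的导演"},  # 0-indexed turn containing the span
    ],
}

is_no_answer = len(example["answer_spans"]) == 0
is_multi_span = len(example["answer_spans"]) > 1
```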
## 3.5 Dataset Statistics

|                       | ant-qaconv | kd-qaconv |
|-----------------------|------------|-----------|
| dialogues             | 3895       | 4500      |
| avg turns             | 18         | 19        |
| avg dialogue's length | 416        | 396       |
| questions             | 6550       | 9384      |
| - no-answer           | 855        | 769       |
| - single-span         | 5262       | 7592      |
| - multi-span          | 433        | 1023      |
| avg answer's length   | 42         | 13        |
| std answer's length   | 58         | 15        |

Table 1: Dataset statistics of *kd-qaconv* and *ant-qaconv*

Both datasets have been divided into training, development, and testing sets in an 8:1:1 ratio. A data sample of the *kd-qaconv* dataset can be found in Appendix A.1, while detailed data statistics are presented in Table 1. It is worth noting that the conversations in the *ant-qaconv* dataset are not strictly structured in a question-answer format; therefore, we consider one utterance as one turn when computing the statistics of conversation turns. Compared to existing datasets, our proposed datasets offer a wider range of answer types, with a larger mean and standard deviation of answer length, making them more challenging.

## 4 Method

## 4.1 Task Formulation

We define the dataset as $D = \{(C_m, q_m, a_m)\}_{m=1}^{M}$, where a dialogue with $k$ turns is represented as $C_m = ((S_{m,1}, U_{m,1}), (S_{m,2}, U_{m,2}), ..., (S_{m,k}, U_{m,k}))$, and $S_{m,i}$ and $U_{m,i}$ represent the $i$-th speaker and utterance of the $m$-th dialogue, respectively. For a given dialogue $C$ and question $q$, the model needs to predict the corresponding answer $a$, which can be a no-answer, single-span, or multi-span answer. The task can be formulated as $p(a \mid C, q)$.

## 4.2 Model

To better model the multiple types of answers, we extend the span-based model with an additional tag-based multi-span module and answer-type classification modules. As shown in Figure 3, the model consists of the following components:

Transformer Encoder: The pre-trained transformer encoder models the contextual feature representations of the question and dialogue. The input sequence of the encoder contains the question $q$ and the flattened dialogue $C$, represented as $X = [\texttt{[CLS]}; q; \texttt{[SEP]}; C]$, where $X$ is the input token sequence and $C$ is the concatenation of each utterance $U_k$ and the corresponding speaker-id $S_k$ of the $k$-th utterance.

Answer types classification: The encoder representation $H_{[CLS]}$ of the token [CLS] is treated as the dialogue-level feature, which is used to predict answerability and the multi-span classification by:

$$p^{a} = \mathrm{sigmoid}(\mathrm{MLP}^{a}(H_{[CLS]})) \qquad (1)$$
$$p^{m} = \mathrm{sigmoid}(\mathrm{MLP}^{m}(H_{[CLS]})) \qquad (2)$$

where the MLPs are multi-layer feed-forward networks, $p^{a}$ is the prediction score of answerability, i.e., whether the question can be answered, and $p^{m}$ is the probability that the answer contains multiple spans. The loss functions are defined as follows, where $y^{a}$ and $y^{m}$ are the ground truths of answerability and multi-span classification, respectively:

$$L_{a} = -y^{a} \cdot \log(p^{a}) - (1-y^{a}) \cdot \log(1-p^{a}) \qquad (3)$$
$$L_{m} = -y^{m} \cdot \log(p^{m}) - (1-y^{m}) \cdot \log(1-p^{m}) \qquad (4)$$
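For concreteness, here is a minimal PyTorch sketch of the two classification heads of Eqs. (1)-(4); the hidden size, layer shapes, and module names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class AnswerTypeHeads(nn.Module):
    """Answerability and multi-span classification heads over H_[CLS] (Eqs. 1-4)."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # Two small MLP heads; depth and width are assumptions for this sketch.
        self.mlp_a = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh(),
                                   nn.Linear(hidden_size, 1))   # answerability head
        self.mlp_m = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh(),
                                   nn.Linear(hidden_size, 1))   # multi-span head
        self.bce = nn.BCEWithLogitsLoss()  # numerically stable sigmoid + binary cross-entropy

    def forward(self, h_cls, y_a=None, y_m=None):
        logit_a = self.mlp_a(h_cls).squeeze(-1)    # (batch,)
        logit_m = self.mlp_m(h_cls).squeeze(-1)    # (batch,)
        p_a, p_m = torch.sigmoid(logit_a), torch.sigmoid(logit_m)   # Eqs. (1)-(2)
        losses = {}
        if y_a is not None:
            losses["L_a"] = self.bce(logit_a, y_a.float())          # Eq. (3)
        if y_m is not None:
            losses["L_m"] = self.bce(logit_m, y_m.float())          # Eq. (4)
        return p_a, p_m, losses
```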
Answer Prediction: The answer prediction task is to predict the answer text from the dialogue context, and it includes single-span prediction and multi-span prediction. $H_U = (H_{11}, ..., H_{ij}, ..., H_{kn})$ is the sequence of contextualized representations of all utterance tokens. The single-span prediction task computes the probability of each token being the start or the end of an answer span. Formally, feed-forward networks are used to calculate a score for each token, and a softmax function then computes the start and end probability distributions over all tokens in this sequence:

$$p^{s} = \mathrm{softmax}(\mathrm{MLP}^{s}(H_{U})) \qquad (5)$$
$$p^{e} = \mathrm{softmax}(\mathrm{MLP}^{e}(H_{U})) \qquad (6)$$

The loss of single-span prediction is defined as follows, where $y_s$ and $y_e$ denote the labeled start and end positions of the answer, respectively:

$$L_{span} = -\frac{1}{2}\left(\log(p^{s}_{y_s}) + \log(p^{e}_{y_e})\right) \qquad (7)$$

The multi-span prediction task extracts a variable number of spans from the input dialogue utterances. We follow the method proposed by Segal et al. (2020), casting the task as a sequence tagging problem with an IO tag schema, predicting for each token whether it should be part of an answer span or not. The probability the model assigns to token $ij$ having the labeled tag $T_{ij}$ is:

$$p^{ij}(T_{ij}) = \mathrm{softmax}(\mathrm{MLP}^{t}(H_{ij})) \qquad (8)$$

and the loss function is defined as follows, where $N$ is the total number of tokens and $n_i$ is the number of tokens in the $i$-th utterance:

$$L_{tag} = -\frac{1}{N}\sum_{i=1}^{k}\sum_{j=1}^{n_i} \log(p^{ij}(T_{ij})) \qquad (9)$$

## 4.3 Key Utterance Selection

Since each answer span is a subsequence of an utterance, modeling key utterance selection (KUS) can help locate the answer spans. Formally, $H_s = (H_{[s_1]}, ..., H_{[s_i]}, ..., H_{[s_k]})$ is the sequence of representations of each utterance, i.e., the contextualized representations of the speaker-id tokens. The probability that the $i$-th utterance contains the answer is calculated as:

$$p^{[s_i]} = \mathrm{sigmoid}(\mathrm{MLP}^{K}(H_{[s_i]})) \qquad (10)$$

The loss is defined as:

$$L_{KUS} = -\frac{1}{k}\sum_{i=1}^{k}\left(y_{[s_i]}\cdot \log(p^{[s_i]}) + (1-y_{[s_i]})\cdot \log(1-p^{[s_i]})\right) \qquad (11)$$

where $y_{[s_i]}$ is the label for utterance $[s_i]$, with $y_{[s_i]}=1$ if it contains answer spans.

We adopt a multi-task learning scheme, which has been proven to be an effective way to improve model performance in NLP (Chen et al., 2021). $\lambda_{a}$, $\lambda_{m}$, $\lambda_{span}$, $\lambda_{tag}$, $\lambda_{KUS}$ are hyper-parameters that control the weights of the tasks, and the model loss is defined as:

$$L_{model} = \lambda_{span}\cdot L_{span} + \lambda_{tag}\cdot L_{tag} + \lambda_{a}\cdot L_{a} + \lambda_{m}\cdot L_{m} + \lambda_{KUS}\cdot L_{KUS} \qquad (12)$$
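As a sketch of how the IO tagging scheme above can be turned into non-contiguous answer spans at inference time, the following function groups consecutive "inside" tokens; the function name and the 0.5 threshold are illustrative assumptions rather than the authors' decoding rule.

```python
def decode_io_tags(tokens, io_probs, threshold=0.5):
    """Turn per-token 'inside' probabilities (cf. Eq. 8) into answer spans.

    tokens:   list of token strings for the flattened dialogue
    io_probs: list of floats, probability that each token is Inside an answer
    Returns a list of (start_index, end_index_inclusive, text) spans.
    """
    spans, start = [], None
    for i, p in enumerate(io_probs):
        inside = p >= threshold
        if inside and start is None:
            start = i                       # open a new span
        elif not inside and start is not None:
            spans.append((start, i - 1, "".join(tokens[start:i])))
            start = None                    # close the current span
    if start is not None:                   # a span running to the last token
        spans.append((start, len(tokens) - 1, "".join(tokens[start:])))
    return spans

# Example: two non-contiguous spans are recovered from the tag probabilities.
toks = ["答", "案", "一", "无", "关", "答", "案", "二"]
probs = [0.9, 0.8, 0.7, 0.1, 0.2, 0.95, 0.9, 0.85]
print(decode_io_tags(toks, probs))  # [(0, 2, '答案一'), (5, 7, '答案二')]
```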
## 4.4 Continual Pre-Training

Public pre-trained transformer models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have demonstrated state-of-the-art (SOTA) performance on various NLP tasks. However, they are primarily trained on general-domain textual corpora such as Wikipedia, news, or books, which differ notably from customer service conversations in discourse structure, writing style, and domain. Therefore, we introduce continual pre-training on dialogues to better model the dialogue structure. For *ant-qaconv*, we collect about one million unlabeled customer service dialogues from the online customer service chat log. For *kd-qaconv*, we download and process several publicly released Chinese dialogue datasets, including *Duconv* (Wu et al., 2019), *Douban* (Wu et al., 2017), *Ecommerce* (Zhang et al., 2018), *DuRecDial* (Liu et al., 2020) and *LCCC* (Wang et al., 2020). Considering the hierarchical structure of dialogue, we design the following token-level and utterance-level pre-training tasks.

Masked language model (MLM): MLM is an essential task for achieving better contextualized representations. For BERT (Devlin et al., 2019), 15% of tokens are picked randomly; 80% of these tokens are replaced with "[MASK]", 10% are replaced with another random token, and 10% are kept unchanged. For RoBERTa (Liu et al., 2019), we mask whole words instead of tokens (Cui et al., 2021). We then compute the cross-entropy loss of predicting the original tokens.

Response selection: Response selection is a classic dialogue pre-training task, which can enhance the contextual understanding of dialogue (He et al., 2022). The positive example (with label $l = 1$) is obtained by concatenating a context $C$ with its corresponding response $r$ in the original conversation. For negative samples, we randomly sample a response $r^{-}$ from another dialogue. We feed the concatenated sequence of $C$ and $r$ into the transformer encoder and a binary classification head on the token [CLS]:

$$p(l=1 \mid C, r) = \mathrm{sigmoid}(H_{[CLS]}) \qquad (13)$$

to classify whether $r$ is the proper response for context $C$. The cross-entropy loss is defined as:

$$L_{RS} = -\log(p(l=1 \mid C, r)) - \log(p(l=0 \mid C, r^{-})) \qquad (14)$$

## 5 Experiment

## 5.1 Experimental Setting

Following the standard evaluation metrics in the QA community, we choose word-level F1 and Exact Match (EM) accuracy as metrics to measure the overlap between the prediction and the ground truth answer (Rajpurkar et al., 2016). As the answers in *ant-qaconv* are long and descriptive, EM is unsuitable, so we choose F1 as our primary metric. To test our method, we choose bert-base-chinese (https://huggingface.co/bert-base-chinese) and chinese-roberta-wwm-ext (https://huggingface.co/hfl/chinese-roberta-wwm-ext) as our PLMs, which are widely used in Chinese NLP tasks. Due to the 512 positional embedding limit of RoBERTa and BERT, truncating inputs by removing overflowing tokens can result in the loss of contextual information and samples. To address this issue, we utilize a sliding window mechanism to construct training samples; during prediction, we select the prediction whose answer position lies in the middle of a window. In our experiments, the sliding window size is set to 128. For training, we use the Adam optimizer (Kingma and Ba, 2014) with default parameters, a learning rate of 1e-5, and a batch size of 16 for 15 epochs. We select the best model based on the F1 score on the development set and evaluate it on the test set. The hyper-parameter settings are shown in Table 2; if a sub-task is not included, we simply set its weight to 0.

| symbols | tasks                        | weights |
|---------|------------------------------|---------|
| λa      | answerability classification | 0.3     |
| λm      | multi-span classification    | 0.3     |
| λspan   | span model                   | 0.6     |
| λtag    | tag model                    | 0.6     |
| λKUS    | key utterance selection      | 0.3     |

Table 2: The weights of the loss for different sub-tasks
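The sliding-window construction mentioned above can be sketched as follows, assuming a 512-token encoder limit, a stride of 128, and room reserved for three special tokens; the helper name and data layout are our own assumptions, not the authors' implementation.

```python
def make_windows(question_ids, dialogue_ids, max_len=512, stride=128, n_special=3):
    """Split a long (question, dialogue) pair into overlapping training samples.

    Each window keeps the full question and a contiguous chunk of the dialogue, so
    every dialogue token falls near the middle of at least one window.
    n_special reserves room for [CLS]/[SEP]-style special tokens.
    """
    budget = max_len - len(question_ids) - n_special   # room left for dialogue tokens
    windows, start = [], 0
    while True:
        chunk = dialogue_ids[start:start + budget]
        windows.append({"question": question_ids, "dialogue": chunk, "offset": start})
        if start + budget >= len(dialogue_ids):
            break
        start += stride                                 # slide the window forward
    return windows
```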
## 5.2 Experimental Results

Table 3 shows our experimental results on *kd-qaconv* and *ant-qaconv*. As shown in Table 4, we further analyze the models' F1 performance for different answer types in the *kd-qaconv* dataset.

| Model          | ant-qaconv F1 | ant-qaconv EM | kd-qaconv F1 | kd-qaconv EM |
|----------------|---------------|---------------|--------------|--------------|
| bert-tagger    | 46.01         | 15.26         | 62.77        | 42.86        |
| bert-span      | 66.58         | 34.34         | 79.35        | 65.53        |
| +multi-span    | 67.04         | 36.55         | 82.33        | 69.98        |
| +pre-training  | 69.19         | 37.75         | 84.17        | 72.05        |
| +KUS task      | 69.74         | 37.55         | 85.11        | 73.71        |
| roberta-tagger | 52.09         | 18.88         | 68.96        | 50.31        |
| roberta-span   | 67.19         | 36.75         | 82.54        | 69.15        |
| +multi-span    | 68.55         | 38.15         | 85.76        | 75.26        |
| +pre-training  | 71.25         | 39.16         | 86.31        | 76.29        |
| +KUS task      | 69.75         | 37.15         | 86.58        | 76.50        |

Table 3: Results on *ant-qaconv* and *kd-qaconv*; KUS refers to the key utterance selection task.

| Model         | All   | Single span | Multi span |
|---------------|-------|-------------|------------|
| bert-tagger   | 62.77 | 59.99       | 79.74      |
| bert-span     | 79.35 | 86.33       | 55.03      |
| +multi-span   | 82.33 | 85.75       | 74.72      |
| +pre-training | 84.17 | 88.33       | 76.46      |
| +KUS task     | 85.11 | 89.79       | 74.89      |

Table 4: The models' F1 performance in single-span and multi-span cases in *kd-qaconv*.

The tag-based models, such as *bert-tagger* and *roberta-tagger*, perform well in multi-span cases, but not as effectively in single-span cases, which make up the majority of the datasets. This may be because the tag model needs to predict whether each token is part of the answer, which can be hard to optimize, especially when the mean and variance of answer lengths in the datasets are large. In Table 4, the span-based models, *bert-span* and *roberta-span*, were chosen as our base models owing to their superior overall performance. We then sequentially added the proposed modules to further improve performance. First, the tag-based multi-span module helps handle multi-span answers, improving F1 by 0.5% on *ant-qaconv* and 3% on *kd-qaconv*; the latter improves more because there are more multi-span cases in the *kd-qaconv* dataset. Second, continual pre-training on *ant-qaconv* leads to improvements of 2.15% and 3.1% over BERT and RoBERTa, respectively; the gain for *kd-qaconv* is 1.84% for BERT and 0.55% for RoBERTa. We suspect the improvements come from improved dialogue representations and domain adaptation. Pre-training improves *ant-qaconv* even more because there is a more significant gap between internal customer service dialogues and the general-domain text corpora used by the original PLMs, so the domain knowledge and complex dialogue structure introduced by continual pre-training are of substantial help for downstream QA tasks. Third, the key utterance selection (KUS) task improves the model accuracy in most settings except for RoBERTa on *ant-qaconv*, probably because it helps the model identify which turn contains the information needed to answer a given question. In all, our method is effective on both datasets and with both BERT and RoBERTa as PLMs.

## 5.3 Application in Knowledge Production

We have deployed the proposed model in real-world knowledge production for the FAQ database used by *Alipay*'s online customer service chatbot, in the following workflow (a rough orchestration sketch follows this list):

1) A cluster module and a classifier process the chatbot log data to identify incorrectly answered user questions;
2) A retrieval model based on BM25 and SimCSE (Gao et al., 2021) retrieves the most relevant dialogues from human customer service data;
3) The proposed QA model extracts the answers for user questions from the retrieved dialogues;
4) Human operators decide whether to adopt the QA pairs into the FAQ database, and they can also refine them.
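The glue code below is only a rough sketch of how the four steps could be wired together; the real cluster, retrieval, QA, and review components are internal, so here they are passed in as hypothetical callables rather than actual APIs.

```python
from typing import Callable, Iterable

def produce_faq_candidates(
    chatbot_logs: Iterable[str],
    find_bad_questions: Callable,   # step 1: clustering + classification (internal module)
    retrieve_dialogues: Callable,   # step 2: BM25 + SimCSE-style retriever
    extract_answer: Callable,       # step 3: the QA-on-conversation model
    human_review: Callable,         # step 4: operator adoption / refinement
):
    """Hypothetical end-to-end flow for mining new FAQ entries from dialogues."""
    adopted = []
    for question in find_bad_questions(chatbot_logs):       # 1) badly answered questions
        dialogues = retrieve_dialogues(question, top_k=3)    # 2) relevant service dialogues
        answer = extract_answer(question, dialogues)         # 3) (multi-)span extraction
        if answer and (reviewed := human_review(question, answer)) is not None:
            adopted.append(reviewed)                          # 4) adopt the refined QA pair
    return adopted
```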
Online Evaluation: We use the adoption rate as our end-to-end accuracy. Based on the statistics of three months of data after the system deployment, the overall adoption rate is about 65%. We sampled and analyzed the QA pairs that were not adopted: only 8% of them are caused by inaccurate or incomplete answers from the extraction model. The rest are caused by retrieval mistakes, or by queries and answers that are unclear or unsuitable as content for the FAQ database. In the future, we will jointly optimize the knowledge production pipeline for better performance. We also use knowledge production efficiency as a metric: the average time cost of producing a QA pair is reduced from 10 minutes to 2 minutes. When knowledge operators produce QA pairs directly, they must summarize answers from chat logs and related documents; with the assistance of our deployed system, they are only required to review and refine the recommended answers.

## 6 Conclusion

We design a knowledge production system including QA on conversations to help mine answers from human customer service dialogues. Based on a span-based extraction model, we add a multi-span extraction module trained with multi-task learning and continual pre-training schemes to extract non-contiguous answers from conversational contexts. Experimental results show that our approach outperforms the baseline models on both the *ant-qaconv* and *kd-qaconv* datasets, the latter of which will be publicly released. Finally, the proposed method has been deployed to support *Alipay*'s customer service chatbot system, which significantly reduces the time human operators spend producing new QA pairs.

## 7 Ethical Considerations

We present the following ethical considerations for data authorization, privacy, and deployment.

- We have obtained explicit permission from customers to collect and utilize customer service dialogue data.
- We use desensitization tools to remove sensitive information from customer service conversations. Only a few annotators can access this data, and they check again to ensure that there is no personal user information in the dataset. At the same time, the *ant-qaconv* dataset is only used for internal research.
- The QA pairs produced by the knowledge production system are checked by human operators to make sure private messages are removed.

## 8 Acknowledgments

Ant Group is China's leading technology and financial services company that provides a range of digital financial solutions such as online payment and wealth management. The authors would like to thank Changlin Yang, Siye Liu, Sen Hu, Wangshu Zhang, Teng Xu, Jing Zhen, and other members of Ant Group for helpful discussions and comments. We would also like to thank the reviewers for their valuable comments.

## References

Shijie Chen, Yu Zhang, and Qiang Yang. 2021. Multitask learning in natural language processing: An overview. *arXiv preprint arXiv:2109.09138*.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3504–3514.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for nlp.
In *Findings of the Association* for Computational Linguistics: ACL-IJCNLP 2021, pages 968–988. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. *arXiv preprint arXiv:2104.08821*. Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, et al. 2022. Galaxy: A generative pre-trained model for task-oriented dialog with semisupervised learning and explicit policy injection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10749–10757. Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, et al. 2018. Dureader: a chinese machine reading comprehension dataset from real-world applications. In *Proceedings of the Workshop on Machine Reading for Question Answering*, pages 37–46. Lynette Hirschman and Robert Gaizauskas. 2001. Natural language question answering: the view from here. natural language engineering, 7(4):275–300. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. *arXiv preprint* arXiv:1909.11942. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Changmao Li and Jinho D Choi. 2020. Transformers to learn hierarchical contexts in multiparty dialogue for span-based question answering. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5709–5714. Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020. Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure. In *Proceedings of the 28th International Conference on Computational Linguistics*, pages 2642–2652. Yiyang Li and Hai Zhao. 2021. Self-and pseudo-selfsupervised prediction of speaker and key-utterance for multi-party dialogue reading comprehension. arXiv preprint arXiv:2109.03772. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, and Ting Liu. 2020. Towards conversational recommendation over multi-type dialogs. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1036– 1049, Online. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. *arXiv preprint arXiv:1806.03822*. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. 
Coqa: A conversational question answering challenge. *Transactions of the Association for Computational Linguistics*, 7:249–266. Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at trec-3. *Nist Special Publication Sp*, 109:109. Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, and Jonathan Berant. 2020. A simple and effective model for answering multi-span questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3074–3080. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, and Minlie Huang. 2020. A large-scale chinese short-text conversation dataset. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 91–103. Springer. Chien-Sheng Wu, Andrea Madotto, Wenhao Liu, Pascale Fung, and Caiming Xiong. 2022. Qaconv: Question answering on informative conversations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5389–5411. Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, and Haifeng Wang. 2019. Proactive human-machine conversation with explicit conversation goal. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3794–3804. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrievalbased chatbots. In *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505. Zhengzhe Yang and Jinho D Choi. 2019. Friendsqa: Open-domain question answering on tv show transcripts. In *Proceedings of the 20th Annual SIGdial* Meeting on Discourse and Dialogue, pages 188–197. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018. Modeling multi-turn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3740–3752. Hao Zhou, Chujie Zheng, Kaili Huang, Minlie Huang, and Xiaoyan Zhu. 2020. KdConv: A Chinese multi-domain dialogue dataset towards multi-turn knowledge-driven conversation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7098–7108, Online. Association for Computational Linguistics. ## A Appendix A.1 Data Samples As shown in Figure 4, we sample a conversation and corresponding questions and answers from *kdqaconv*. | Conversation(Film) | | | |--------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------| | User1 | 你看过《教父》吗? Have you watched The Godfather? | | | User2 | 是的,这个电影获过第45届奥斯卡金像奖 最佳影片、第45届奥斯卡金像奖 最佳改编剧本。 Yes, this movie has won Best Picture at the 45th Academy Awards and Best Adapted Screenplay at the 45th Academy Awards. | | | User1 | 是的,还有第45届奥斯卡金像奖 最佳男主角奖。这个电影是哪年上映的啊? Yeah, and the 45th Academy Awards for Best Actor in a Leading Role. What year was this movie released? 
| User2 | 1972年03月24日上映的,这个电影是哪里制片的呢? It was released on March 24, 1972. Where was this film produced? | |
| User1 | 是美国制片的,成本也用了不少钱吧? It was produced in the United States, and it cost a lot of money, right? | |
| User2 | 是的,6,000,000美元。那票房如何呢? Yes, $6,000,000. How about the box office? | |
| User1 | 截至1997年票房达到2亿4500万美元,你知道它的拍摄景点在哪吗? As of 1997, the box office reached 245 million US dollars. Do you know where it was filmed? | |
| User2 | 这个不清楚了,你知道导演是谁吗? This is not clear, do you know who the director is? | |
| User1 | 是马里奥·普佐,他的这部电影很成功啊,选的演员也很适合戏里的角色。 It's Mario Puzo. His movie is very successful, and the actors he chooses are also very suitable for the roles in the movie. | |
| User2 | 是的阿尔·帕西诺就是,他出演了这个系列电影。 Yeah, so is Al Pacino, and he's in the series … | |
| User1 | 这可不知道了,你知道他除了做演员和导演以外,还做什么吗? I don't know, do you know what else he does besides being an actor and director? | |
| User2 | 他也是名制片人,编剧。 He is also a producer and screenwriter. …… | |

| Questions | Answers | Answer types |
|-----------|---------|--------------|
| 阿尔·帕西诺是做什么的? (What is Al Pacino's occupation?) | 演员和导演;制片人,编剧 (Actor, director, producer and screenwriter) | Multi-span |
| 你知道教父获得过哪些奖项吗? (Do you know what awards The Godfather has won?) | 第45届奥斯卡金像奖 最佳影片、第45届奥斯卡金像奖 最佳改编剧本;第45届奥斯卡金像奖 最佳男主角奖 (Best Picture at the 45th Academy Awards, Best Adapted Screenplay at the 45th Academy Awards, and the 45th Academy Awards for Best Actor in a Leading Role) | Multi-span |
| 《教父》这个电影是什么时候上映的啊? (When was the movie "The Godfather" released?) | 1972年03月24日 (March 24, 1972) | Single-span |

Figure 4: An example in *kd-qaconv*. The conversation is from *kdconv* (Zhou et al., 2020), and the questions and answers are constructed with our data collection pipeline. *kd-qaconv* is a Chinese QA on conversation dataset, and we also translate the content to English for better understanding.
crisostomi-etal-2023-mitigating
Mitigating the Burden of Redundant Datasets via Batch-Wise Unique Samples and Frequency-Aware Losses
https://aclanthology.org/2023.acl-industry.23
Datasets used to train deep learning models in industrial settings often exhibit skewed distributions with some samples repeated a large number of times. This paper presents a simple yet effective solution to reduce the increased burden of repeated computation on redundant datasets. Our approach eliminates duplicates at the batch level, without altering the data distribution observed by the model, making it model-agnostic and easy to implement as a plug-and-play module. We also provide a mathematical expression to estimate the reduction in training time that our approach provides. Through empirical evidence, we show that our approach significantly reduces training times on various models across datasets with varying redundancy factors, without impacting their performance on the Named Entity Recognition task, both on publicly available datasets and in real industrial settings. In the latter, the approach speeds training by up to 87%, and by 46% on average, with a drop in model performance of 0.2% relative at worst. We finally release a modular and reusable codebase to further advance research in this area.
# Mitigating The Burden Of Redundant Datasets Via Batch-Wise Unique Samples And Frequency-Aware Losses Donato Crisostomi∗ Amazon Alexa AI Sapienza, University of Rome crisostomi@di.uniroma1.it Andrea Caciolai∗ Amazon Alexa AI andccl@amazon.it Alessandro Pedrani Amazon Alexa AI pedrana@amazon.it Alessandro Manzotti Amazon Alexa AI manzotti@amazon.it Enrico Palumbo Amazon Alexa AI palumboe@amazon.it Kay Rottmann ![0_image_0.png](0_image_0.png) Amazon Alexa AI krrottm@amazon.de Davide Bernardi Amazon Alexa AI dvdbe@amazon.it ## Abstract Datasets used to train deep learning models in industrial settings often exhibit skewed distributions with some samples repeated a large number of times. This paper presents a simple yet effective solution to reduce the increased burden of repeated computation on redundant datasets. Our approach eliminates duplicates at the batch level, without altering the data distribution observed by the model, making it model-agnostic and easy to implement as a plug-and-play module. We also provide a mathematical expression to estimate the reduction in training time that our approach provides. Through empirical evidence, we show that our approach significantly reduces training times on various models across datasets with varying redundancy factors, without impacting their performance on the Named Entity Recognition task, both on publicly available datasets and in real industrial settings. In the latter, the approach speeds training by up to 87%, and by 46% on average, with a drop in model performance of 0.2% relative at worst. We finally release a modular and reusable codebase to further advance research in this area. ## 1 Introduction Deep neural networks have recently enabled impressive results across many fundamental tasks (Brown et al., 2020; Dosovitskiy et al., 2021). However, training state-of-the-art models is now a very demanding process, in terms of both time and resources (Strubell et al., 2019). The issue is exacerbated when models are required to train on datasets that naturally exhibit a redundant distribution. User queries are one such example: AOL (Pass et al., 2006) and MSN query logs (Zhang and Moffat, 2006) are composed for the 51.6% and 52.4% of duplicates, respectively. ∗Equal contribution. Figure 1: Effect of employing batch-wise unique samples. Consider a dataset with 8 samples of 4 distinct ![0_image_1.png](0_image_1.png) types (e.g. utterance text). Instead of inserting samples sequentially until the batch is full, by inserting only the first encountered sample per type (keeping track of the occurrences) the number of batches is decreased. A similar phenomenon can be observed in recommender systems data, in which most of the entries involve a relatively small set of popular items (Cremonesi et al., 2010). Finally, in large scale conversational assistant data, the vast majority of user interactions is composed of commands and queries that are frequently expressed with minimal or no variation. When training on this redundant data, there will be instances in which: (i) the training input is the same, and (ii) the model has the same weights; in these instances repeated computations will occur, increasing training times. In this paper, we propose a simple yet effective approach to reduce repeated computations during training by removing duplicates at the batch level, while accounting for frequency of the samples in the batch during the loss computation. 
This leaves the data distribution perceived by the model untouched, and we empirically show that it leads to a model that has similar weights and has followed a similar trajectory in the parameter space during training, compared to a model trained without any data deduplication. Our contribution is therefore four-fold: (i) we propose a novel online deduplication technique to mitigate the training burden over redundant datasets and provide rigorous mathematical reasoning backing its benefits; (ii) we show its effectiveness on both real industrial datasets and artificially upsampled public datasets; (iii) we further provide a set of empirical analyses, including a study of the effect on the parameters' evolution and the difference in benefit when varying batch size; (iv) finally, we release a modular and extensible codebase (available at github.com/amazon-science/unique-batches), implementing deduplicator classes, easily reusable by the community. Even though the methodology is agnostic to both task and model, we experimentally validate its strengths on NER as it constitutes a critical task for large-scale conversational assistants, on which we show the duplication problem to be relevant. We believe this work fills a gap in the current research landscape, since deduplication techniques for training data appear to be an under-explored research territory, but also an increasingly pressing need in industry.

## 2 Related Work

The success of language models pre-trained on large corpora, such as BERT (Devlin et al., 2018), raised awareness of the optimization of training times and costs within the NLP community. Strubell et al. (2019) showed that the carbon footprint of NLP research is following a concerning trend and spurred researchers to prioritize the development of computationally efficient algorithms. Countless directions have been explored by the community, from distillation techniques (Hinton et al., 2015) used to produce smaller models (Sanh et al., 2019) up to efficient computation frameworks to improve distributed training (Song et al., 2023). To the best of our knowledge, only a few works in the literature address the problem of optimising training in the presence of duplicates in the data. Lee et al. (2021) show the benefits of completely removing duplicates when pre-training large language models. Ya-Guan et al. (2020) propose a way to improve accuracy focusing on mini-batches and performing undersampling and oversampling in order to balance classes. Faghri et al. (2020) mention the possibility of reducing computation time by removing all but one of the duplicates in a mini-batch, although their work focuses on the optimal way to sample data to minimize gradient variance, and not on training time reduction. Similarly, Wang et al. (2016) propose a way to balance the training effort among batches, to improve Stochastic Gradient Descent (SGD) by minimizing gradient variance. Compared to these previous works, our approach focuses on reducing the repeated computation that occurs when training on redundant datasets, retaining model performance while being minimally invasive to the pre-existing setup.

## 3 Approach

A deep learning model M(θ) is typically trained over a given dataset D with SGD (or one of its variants), estimating the gradient of a loss function with respect to the model weights θ, by iterating over the dataset in batches of fixed size.
If $D$ contains duplicates, then some of them might fall within the same batch, resulting in the same computations occurring multiple times. We propose to remove duplicates while building the batch, also accounting for the number of occurrences of the samples when computing the loss. In practice, first creating the batches and then removing the duplicates would result in smaller batch sizes and therefore in under-exploiting the parallel computation enabled by GPUs. Therefore, we keep the actual batch size the same, filling a batch with unique samples (ignoring repetitions when encountered, see fig. 1), but accounting for repetitions by multiplying the loss of a sample by its frequency in the batch. See appendix A.1 for the details of the procedure. By considering the number of repetitions of the samples, the size of the batches remains the same while virtually containing more samples, therefore reducing the required number of batches to iterate over the dataset and leading to training time reduction. We remark that the proposed technique does not make any assumption on the underlying task and model, but only affects the way data is loaded during training. Therefore, it is well suited for a production setting in which one wishes to reduce the burden of frequent re-training, while at the same time changing the setup as little as possible.

Motivation Our methodology is grounded on the following intuitions: (i) frequent samples should be given a larger weight than rarer ones, as they are most frequently encountered in the production traffic, and (ii) the model should be able to see the duplicated samples multiple times per epoch, i.e. at different stages of the model parameters evolution. While (i) explains why we should deduplicate and take sample frequency into account in the loss computation, (ii) explains the need to do it at the batch level. In section 6 we provide empirical support to these observations.

Training time reduction The reduction in training time comes from a direct reduction in the number of batches, as each batch virtually contains more samples than its actual batch size. We can define the virtual batch size $B^{\mathrm{virtual}}$ of batch $b$ containing $B$ unique objects $o_i$ as

$$B^{\mathrm{virtual}}=\sum_{i=1}^{B}{\mathrm{occurrences}}(o_{i},b).\qquad(1)$$

Let $N^{\mathrm{dup}}$ be the number of duplicates in batch $b$, once it has been filled with $B$ unique samples; then $B^{\mathrm{virtual}} = B + N^{\mathrm{dup}}$. The number of duplicates in a batch is a random quantity that depends on the dataset distribution and the batch size. As the randomness only regards $N^{\mathrm{dup}}$, we have

$$\mathbb{E}\left\{B^{\mathrm{virtual}}\right\}=B+\mathbb{E}\{N^{\mathrm{dup}}\}.\qquad(2)$$

The expected relative increase in batch size is then $B^{\mathrm{inc}} = \mathbb{E}\left\{B^{\mathrm{virtual}}\right\}/B$. Let $N$ be the number of training samples. Let $M = \lceil N/B \rceil$ be the number of batches resulting from iterating over the samples with batch size $B$. Finally, let $M'$ be the number of batches when using batch size $\mathbb{E}\left\{B^{\mathrm{virtual}}\right\}$. By definition, we have

$$\mathbb{E}\left\{M^{\prime}\right\}=\left\lceil\frac{N}{\mathbb{E}\left\{B^{\mathrm{virtual}}\right\}}\right\rceil=\left\lceil\frac{N}{B\,B^{\mathrm{inc}}}\right\rceil=\left\lceil\frac{M}{B^{\mathrm{inc}}}\right\rceil.\qquad(3)$$

If we assume that a mini-batch of size $B$ can be processed in parallel, the time complexity of a pass over $N$ samples to optimize $d$ parameters is (Bottou and Bousquet, 2007) $\mathcal{O}\left(\frac{Nd}{B}\right) = \mathcal{O}(Md)$. Our approach introduces an $\mathcal{O}(N)$ step, while at the same time reducing the time complexity to $\mathcal{O}(M'd) = \mathcal{O}\left(\frac{Nd}{B\,B^{\mathrm{inc}}}\right)$.
This expected reduction in training time complexity simply reflects the expected reduction in the number of batches computed above. This leads the overall complexity to

$${\mathcal{O}}\left(N+{\frac{N d}{B B^{\mathrm{inc}}}}\right)={\mathcal{O}}\left({\frac{N d}{B}}\left({\frac{B}{d}}+{\frac{1}{B^{\mathrm{inc}}}}\right)\right)\qquad(4)$$

Since usually the number of parameters to optimize is much larger than the batch size, we can assume $B \ll d$, from which it follows $\frac{B}{d} \approx 0$, hence the overall complexity is

$${\mathcal{O}}\left({\frac{1}{B^{\mathrm{inc}}}}\,{\frac{N d}{B}}\right)\qquad(5)$$

meaning the expected reduction in training time is proportional to the expected relative increase in batch size.

Accounting for duplicates Removing duplicates in the batch also removes the influence of repeated samples, hence removing the larger contribution to the loss of more frequent samples. This leads to different gradients and therefore a different training evolution. To counter this, the contribution to the loss of each sample $o_i$ is re-weighted by

$$\mathrm{frequency}(o_{i},b) = \frac{\mathrm{occurrences}(o_{i},b)}{B^{\mathrm{virtual}}}\qquad(6)$$

thus resulting in the same loss signal we would have when using the virtual batch size instead. We delve into more details on the effect of including the frequency signal in the loss in section 6.
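To make the weighting of Eq. (6) concrete, here is a minimal PyTorch sketch (ours, not the released codebase) that combines the per-sample losses of a batch of unique samples using their occurrence counts; it assumes the data loader already yields unique samples with their counts, as sketched for the batching schedule in Appendix A.1.

```python
import torch

def frequency_weighted_loss(per_sample_losses: torch.Tensor,
                            occurrences: torch.Tensor) -> torch.Tensor:
    """Combine the losses of the unique samples in a batch using Eq. (6).

    per_sample_losses: shape (B,), one loss value per unique sample in the batch.
    occurrences:       shape (B,), how many times each sample occurred while the
                       batch was being filled (duplicates were skipped but counted).
    """
    virtual_batch_size = occurrences.sum()                # B_virtual = B + N_dup, Eq. (1)
    weights = occurrences.float() / virtual_batch_size    # frequency(o_i, b), Eq. (6)
    return (weights * per_sample_losses).sum()            # same signal as the virtual batch

# Toy usage: 4 unique samples, one of which occurred 5 times in the batch.
losses = torch.tensor([0.7, 0.2, 0.9, 0.4])
counts = torch.tensor([5, 1, 1, 1])
print(frequency_weighted_loss(losses, counts))            # weighted mean over 8 virtual samples
```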
The overall expected number of duplicates can be computed as a weighted average of the estimates on each bucket, weighted by the bucket size. We highlight that if samples are grouped in buckets by length, then all of the duplicates of one sample will fall in the same bucket. This has the effect of increasing the number of expected duplicates in each batch (N in eq. (8) is smaller), leading to larger training time reduction in realistic scenarios that use bucketing. ## 5 Experimental Setting The proposed approach is tested on the task of NER (Tjong Kim Sang and De Meulder, 2003). Two samples (also referred to as utterances) are considered duplicates if they share the same *annotation*, i.e. word-level tokens and corresponding ground truth NER labels in the dataset. The experiments are performed on datasets used for training a large-scale conversational assistant (referred to as *internal* in the following) and also on publicly available data. The internal datasets comprise live traffic utterances, de-identified for privacy regulations and annotated to enable supervised training of deep learning models for solving the NER task. The datasets exhibit varying degree of skewness in their redundancy, and we refer to them as InternalMS, InternalVS, InternalXS, meaning mildly skewed, *very skewed* and *extremely* skewed, respectively. Given the artificial deduplication of manually curated datasets, we upsample a public dataset to mimic the redundancy observed on internal data. The *MITRestaurant* dataset (Liu et al., 2013), consisting of restaurant-related queries, is chosen for the semantic similarity of its queries to the utterances found in the internal datasets. From this dataset, upsampling is performed to arrive at three datasets with redundancy ratios of r1 = 0.5, r2 = 0.7 and r3 = 0.9. We refer to these artificially upsampled public datasets as MITMS, MITVS, MITXS, with the same meaning of the acronyms. Given the redundancy ratio r of interest for each of the three datasets, these are generated as follows. First, the number of duplicates to draw is computed, according to the above probability distribution (eq. (8)), as d =r 1−r n. Then, to generate a realistic distribution, the fact that shorter utterances are more frequent (Borbély and Kornai, 2019) and thus more likely to be duplicated is leveraged. In a conversational assistant, for instance, we expect to observe more frequently short utterances such as "yes" or "stop" than long sentences related to some more specific queries. This heuristic is implemented by sampling the d duplicates according to a power-law over the length of the utterance in characters, for which an utterance ui with length liis sampled with probability p(ui) = 1 (li)α . The exponent of the power-law α is set to 3 for MITMS and MITVS, while it is set to 5 for MITXS. See table 2 for more statistics on these artificially upsampled datasets. We compare the proposed approach with various baselines (see table 1), which either deduplicate at the batch level or at the dataset level, can either account for the number of repetitions during the loss computation or ignore them, and keep one or more copies per sample. 
The comparison is drawn in terms of training steps and time, as well as performance of the trained model in terms of micro- | Dataset | Size | Redundancy | # Named Entities | |---------------|--------|--------------|--------------------| | MITRestaurant | 9180 | - | 17 | | MITMS | 18360 | 0.5 | | | MITVS | 30600 | 0.7 | | | MITXS | 91801 | 0.9 | | F1 score, considering averages across 5 seeds. All the deduplication methods are applied to the training of a model with the same architecture, across all experiments. In particular, we employ a simple NER architecture that obtains contextual word embeddings from a pre-trained DistilBERT encoder (Sanh et al., 2019). The embeddings are then fed to a Multi-Layer Perceptron (MLP) to map into the label space. The models are trained to convergence with early stopping. We remark that what changes across the experiments is the deduplicator and not the model that is employed. The latter is in fact defined by the same architecture and hyperparameters, except for the learning rate that is adjusted as described in section 7 to account for the difference in virtual batch size. Refer to appendix A.4 for further details. ## 6 Experimental Results Results on internal data Table 3 reports the results on the three internal datasets. We can see how the only approach that reduces training times while also keeping model performance intact is removing duplicates at the batch level: training times are significantly reduced (−46.5% on average, −87% at best) and the model evaluation metric is almost on par (−0.1% on average, −0.2% at worst). On the other hand, naively removing duplicates at the dataset level is suboptimal: the strategy is consistently the best approach in terms of reduction in training time (−91.3% on average, −97% at best), but also the worst in terms of F1 score of the resulting model (−3.1% on average, −5.8% at worst). Finally, only keeping a logarithmic portion of the duplicates at the dataset level allows to reduce the training times less (−65.2% on average, −84% at best), but with a milder worsening of model performance (−0.3% on average, −0.5% at worst). We observe that introducing the sample weighting term in the loss when deduplicating batch-wise (BWU) does not seem to result in a significant improvement over the un-weighted variant (BU). We ![4_image_0.png](4_image_0.png) hypothesize this to be a consequence of leaving the pre-trained BERT encoder frozen during training. To test this hypothesis, we experiment also on a simpler LSTM-based model (Hochreiter and Schmidhuber, 1997), without pre-training (see table 7) and find that the weighted variant (BWU) is on average 22.8% faster than the un-weighted variant (BU) since it leads to faster convergence. Finally, it is worth mentioning that, if keeping perfectly on-par model performance is not critical, simpler deduplicators (e.g. DL) may achieve better training time reduction. Results on public data Table 4 reports the results on public data, where we observe similar patterns to the ones on internal data. Again, the only approach consistently reducing training times while also keeping model performance on par is BWU: it leads to a training time reduction of −23% on average, and −61.1% at best, while exhibiting no model performance drop on average, and −0.1% at worst. We observe again similar results between BWU and BU, and carry out the same test on a simpler LSTM also on public data (see table 8), with similar results. 
Removing duplicates at the dataset level is still the best approach in terms of time reduction (−32% on average, −68.7% at best), but at the cost of the worst model performance decrease (−9.2% on average, −21.2% at worst). The DL deduplicator is less competitive on public data, with an average and best time reduction of −32.2% and −68.7%, respectively, and a model performance decrease of −0.83% on average and −1.4% at worst.

**BS = 512**

| deduplicator | InternalMS Steps ↓ | Time ↓ | F1 ↑ | InternalVS Steps ↓ | Time ↓ | F1 ↑ | InternalXS Steps ↓ | Time ↓ | F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | – | – | – | – | – | – | – | – | – |
| DU | -34.0% | -31.0% | -0.7% | -83.0% | -83.0% | -0.6% | -87.0% | -87.0% | -0.2% |
| DWU | -86.0% | **-86.0%** | -5.8% | -93.0% | **-93.0%** | -1.9% | -97.0% | **-97.0%** | -2.1% |
| DL | -32.0% | -30.0% | -0.5% | -75.0% | -74.0% | -0.2% | -83.0% | -83.0% | -0.2% |
| BU | +11.0% | +15.0% | -0.0% | -58.0% | -57.0% | -0.2% | -73.0% | -70.0% | **-0.1%** |
| BWU | -3.0% | +0.0% | **+0.1%** | -52.0% | -50.0% | **-0.1%** | -75.0% | -72.0% | **-0.1%** |

**BS = 1024**

| deduplicator | InternalMS Steps ↓ | Time ↓ | F1 ↑ | InternalVS Steps ↓ | Time ↓ | F1 ↑ | InternalXS Steps ↓ | Time ↓ | F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | – | – | – | – | – | – | – | – | – |
| DU | -54.0% | -53.0% | -0.9% | -75.0% | -73.0% | -0.3% | -83.0% | -82.0% | -0.2% |
| DWU | -88.0% | **-87.0%** | -5.3% | -91.0% | **-91.0%** | -2.2% | -94.0% | **-94.0%** | -1.3% |
| DL | -47.0% | -45.0% | -0.4% | -76.0% | -75.0% | -0.3% | -84.0% | -84.0% | -0.3% |
| BU | -18.0% | -16.0% | -0.1% | -34.0% | -29.0% | -0.2% | -75.0% | -72.0% | **-0.1%** |
| BWU | -26.0% | -23.0% | **+0.0%** | -51.0% | -47.0% | **+0.1%** | -89.0% | -87.0% | **-0.1%** |

Table 3: Comparison of the deduplicators on the internal datasets. Highlighting in bold best results column-wise. While BWU is not the best technique in terms of training steps and time alone, it is the only one that can reduce training time while maintaining model performance.

**BS = 512**

| deduplicator | MITMS Steps ↓ | Time ↓ | F1 ↑ | MITVS Steps ↓ | Time ↓ | F1 ↑ | MITXS Steps ↓ | Time ↓ | F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | 696 | 99.2s | 0.915 | 1046 | 136s | 0.929 | 1987 | 224.6s | 0.955 |
| DU | -32.0% | -11.5% | -1.1% | -51.2% | -30.4% | -2.1% | -75.8% | -55.8% | -6.5% |
| DWU | -39.8% | **-21.2%** | -3.1% | -52.9% | **-32.5%** | -2.9% | -85.2% | **-72.3%** | -11.9% |
| DL | -26.2% | -4.4% | -0.3% | -36.9% | -16.5% | -0.4% | -66.9% | -46.0% | -0.9% |
| BU | -16.7% | -5.4% | -0.1% | -18.4% | -8.7% | **-0.1%** | -45.6% | -31.4% | **+0.3%** |
| BWU | -5.6% | +9.1% | **+0.2%** | -27.7% | -17.5% | **-0.1%** | -53.8% | -40.6% | +0.0% |

**BS = 1024**

| deduplicator | MITMS Steps ↓ | Time ↓ | F1 ↑ | MITVS Steps ↓ | Time ↓ | F1 ↑ | MITXS Steps ↓ | Time ↓ | F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | 450 | 124.6s | 0.914 | 700 | 178.4s | 0.927 | 1800 | 394.8s | 0.956 |
| DU | -46.6% | **-30.0%** | -1.4% | -61.4% | -47.8% | -3.0% | -86.7% | -75.2% | -8.8% |
| DWU | -48.2% | **-30.0%** | -4.4% | -65.6% | **-51.9%** | -11.9% | -86.8% | **-75.6%** | -21.2% |
| DL | -40.0% | -21.0% | -0.7% | -52.9% | -36.6% | -1.3% | -80.0% | -68.7% | -1.4% |
| BU | -20.0% | -8.7% | -0.1% | -31.4% | -20.3% | -0.1% | -70.8% | -61.7% | -0.2% |
| BWU | -20.0% | -8.3% | **+0.0%** | -31.4% | -19.3% | **+0.0%** | -70.4% | -61.1% | **-0.1%** |

Table 4: Comparison of the deduplicators on the artificially upsampled public datasets. The Base row reports absolute values; the other rows report relative changes with respect to Base.

Parameters evolution We hypothesize that including the frequency information in the loss, and thus having an almost identical loss signal to the non-deduplication baseline, leads to models having similar weights. To test this hypothesis, we retain parameter vectors during training with all the deduplicators; looking at table 5 we can see how indeed the BWU deduplicator trains a model that is the closest (in parameter space) to the model trained without deduplication, among the tested deduplicators. Furthermore, we find that this behavior is the result of a stronger property of our approach. Let us define the distance between trajectories in parameter space as

$$d_{trajectory}(\mathbf{A},\mathbf{B})={\frac{1}{n}}\sum_{i=1}^{n}\left\|\mathbf{A}_{i}-\mathbf{B}_{i}\right\|_{2}\qquad(9)$$

where $\mathbf{A}, \mathbf{B} \in \mathbb{R}^{n\times d}$ are matrices of $n$ parameter vectors of dimension $d$, and $\|\cdot\|_{2}$ is the Euclidean norm. Essentially, we are considering the average point-wise Euclidean distance between corresponding pairs of points along the two trajectories in parameter space. Then, our finding is that employing the BWU deduplicator during training leads to a trajectory in parameter space that is the closest one to that of a model trained without any deduplication (see again table 5). One can qualitatively appreciate this property by resorting to t-SNE (van der Maaten and Hinton, 2008) to project the d-dimensional vectors down to 2D, and visualizing the trajectories in parameter space, as can be seen in fig. 3.

| deduplicator | $d_{final}$ ↓ | $d_{trajectory}$ ↓ |
|--------------|---------------|--------------------|
| DU           | 56.8          | 40.4               |
| DWU          | 74.1          | 52.6               |
| DL           | 49.9          | 35.15              |
| BU           | 48.6          | 32.3               |
| BWU          | 45.5          | 29.7               |

Table 5: Distance between each deduplicator's model and the model trained without deduplication, in terms of final parameters ($d_{final}$) and training trajectory ($d_{trajectory}$, eq. (9)).
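As a worked example of Eq. (9), the sketch below (ours) computes the trajectory distance between two sequences of flattened parameter vectors with NumPy; checkpoint collection is assumed to happen elsewhere, and the toy trajectories are random placeholders.

```python
import numpy as np

def trajectory_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Eq. (9): mean point-wise Euclidean distance between two parameter trajectories.

    A, B have shape (n, d): n checkpoints of d flattened model parameters,
    recorded at the same training steps for the two runs being compared.
    """
    assert A.shape == B.shape
    return float(np.linalg.norm(A - B, axis=1).mean())

# Toy usage with random "trajectories" of 5 checkpoints in a 10-parameter model.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(5, 10))
dedup_run = baseline + 0.01 * rng.normal(size=(5, 10))
print(trajectory_distance(baseline, dedup_run))
```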
## 7 Discussion

Influence of batch size As can be seen in Figure 2, the expected number of duplicates increases with the batch size. Intuitively, given a finite set of elements that contains repetitions, the larger a sample, the more likely it is to contain repetitions. The plot clearly shows how the greatest increase in the number of duplicates is obtained when passing from small batch sizes to over 1000. The maximum possible reduction in the number of batches would trivially be obtained when the batch size is equal to the total number of samples, in which case the reduction would be exactly the redundancy ratio of the dataset.

Learning rate It is advisable to scale the learning rate when increasing the batch size, as the gradient computed over a larger batch size is a more accurate estimate of the real gradient and should therefore provide a more reliable direction. Small batches naturally provide more chaotic gradients, and therefore a large learning rate may result in "overshooting" and missing the minima. A common approach is to increase the learning rate as the square root of the batch size increase factor (Krizhevsky, 2014), or to scale it linearly with the batch size (Goyal et al., 2017). We adopt the latter in this work, multiplying the learning rate by the expected increase in batch size computed as in section 3.

## 8 Conclusions

In this paper, we address the problem of reducing training time when using redundant datasets, proposing a novel approach, agnostic to task and model, that results in a significant time reduction without waiving performance. The approach is compared with various deduplication methods on the task of Named Entity Recognition, on both industrial and public datasets. The comparison shows that our approach is the only one to significantly reduce training time while maintaining the same model performance metrics, with observed boosts in training time of 23%, 47% and 87% on datasets used to train models for a large-scale conversational assistant. Various analyses are conducted, on the trade-off with batch size and on how learning trajectories are modified. We also provide a theoretically sound procedure to estimate the expected reduction, allowing practitioners to assess the benefits before employing the method.
Finally, a modular and reusable codebase is released to foster further research in the area. We believe that this approach can have a high impact on the industry, where large, expensive models are often trained on datasets containing redundant user queries or items. This reduction in training times may in fact allow for faster experimental iterations while cutting times and costs, also reducing the carbon footprint of deep learning models. As future work we plan to (i) leverage the generality of the approach on other tasks that exhibit high redundancy in data, like semantic search and (ii) study a more relaxed definition of equality between two samples, to allow considering two samples the same if they only differ up to a tolerated quantity. ## Limitations An important limitation of our contribution lies in breadth of the experimental validation. We decided to focus our experimentation on the setup where we encountered the issue of redundant datasets: NER for a large-scale conversational assistant. While it is true that our approach does not make any assumption neither about the task nor about the model architecture, and while we also provide a rigorous proof to support the estimated gain in terms of training time, the extent to which our approach remains the best time-accuracy trade-off when other tasks are considered is not explored in this work. An additional limitation stems from the lack of absolute results on the internal datasets, as the latter can only be disclosed as relative improvements over a baseline due to internal policy. We attempt to mitigate this by reporting full results on an open dataset, but as we mention we have to resort to upsampling given the artificial deduplication that manually curated datasets incur before publishing. ## Ethics Statement The carbon footprint generated by the NLP community has shown a concerning trend in recent years. Similarly, the development and maintenance of production models using large neural networks in an industry setting has a non negligible negative impact on our planet. While our technique offers a significant advantage only in presence of duplicates in the training data, which might not always be the case, we see it as a small but tangible contribution towards a more sustainable research. Furthermore, a reduction in training times also directly translates to a reduction in costs, therefore works like ours also contribute to the democratization of language models development and their applications. Finally we note how our approach does not lead to the introduction of any new bias, since it leaves the data distribution observed by the model during training identical to the one of the initial dataset. ## Acknowledgements Donato Crisostomi is partially supported by the PRIN 2020 project n.2020TA3K9N. "LEGO.AI". ## References Gábor Borbély and András Kornai. 2019. Sentence length. In Proceedings of the 16th Meeting on the Mathematics of Language, pages 114–125, Toronto, Canada. Association for Computational Linguistics. Léon Bottou and Olivier Bousquet. 2007. The tradeoffs of large scale learning. *Advances in neural information processing systems*, 20. 
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. 2010. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys '10, page 39–46, New York, NY, USA. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on* Learning Representations. X. G. Duan. 2021. Better understanding of the multivariate hypergeometric distribution with implications in design-based survey sampling. Fartash Faghri, David Duvenaud, David J. Fleet, and Jimmy Ba. 2020. A study of gradient variance in deep learning. *CoRR*, abs/2007.04532. WA Falcon. 2019. Pytorch lightning. Marco Ferrante and Nadia Frigo. 2012. On the expected number of different records in a random sample. *arXiv: Probability*. Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch SGD: training imagenet in 1 hour. *CoRR*, abs/1706.02677. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. *Neural Computation*, 9(8):1735–1780. W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, J. K. Terry, Furong Huang, and Tom Goldstein. 2019. Understanding generalization through visualizations. Viacheslav Khomenko, Oleg Shyshkov, Olga Radyvonenko, and Kostiantyn Bokhan. 2016. Accelerating recurrent neural network training using sequence bucketing and multi-gpu data parallelization. In 2016 IEEE First International Conference on Data Stream Mining & Processing (DSMP), pages 100–103. IEEE. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Alex Krizhevsky. 2014. One weird trick for parallelizing convolutional neural networks. *CoRR*, abs/1404.5997. Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better. *CoRR*, abs/2107.06499. Jingjing Liu, Panupong Pasupat, Scott Cyphers, and Jim Glass. 2013. Asgard: A portable architecture for multilingual dialogue systems. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8386–8390. Greg Pass, Abdur Chowdhury, and Cayley Torgeson. 2006. A picture of search. 
In Proceedings of the 1st International Conference on Scalable Information Systems, InfoScale '06, page 1–es, New York, NY, USA. Association for Computing Machinery. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. David A Schum. 2001. The evidential foundations of probabilistic reasoning. Northwestern University Press. Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, Hyung-Jin Kim, Youngsok Kim, and Jinho Lee. 2023. Optimus-cc: Efficient large nlp model training with 3d parallelism aware communication compression. Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In *Proceedings of the 57th* Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142– 147. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(86):2579–2605. Linnan Wang, Yi Yang, Martin Renqiang Min, and Srimat T. Chakradhar. 2016. Accelerating deep neural network training with inconsistent stochastic gradient descent. *CoRR*, abs/1603.05544. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. Qian Ya-Guan, Ma Jun, Zhang Xi-Min, Pan Jun, Zhou Wu-Jie, Wu Shu-Hui, Yun Ben-Sheng, and Lei JingSheng. 2020. Emsgd: An improved learning algorithm of neural networks with imbalanced data. *IEEE* Access, 8:64086–64098. Omry Yadan. 2019. Hydra - a framework for elegantly configuring complex applications. Github. Yuye Zhang and Alistair Moffat. 2006. Some observations on user search behaviour. Aust. J. Intell. Inf. Process. Syst., 9:1–8. ## A Appendix A.1 Schedule Generation Algorithm 1 describes in pseudo-code the proposed procedure to generate a batching schedule that removes duplicates at the batch level. Notice that we consider two utterances identical when their *annotation*, i.e. both sequence of word-level tokens and corresponding NER labels, are identical. 
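For concreteness, the unique sample id used in Procedure 1 below could be obtained by hashing the full annotation; the following is a minimal illustrative Python sketch (not the released implementation), where the function name and the tab-separated encoding are our own choices.

```python
import hashlib

def sample_id(tokens, labels):
    # Two samples share an id only if both their token sequence and
    # their NER label sequence are exactly identical.
    key = "\t".join(tokens) + "\n" + "\t".join(labels)
    return hashlib.sha1(key.encode("utf-8")).hexdigest()
```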
Procedure 1 Proposed deduplication approach

Input: dataset data, batch size B
Output: schedule for dataloader

  occurrences ← []
  schedule ← []
  Ib ← {} ▷ IDs seen in batch
  sample_posb ← {} ▷ sample pos in batch
  occb ← [] ▷ occurrences in batch
  C ← B ▷ capacity of current batch
  for all sample s in data do
    spos ← position of the sample
    sid ← unique id of the sample
    if sid ∉ Ib then
      schedule += spos
      Ib += sid
      sample_posb[sid] ← len(occb)
      occb.append(1)
      C −= 1
    else
      occb[sample_posb[sid]] += 1
    end if
    if (C = 0) ∨ (s is the last sample) then
      occurrences.append(occb)
      C ← B
      occb ← []
      sample_posb ← {}
      Ib ← {}
    end if
  end for
  return schedule, occurrences

In practice, the schedule is then used to load samples during training, which are then fed to the model to obtain class scores z ∈ R^{N×C×L}, with N, L, C the number of samples, the sequence length, and the number of classes, respectively. Then, the occurrences (normalized as relative frequencies f_n) are integrated in the loss function as follows

$$\mathcal{L}=-\sum_{n=1}^{N}f_{n}\cdot\frac{1}{L}\sum_{j=1}^{L}\log\frac{\exp(z_{n,j,y_{n,j}})}{\sum_{c=1}^{C}\exp(z_{n,j,c})}\tag{10}$$

where y_{n,j} is the ground-truth label of the j-th token of the n-th sample.

![9_image_0.png](9_image_0.png)

## A.2 Derivation Of Boost Estimate

The aim of this section is to formally derive the formula for the expected virtual batch size reported in Equation (8), given the distribution of duplicates in the dataset reported in section 4.

Problem Formalisation Consider a dataset D = {x_1, . . . , x_N} with N objects, some of which might be repeated more than once. As described in section 3, we fill a batch b by sampling B_virtual objects O = {x_{p_1}, . . . , x_{p_{B_virtual}}} so that O contains B distinct elements. We want to compute the expected value E[B_virtual]. This problem has some analogies with a generalized version of the coupon collector problem, in which one wants to find how many samples with replacement are required to obtain a certain number of unique objects (Ferrante and Frigo, 2012), but in our case there is no replacement. To the best of our knowledge there is no closed-form solution for this version of the problem, and therefore we derive a solution for the expected number of duplicates d = E[N^dup] in a batch b of size n. This way, a numerical solution of the original problem can be found by iterating over the possible values of n = 1, . . . , N, up to the one that yields u = B unique samples.

Estimate Derivation Consider now a dataset D of size N instead as a collection C = {o_1, . . . , o_C} of C distinct objects, each associated with its number of occurrences k_1, . . . , k_C such that k_1 + · · · + k_C = N. Consider a random sample without replacement X_1, . . . , X_n over D, with n < N representing the batch size. Now introduce C counting variables Z_1, . . . , Z_C, such that Z_i counts the occurrences of o_i in the sample. Formally:

$$Z_{i}=\sum_{j=1}^{n}\mathbb{1}(X_{j}=o_{i})\tag{11}$$

The counting variables introduced above follow, by definition, a multivariate hypergeometric distribution (Duan, 2021), characterized by probability mass function

$$p_{Z_{1},\ldots,Z_{C}}(z_{1},\ldots,z_{C})=\frac{\prod_{i=1}^{C}\binom{k_{i}}{z_{i}}}{\binom{N}{n}}\tag{12}$$

and expected value

$$\mathbb{E}(Z_{i})=n\frac{k_{i}}{N}.\tag{13}$$

We want to compute the expected number of duplicates in the sample, d = E[N^dup].
The number of duplicates of o_i in the sample is Z_i − 1, since the first occurrence is not a duplicate; therefore in total we have:

$$N^{\mathrm{dup}}=\sum_{i=1}^{C}\max\left(Z_{i}-1,0\right).\tag{14}$$

By linearity of expectations, it holds that:

$$\mathbb{E}\left(N^{\mathrm{dup}}\right)=d=\sum_{i=1}^{C}\mathbb{E}\left(\max\left(Z_{i}-1,0\right)\right)\tag{15}$$

Let d_i = E(max(Z_i − 1, 0)); then by the chain rule of expectations (also known as the *Law Of The Unconscious Statistician* (Schum, 2001)) it follows:

$$d_{i}=\sum_{j=0}^{k_{i}}\max(j-1,0)\,p_{Z_{i}}(j)$$

where p_{Z_i}(j) is the probability mass of Z_i in j. Now, noticing that the first two contributions to the sum are 0, we have

$$d_{i}=\sum_{j=2}^{k_{i}}\max\left(j-1,0\right)p_{Z_{i}}(j)=\left[\mathbb{E}\left(Z_{i}\right)-p_{Z_{i}}(1)\right]-\left[1-p_{Z_{i}}(0)-p_{Z_{i}}(1)\right]\tag{16}$$

and substituting from eq. (13) we get

$$d_{i}=n\frac{k_{i}}{N}+p_{Z_{i}}(0)-1.\tag{17}$$

The value p_{Z_i}(0) denotes the probability of object o_i having zero elements in the sample. Now, if the batch size is larger than the number of objects different from o_i, i.e. k_1 + · · · + k_{i−1} + k_{i+1} + · · · + k_C < B, then the sample will contain at least one occurrence of object o_i, and in that case p_{Z_i}(0) = 0. However, in real-world scenarios this is quite unlikely, since usually B ≪ k_1 + · · · + k_{i−1} + k_{i+1} + · · · + k_C, therefore we have

$$p_{Z_{i}}(0)=\frac{\binom{N-k_{i}}{n}}{\binom{N}{n}}\tag{18}$$

and putting it all together, we get:

$$d=\sum_{i=1}^{C}\left(n\frac{k_{i}}{N}-1+\frac{\binom{N-k_{i}}{n}}{\binom{N}{n}}\right).\tag{19}$$

As mentioned before, with this closed-form expression we now have a proxy to numerically compute the actual quantity of interest, that is E[B_virtual] = n = u + d = B + N^dup. See algorithm 2 for the procedure to compute this numerical solution.

Procedure 2 Estimated boost computation

Input: dataset distribution k_1, . . . , k_C, batch size B
Output: estimated boost

  N ← Σ_{i=1}^{C} k_i
  l ← 0
  r ← N
  while r > l do ▷ binary search
    n ← ⌈(l + r)/2⌉
    Compute d with eq. (19)
    if n − d = B then
      r ← n/B
      return (1 − 1/r)
    else if n − d > B then
      r ← n − 1
    else
      l ← n + 1
    end if
  end while

## A.3 Effect Of Frequency Distribution

We remark that the reduction factor depends directly on the number of duplicates in the batch, and not on the overall redundancy factor. This can be appreciated in fig. 4. Three datasets with the same redundancy factor, but different skewness, lead to different virtual batch sizes. In fact, the distribution with the most skewness in frequency results in the largest number of duplicates, as it is easier to repeatedly draw a very frequent object than to do so for one of the many different objects contributing equally to the total redundancy in the dataset.

## A.4 Implementation Details

The experiments have been run on p3.2xlarge EC2 instances2, equipped with a NVIDIA Tesla V100 GPU3. As optimization framework, PyTorch (Paszke et al., 2019) has been used, along with PyTorch Lightning (Falcon, 2019) and Hydra (Yadan, 2019) for easier and faster development and experiment execution. Table 6 reports the hyperparameters used to train the models with the various deduplicators.
We remark that the deduplicators themselves have no hyperparameters, therefore the table lists the hyperparameters of the models. We used pretrained distilled BERT (Sanh et al., 2019) models from HuggingFace (Wolf et al., 2019) as well as LSTM-based (Hochreiter and Schmidhuber, 1997) models trained from scratch. Both types of models feature a two-layer, fully-connected MLP mapping word embeddings into label-space. As for the BERT-based models, the encoder is a distilbert-base-cased4 for the English-only public dataset, while a distilbert-base-multilingual-cased5 has been used for the internal data. In both cases the encoder weights are not updated during training. Subword token-level embeddings are obtained by summing the hidden states of the last 3 layers of the encoder; then, average pooling is used to obtain word-level embeddings. As for the LSTM-based models, they use an embedding layer exploiting a simple word-level vocabulary (built over the whole corpus).

The models are trained with the Adam (Kingma and Ba, 2014) optimizer to convergence with early stopping, monitoring the loss values on a held-out validation set. The learning rate is increased for the models trained with the Batchwise Weighted Unique deduplicator, as mentioned in section 5, to reflect the increase in (virtual) batch size. Note that this is not the case for the Batchwise Unique deduplicator: while both fill the batch with unique samples, ignoring duplicates, only the former has an impact on the training dynamics, due to the introduction of the sample-wise weight in the loss. The latter neglects the duplicates' contribution to the loss and therefore does not lead to an actual increase in the batch size, while the former does, hence the need to increase the learning rate accordingly.

2https://aws.amazon.com/ec2/instance-types/p3/
3https://www.nvidia.com/en-us/data-center/v100/
4https://huggingface.co/distilbert-base-cased
5https://huggingface.co/distilbert-base-multilingual-cased

## A.5 Additional Results

After observing that sample weighting does not result in a significant improvement over the unweighted batch-wise unique deduplicator, we investigate whether this is a flaw of the proposed deduplication technique or a consequence of using a pre-trained BERT encoder. As mentioned in Section 6, the latter turns out to be the case. In fact, training a simpler LSTM-based model from scratch, we get the results reported in Table 8. We can see how the weighted variant is indeed consistently on par with or superior to the unweighted variant in terms of predictive performance, despite needing consistently fewer training steps, for both tested batch sizes. The effect becomes more noticeable as the redundancy and skewness in the dataset increase, as we would expect, with BU and BWU being almost comparable on the MS dataset with batch size 512, while the latter overcomes the former using almost half the training steps on the XS dataset with batch size 1024. While the margin between the BU and BWU deduplicators on the internal data is not as close as the one observed on the public data, we repeat the experiment with the LSTM-based model on the internal data as well. The results are reported in Table 7 and are quite similar to the ones reported in Table 8 on the public dataset.
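Since the weighted variants discussed above all rely on the loss in eq. (10), the following minimal PyTorch sketch (ours, not the released codebase) shows one way such a frequency-weighted token-level cross-entropy could be implemented; the tensor layout (samples, length, classes) is our own choice.

```python
import torch
import torch.nn.functional as F

def weighted_token_ce(logits, labels, occurrences):
    """
    logits:      (N, L, C) class scores for the N unique samples kept in the batch
    labels:      (N, L)    gold label ids per token
    occurrences: (N,)      duplicate counts collected while building the batch
    """
    freqs = occurrences.float() / occurrences.sum()   # relative frequencies f_n
    per_token = F.cross_entropy(                      # -log softmax at the gold label
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
    ).view(labels.shape)                              # back to (N, L)
    per_sample = per_token.mean(dim=1)                # (1/L) sum over tokens
    return (freqs * per_sample).sum()                 # sum_n f_n * per-sample loss
```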
| Parameter | BERT | LSTM |
|-------------------------|-----------------|-----------------|
| Learning Rate | 1e-3 | 1e-3 |
| Optimizer | Adam | Adam |
| Max epochs | 30 | 30 |
| Embedding size | 768 | 50 |
| Hidden size | 256 | 256 |
| Dropout | 0.5 | 0.2 |
| Activation | ReLU | ReLU |
| Validation split | 0.1 | 0.1 |
| Early stopping metric | Validation loss | Validation loss |
| Early stopping delta | 1e-4 | 1e-4 |
| Early stopping patience | 3 | 3 |
| Clipping gradient norm | 10 | 10 |

Table 6: Hyperparameter values for the two types of deep neural network used in the experiments.

BS = 512:

| deduplicator | InternalMS Steps ↓ | InternalMS Time ↓ | InternalMS F1 ↑ | InternalVS Steps ↓ | InternalVS Time ↓ | InternalVS F1 ↑ | InternalXS Steps ↓ | InternalXS Time ↓ | InternalXS F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | - | - | - | - | - | - | - | - | - |
| DU | -45.0% | -44.0% | -0.1% | -77.0% | -76.0% | -0.1% | -84.0% | -83.0% | -0.2% |
| DWU | -61.0% | -58.0% | -3.6% | -87.0% | -85.0% | -1.2% | -92.0% | -92.0% | -0.8% |
| DL | -36.0% | -34.0% | -0.1% | -72.0% | -72.0% | -0.1% | -84.0% | -83.0% | -0.1% |
| BU | -3.0% | +4.0% | +0.0% | -32.0% | -21.0% | +0.0% | -69.0% | -57.0% | +0.0% |
| BWU | -7.0% | +4.0% | +0.1% | -37.0% | -27.0% | +0.0% | -77.0% | -67.0% | -0.1% |

BS = 1024:

| deduplicator | InternalMS Steps ↓ | InternalMS Time ↓ | InternalMS F1 ↑ | InternalVS Steps ↓ | InternalVS Time ↓ | InternalVS F1 ↑ | InternalXS Steps ↓ | InternalXS Time ↓ | InternalXS F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | - | - | - | - | - | - | - | - | - |
| DU | -45.0% | -44.0% | -0.2% | -75.0% | -75.0% | -0.2% | -81.0% | -81.0% | -0.2% |
| DWU | -60.0% | -56.0% | -3.5% | -85.0% | -82.0% | -1.1% | -90.0% | -89.0% | -0.6% |
| DL | -33.0% | -31.0% | -0.2% | -71.0% | -70.0% | -0.1% | -79.0% | -79.0% | -0.1% |
| BU | +0.0% | +13.0% | +0.0% | -42.0% | -27.0% | -0.0% | -69.0% | -50.0% | -0.0% |
| BWU | -19.0% | -6.0% | -0.0% | -46.0% | -28.0% | -0.0% | -80.0% | -68.0% | -0.0% |

Table 7: Comparison of the deduplicators on the internal datasets, when training a LSTM-based model from scratch.

BS = 512:

| deduplicator | MITMS Steps ↓ | MITMS Time ↓ | MITMS F1 ↑ | MITVS Steps ↓ | MITVS Time ↓ | MITVS F1 ↑ | MITXS Steps ↓ | MITXS Time ↓ | MITXS F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | 504 | 16s | 0.925 | 662 | 20s | 0.953 | 1526 | 47s | 0.974 |
| DU | 387 | 12s | 0.912 | 422 | 13s | 0.939 | 441 | 14s | 0.945 |
| DWU | 297 | 10s | 0.828 | 316 | 11s | 0.867 | 377 | 13s | 0.903 |
| DL | 392 | 12s | 0.92 | 475 | 15s | 0.948 | 595 | 19s | 0.969 |
| BU | 451 | 15s | 0.924 | 546 | 18s | 0.951 | 655 | 26s | 0.973 |
| BWU | 398 | 14s | 0.925 | 483 | 16s | 0.954 | 403 | 16s | 0.975 |

BS = 1024:

| deduplicator | MITMS Steps ↓ | MITMS Time ↓ | MITMS F1 ↑ | MITVS Steps ↓ | MITVS Time ↓ | MITVS F1 ↑ | MITXS Steps ↓ | MITXS Time ↓ | MITXS F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | 354 | 15s | 0.92 | 489 | 21s | 0.952 | 979 | 43s | 0.973 |
| DU | 240 | 11s | 0.9 | 240 | 11s | 0.913 | 240 | 11s | 0.903 |
| DWU | 211 | 10s | 0.821 | 227 | 11s | 0.857 | 240 | 12s | 0.888 |
| DL | 270 | 12s | 0.912 | 330 | 15s | 0.942 | 360 | 16s | 0.964 |
| BU | 352 | 17s | 0.919 | 437 | 22s | 0.947 | 421 | 28s | 0.971 |
| BWU | 304 | 15s | 0.922 | 340 | 17s | 0.949 | 223 | 15s | 0.974 |

Table 8: Comparison of the deduplicators on public datasets, when training a LSTM-based model from scratch.
BS = 512:

| deduplicator | MITMS Steps ↓ | MITMS Time ↓ | MITMS F1 ↑ | MITVS Steps ↓ | MITVS Time ↓ | MITVS F1 ↑ | MITXS Steps ↓ | MITXS Time ↓ | MITXS F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | 696 | 99.2s | 0.915 | 1046 | 136.0s | 0.929 | 1987 | 224.6s | 0.955 |
| DU | 473 | 87.8s | 0.905 | 510 | 94.6s | 0.91 | 480 | 99.2s | 0.893 |
| DWU | 419 | 78.2s | 0.887 | 493 | 91.8s | 0.902 | 294 | 62.2s | 0.841 |
| DL | 514 | 94.8s | 0.912 | 660 | 113.6s | 0.925 | 657 | 121.2s | 0.946 |
| BU | 580 | 93.8s | 0.914 | 854 | 124.2s | 0.928 | 1082 | 154.0s | 0.958 |
| BWU | 657 | 108.2s | 0.917 | 756 | 112.2s | 0.928 | 918 | 133.4s | 0.955 |

BS = 1024:

| deduplicator | MITMS Steps ↓ | MITMS Time ↓ | MITMS F1 ↑ | MITVS Steps ↓ | MITVS Time ↓ | MITVS F1 ↑ | MITXS Steps ↓ | MITXS Time ↓ | MITXS F1 ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Base | 450 | 124.6s | 0.914 | 700 | 178.4s | 0.927 | 1800 | 394.8s | 0.956 |
| DU | 240 | 87.2s | 0.901 | 270 | 93.2s | 0.899 | 240 | 98.0s | 0.872 |
| DWU | 233 | 87.2s | 0.874 | 241 | 85.8s | 0.817 | 238 | 96.2s | 0.753 |
| DL | 270 | 97.2s | 0.907 | 330 | 113.2s | 0.915 | 360 | 123.6s | 0.943 |
| BU | 360 | 113.8s | 0.915 | 480 | 142.2s | 0.926 | 525 | 151.2s | 0.954 |
| BWU | 360 | 114.2s | 0.914 | 480 | 144.0s | 0.927 | 532 | 153.6s | 0.955 |

Table 9: Absolute results for the comparison of the deduplicators on the public datasets presented in table 4.
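As a numerical companion to eq. (19) and Procedure 2, the following rough Python sketch (ours, not the paper's released code) estimates the expected number of duplicates and the resulting boost; the binary-search stopping rule is slightly simplified relative to the pseudocode, and the example distribution is synthetic.

```python
from math import comb

def expected_duplicates(ks, n):
    # Eq. (19): expected number of duplicates in a sample of size n drawn
    # without replacement from a dataset whose object counts are ks.
    N = sum(ks)
    denom = comb(N, n)
    return sum(n * k / N - 1 + comb(N - k, n) / denom for k in ks)

def estimated_boost(ks, B):
    # Find the smallest sample size n whose expected number of unique
    # elements (n - d) reaches the desired batch size B, then report 1 - B/n.
    N = sum(ks)
    lo, hi = B, N
    while lo < hi:
        n = (lo + hi) // 2
        if n - expected_duplicates(ks, n) < B:
            lo = n + 1
        else:
            hi = n
    return 1 - B / lo

# Example with a synthetic, skewed duplicate distribution:
print(estimated_boost([100] * 50 + [1] * 500, B=512))
```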
howell-etal-2023-distilled
The economic trade-offs of large language models: A case study
https://aclanthology.org/2023.acl-industry.24
Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. With their ability to handle large context windows, Large Language Models (LLMs) are a natural fit for this use case. However, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model's utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM - prompt engineering, fine-tuning, and knowledge distillation - using feedback from the brand's customer service agents. We find that the usability of a model's responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space.
# The Economic Trade-Offs Of Large Language Models: A Case Study Kristen Howell, Gwen Christian, Pavel Fomitchov, Gitit Kehat, Julianne Marzulla, Leanne Rolston, Jadin Tredup, Ilana Zimmerman, Ethan Selfridge and **Joseph Bradley** LivePerson Inc., Seattle, Washington, U.S.A {khowell, gchristian, pfomitchov, jmarzulla, jtredup, izimmerman, eselfridge, jbradley}@liveperson.com {gitit.kehat, leannerolston}@gmail.com ## Abstract Contacting customer service via chat is a common practice. Because employing customer service agents is expensive, many companies are turning to NLP that assists human agents by auto-generating responses that can be used directly or with modifications. Large Language Models (LLMs) are a natural fit for this use case; however, their efficacy must be balanced with the cost of training and serving them. This paper assesses the practical cost and impact of LLMs for the enterprise as a function of the usefulness of the responses that they generate. We present a cost framework for evaluating an NLP model's utility for this use case and apply it to a single brand as a case study in the context of an existing agent assistance product. We compare three strategies for specializing an LLM - prompt engineering, fine-tuning, and knowledge distillation - using feedback from the brand's customer service agents. We find that the usability of a model's responses can make up for a large difference in inference cost for our case study brand, and we extrapolate our findings to the broader enterprise space. ## 1 Introduction Amidst increased automation, human agents continue to play an important role in providing excellent customer service. While many conversations are automated in text-based customer support, others are routed to human agents who can handle certain customer concerns more effectively. Agents often handle multiple conversations at once, consulting customer account information and brand policies while maintaining these conversations. As agents are expensive to staff, many companies are seeking ways to make their work more efficient. LivePerson's Conversation Assist,1illustrated in Figure 1, accelerates agents by automatically generating suggestions that the agent can either send, edit and then send, or ignore. Conversation Assist can both reduce agent response time and improve response quality, as a well-trained model may provide more consistent, higher quality responses than inexperienced agents or agents adversely impacted by external factors. These benefits lead to greater cost savings and increased customer satisfaction (CSAT) scores, not to mention providing a supervisory mechanism that is critical for brand control and model improvement. Large Language Models (LLMs) are a natural fit for this technology, as they have achieved high performance on response generation tasks (Adiwardana et al., 2020; Hosseini-Asl et al., 2020; Zhang et al., 2020, inter alia), but they are expensive to train and serve. For example, the inference cost for each response using a distilled GPT-2 model and an Nvidia A100 GPU is ¢.0011,2 while the inference cost using the GPT-3-based Davinci model through OpenAI's API is ¢1.10 (OpenAI, 2023b).3 LLM economics and enterprise applications are highly fluid. First, individual partnership agreements may differ from the published API cost, and the rapid pace of innovation in the space will necessarily impact the cost of training and serving these models. 
Second, as brands vary widely, a useful agent assistance model must be customized to the brand's use case and performance requirements. We propose a simple and flexible cost framework that can be applied to various LLM and brand scenarios. This framework, Expected Net Cost Savings (ENCS), combines the probability and cost savings of an agent accepting or editing a response with the cost of generating the response. ENCS can be applied at the message level or in the aggregate. With one brand as a case study, we explore ENCS with various methods of model customization. Using feedback from the brand's customer service agents, we evaluated fine-tuning, prompt engineering, and distillation to adapt and optimize GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020; OpenAI, 2023a), and Cohere (Cohere, 2023a). These strategies can lead to an agent usage rate of 83% (including both direct use and editing) and an annual cost savings of $60,000 for our case-study brand - 60% of their total agent budget. We generalize this case study to a broader range of brands and models. We find that low perplexity correlates with the probability that an agent will use a response, and we extrapolate from this finding to use perplexity to estimate the ENCS for additional model customization strategies. We apply ENCS to each configuration, and while models, prices, and use cases will change over time, we expect that this framework can be continuously leveraged for decision making as technology evolves.

1https://developers.liveperson.com/conversation-assist-overview.html

![1_image_0.png](1_image_0.png)

## 2 Related Work

Transformers (Vaswani et al., 2017) have dominated response generation tasks: DialogGPT (Zhang et al., 2020), Meena (Adiwardana et al., 2020), SOLOIST (Peng et al., 2021), BlenderBot (Roller et al., 2021), PLATO-XL (Bao et al., 2022), LaMDA (Thoppilan et al., 2022), GODEL (Peng et al., 2022). Each of these approaches fine-tunes a large pre-trained LM to task-oriented dialog or chit chat using curated dialogs. In some cases, additional tasks, such as the discriminative training tasks of Thoppilan et al. 2022, are also used. When data is not available for fine-tuning, prompting with a single example has proven quite effective (Min et al., 2022), and for large enough models, prompting that demonstrates breaking tasks into discrete components (Wei et al., 2022) has performed on par with fine-tuned models (Chowdhery et al., 2022). The size of these LLMs plays a significant role in their high performance (Chowdhery et al., 2022), but in a deployed setting, this size can be quite costly. Quantization (Whittaker and Raj, 2001; Shen et al., 2020), pruning (Han et al., 2015, 2016) and knowledge distillation (Hinton et al., 2015; Sanh et al., 2019) are common strategies for size reduction with minimal impact on performance. Here we focus specifically on distillation using a language modeling task to reduce model size while simultaneously adapting the model to the data, following Ryu and Lee (2020) and Howell et al. (2022).

Response generation is difficult to evaluate holistically. Some have focused on relevance and level of detail (Zhang et al., 2020; Adiwardana et al., 2020; Thoppilan et al., 2022), humanness (Zhang et al., 2020; Roller et al., 2021) and overall coherence or interestingness (Bao et al., 2022; Thoppilan et al., 2022). In contrast, we follow Thoppilan et al. (2022) and Peng et al.
(2022) who consider helpfulness and usefulness as broader measures of response quality, but we ground these judgements in the customer service use case by having real agents judge the usefulness of model outputs.

## 3 Expected Net Cost Savings (ENCS)

ENCS combines model performance, model cost, and agent cost: If an agent saves time by using a model's response, then there is a cost savings.

![2_image_0.png](2_image_0.png)

More formally, ENCS is defined as the probability that a response is used (P(U)) multiplied by the savings in dollars for each used response (SU), less the cost of generating that response (C), as in (1).

$$ENCS=P(U)*S_{U}-C\tag{1}$$

Because agents are not limited to simply using a response as-is but may also choose to edit the response or ignore it altogether, equation (1) may be modified to account for the probability and savings associated with editing (P(E) and SE) or the cost of ignoring (P(I) and SI)4 as well:

$$ENCS=(P(U)*S_{U})+(P(E)*S_{E})+(P(I)*S_{I})-C\tag{2}$$

We can estimate S from the agent's hourly rate (R), the average time it takes for agents to respond to a message without Conversation Assist (Tr), and the amount of time an agent spends for each accepted, edited, or ignored message (Tx).

$$S_{x}=R(T_{r}-T_{x})\tag{3}$$

Figure 2 provides a toy example of this calculation.

## 3.1 Simplifying Assumptions

This model makes a number of simplifying assumptions. We assume that agents always have conversations to respond to or some other work to do. We exclude the problem of workforce optimization from our framework, noting that when fewer agents are needed to handle the conversational traffic, workforce can be reduced. We also exclude R&D cost, but return to this factor in section 5.

4In most cases, SI is a negative number, as reading a response and choosing not to use it would cost time and money.

Furthermore, we omit any discussion of the cost of an agent using an inappropriate or factually incorrect response. For the purposes of this model, we assume that agents read all suggestions carefully, but a deeper analysis of the risk and cost of these errors is a critical area for further study.

## 4 Case Study

We focus on a single brand to evaluate the use of LLMs for Conversation Assist and explore the application of ENCS for making product decisions. We evaluate three model customization strategies using manual ratings from brand agents. We then evaluate how well these ratings relate to perplexity and use this to assess a larger set of models. Finally, we estimate ENCS and discuss the implications.

## 4.1 Case Study Brand

We partnered with a single brand, who we will refer to as Anonymous Retailer (AR), for this case study. AR's customer base includes both consumers and sellers who consign items through AR's platform. Because AR's agents are trained across different customer concern categories, they can provide expert feedback on a wide range of data. At the time of writing, AR has about 350 human agents who use LivePerson's chat platform. AR supports about 15,000 conversations per month, and uses chat bots for simple tasks and routing, while their human agents send 100,000 messages per month on average. In comparison, the average number of conversations per month for brands on LivePerson's platform is 34,000, with a median of 900 monthly conversations per brand and a standard deviation of 160.

## 4.2 Data Sets

We constructed three datasets: brand-specific training, brand-specific test, and general training.
We de-identified data, replacing each entity with a random replacement. For the test set, we manually ensured that the de-identification was internally consistent across the conversation for agent and consumer names, addresses, and order numbers. The brand-specific data comprises English customer service conversations from 2022 that include human agent and bot messages. We filtered these conversations to ensure that they had at least two agent turns, more human agent than bot messages, and a positive Meaningful Conversation Score.5

5For more information on Meaningful Conversation Score, see: https://knowledge.liveperson.com/data-reporting-meaningful-conversation-score-(mcs)-meaningful-conversation-score-(mcs)-overview.html/

| Model Name | Fine-tuning: Dataset | Fine-tuning: # Convs | Fine-tuning: # Steps | Distillation: Dataset | Distillation: # Convs | Distillation: # Steps | 2nd Fine-tuning: Dataset | 2nd Fine-tuning: # Convs | 2nd Fine-tuning: # Steps |
|---|---|---|---|---|---|---|---|---|---|
| GPT-2 BFT BD∗ | brand | 100,059 | 15,000 | brand | 100,059 | 67,014 | | | |
| Cohere PE∗ | | | | | | | | | |
| GPT-3 PE∗ | | | | | | | | | |
| GPT-2 BFT | brand | 100,059 | 15,000 | | | | | | |
| GPT-2 BFT BF BFT | brand | 100,059 | 34,000 | brand | 100,059 | 67,014 | brand | 100,059 | 15,000 |
| GPT-2 GFT BD BFT | general | 236,769 | 34,000 | brand | 100,059 | 67,014 | brand | 100,059 | 28,000 |
| GPT-2 GFT GD BFT | general | 236,769 | 34,000 | general | 236,769 | 1,264,352 | brand | 100,059 | 28,000 |
| GPT-2 | | | | | | | | | |
| GPT-2 XL GFT GD BFT | general | 236,769 | 120,000 | general | 236,769 | 1,264,352 | brand | 100,059 | |
| Cohere FT | brand | 50 | | | | | | | |
| GPT-3 BFT | brand | 50 | 4 epochs | | | | | | |

From this filtered data, we randomly sampled 100,059 conversations to make up our training set. From the remainder, we curated a brand-specific test set by manually selecting 287 conversations where the customer's goal could be clearly established from the context of the conversation. We constructed the general training set from five additional retail brands whose product lines fall into similar categories as AR. We filtered and processed the data using the method described above and selected 70,000 conversations per brand, or used the entirety of the brand's data if there were fewer than 70,000 conversations. The total size of the general training set is 236,769 conversations. For more details on these datasets, see Appendix D.

## 4.3 Model Customization

We explored three standard model customization strategies: prompt engineering, fine-tuning, and knowledge distillation. Using these strategies, we tested eleven configurations (Table 1). We evaluated three of these configurations with the judgements of AR agents, and for the remainder we extrapolated usability scores from the model's perplexity over the test set.

## 4.3.1 Prompt Engineering

GPT-3 We prompted the text-davinci-003 GPT-3 model (OpenAI, 2023a), following OpenAI's best practices for prompt engineering (Shieh, 2022). After some experimentation, we found that the most effective prompt for our use case (Figure 3) used a hand-constructed exemplar conversation and explicitly instructed the model to generate a response that would address the consumer's issue.

Cohere Following Cohere's best practices (Cohere, 2023c), we tested both verbose and concise prompts with the XLarge Cohere model (Cohere, 2023a). Unlike GPT-3, we found that using a prompt without an exemplar conversation (Figure 3) resulted in better performance.
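As a rough illustration of the GPT-3 prompt-engineering setup described above (and detailed in Appendix C), the sketch below assembles an exemplar conversation plus instruction and calls the legacy OpenAI completions endpoint; the helper name, the shortened exemplar, and the decoding parameters are our own assumptions, not the authors' exact configuration.

```python
import openai

# Illustrative, shortened exemplar; the paper's full exemplar is shown in Appendix C.
EXEMPLAR = (
    "Here are examples of good interactions between a consumer and an agent.\n"
    "AGENT[human]: Hello! Thank you for connecting. How can I help you today?\n"
    "CONSUMER: Hi. I wanted to follow up on my order.\n"
    "AGENT[human]: Of course. Could I get your order number?\n"
)

def suggest_next_agent_turn(conversation: str) -> str:
    prompt = (
        EXEMPLAR
        + "Generate the next agent turn for the following conversation "
        + "to properly address the consumer's issue\n"
        + conversation
        + "\nAGENT[human]:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=128,      # decoding settings are assumptions, not reported values
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()
```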
## 4.3.2 Fine-Tuning

GPT-2 We fine-tuned GPT-2 (Radford et al., 2019) using a language modeling task over conversational data on either the brand-specific dataset or the general dataset described in section 4.2. We started with a learning rate of 0.00008 with a linear scheduler and no warm up steps and trained until perplexity plateaued.

GPT-3 We fine-tuned the text-davinci-003 GPT-3 model from OpenAI on a conversational prompt-completion task using instructions and an exemplar conversation as the prompt and the human-agent response as the output. The dataset consisted of 50 random examples from the brand-specific training set.

Cohere We fine-tuned Cohere's XLarge model with Cohere's API (Cohere, 2023d) and a random subset of 50 conversations from the brand-specific dataset. We tested verbose and concise prompts as well as EOS token placement, and found that a shorter prompt with an EOS token after each turn worked best.

## 4.3.3 Distillation

To reduce latency and cost to serve by almost half, we distilled our fine-tuned GPT-2 models using the Transformers library (Sanh, 2023), following the method set forth by Sanh et al. (2019) and the language modeling training task of Radford et al. (2019). For distillation, we used either the brand-specific or the general dataset. We started with a learning rate of 0.0005 using a linear scheduler and trained for 3 epochs. Because the OpenAI and Cohere APIs do not make the logits of the whole vocabulary available at inference, we are unable to distill these models using Sanh et al.'s methodology.

| Model Name | %Ignore | %Edit | %Use |
|--------------|-----------|---------|--------|
| HUMAN | 10 | 12 | 77 |
| GPT-2 BFT BD | 28 | 16 | 57 |
| Cohere PE | 22 | 20 | 58 |
| GPT-3 PE | 17 | 14 | 69 |

![4_image_1.png](4_image_1.png)

## 4.4 Metrics And Results

## 4.4.1 Response Usability

While previous work has assessed the helpfulness or usability of a response with crowd-sourced judgments (Thoppilan et al., 2022; Peng et al., 2022), we worked with nine agents at AR who already use our Conversation Assist product. For each conversation and suggested response, we asked them whether they would use the suggested response as-is; edit it to change specific details, add to it, or remove parts of it; or ignore the suggestion altogether. The full annotation instructions are given in Appendix E.

Table 2 shows annotated Response Usability (RU) scores for three models. Even when shown the response that an AR agent had actually used in the conversation (HUMAN), agents said that they would use this response only 77% of the time and would ignore it 10% of the time. This indicates a high level of personal preference among the agents, and sets a noteworthy upper limit on the usability we could expect from model outputs. Agents said that they would use the GPT-3 PE suggestion 69% of the time compared with GPT-2 BFT BD and Cohere PE at only 57% and 58%, respectively. As the use rate increases, the edit rate and ignore rates both decrease, indicating that conversations resulting in editable prompts for some models can result in usable prompts for another model. We also note that while the use rate was similar for GPT-2 BFT BD and Cohere PE, the edit rate was much higher for Cohere, highlighting the importance of assessing the cost savings of an editable response vs. ignoring the response entirely.

![4_image_0.png](4_image_0.png)

6This example prompt uses a fictitious brand name for anonymity.

We also annotated these conversations for the Foundation Metrics in Thoppilan et al.
2022 and found a correlation between responses that were sensible, specific and role-consistent and those that the agents said they would use. Detailed analysis of these labels and their correlation is in Appendix G. This additional annotation revealed that, of the three models, GPT-2 BFT BD was most likely to generate a consumer turn rather than an agent turn or to generate a turn that was not relevant to the conversation, which may account for its high ignore rate. We also note that virtually all responses generated by the three models were labeled 'safe' by the annotators.

## 4.4.2 Perplexity

Adiwardana et al. (2020) found that sensibleness and specificity corresponded with the model's perplexity, inspiring us to use perplexity to extrapolate our manual evaluation of three models to a broader set of model configurations. After reproducing Adiwardana et al.'s finding for sensibleness and specificity using our data (see Appendix J), we investigated the correlation between perplexity and response usability. For each conversation context in the evaluation set, we calculate the perplexity for the generated response for each LLM using the average log likelihood of each token, following equation (4).

$$PP(W)=\sqrt[N]{\frac{1}{P(w_{1},w_{2},\ldots,w_{N})}}\tag{4}$$

| Model Name | PPL | %Ignore | %Edit | %Use |
|---------------------|------|---------|-------|------|
| GPT-2 BFT | 4.27 | 20.8 | 16.8 | 62.4 |
| GPT-2 BFT BD BFT | 4.50 | 21.0 | 16.9 | 62.1 |
| GPT-2 GFT BD BFT | 4.05 | 20.7 | 16.7 | 62.6 |
| GPT-2 GFT GD BFT | 4.15 | 20.7 | 16.8 | 62.5 |
| GPT-2 | 7.08 | 22.7 | 17.8 | 59.5 |
| GPT-2 XL GFT GD BFT | 5.31 | 21.5 | 17.2 | 61.3 |
| Cohere FT | 1.93 | 19.3 | 16.0 | 64.7 |
| GPT-3 BFT | 4.14 | 20.7 | 16.8 | 62.5 |

Table 3: Average perplexity (PPL)7 and projected Response Usability (RU) scores. See Table 1 for descriptions and naming conventions for the models.

Table 4: AR's estimated cost savings per model using equation 2 and the usage rates in Table 2. For models below the line, we use extrapolated usage rates using perplexity from Table 3. The assumptions used to calculate the ENCS are described in section 4.4.3. See Table 1 for descriptions and naming conventions for these models.

Using all annotated LLMs' suggested responses across all conversations in the evaluation set, we fit a set of linear regression models using the perplexity of the generated agent turn as our independent variable, and the probability of use, edit, and ignore as our dependent variables. Individual linear models trained on the output of a single LLM did not show statistical significance; however, models trained on the output of all LLMs did show significance in the F-statistic (p < 0.05 for P(edit), p < 0.001 for P(use) and P(ignore)). Extrapolating from these linear models allows us to illustrate potential cost savings for more models than we were able to annotate. These linear models predict the RU scores in Table 3.
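The two steps above can be sketched in a few lines of Python; the numbers below are purely illustrative stand-ins for the annotated (perplexity, usage) pairs, not the study's data.

```python
import numpy as np

def perplexity(token_logprobs):
    # Eq. (4): PP(W) = exp(-(1/N) * sum_i log P(w_i)) for one generated response.
    return float(np.exp(-np.mean(token_logprobs)))

# One (PPL, use) pair per suggestion; values here are made up for illustration.
ppl = np.array([3.8, 4.2, 5.1, 6.9, 7.3, 4.0])
use = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0])   # 1 = agent would use the response as-is

slope, intercept = np.polyfit(ppl, use, deg=1)   # linear model: P(use) ~ intercept + slope * PPL
print(f"P(use) ~ {intercept:.2f} + ({slope:.2f}) * PPL")
```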
| Model Name | ENCS/message | ENCS/year |
|--------------------|--------------|-----------|
| GPT-2 BFT BD | ¢4.47 | $53,653 |
| Cohere PE | ¢4.58 | $55,000 |
| GPT-3 PE | ¢4.24 | $50,920 |
| GPT-2 BFT | ¢4.97 | $59,687 |
| GPT-2 BFT BF BFT | ¢4.96 | $59,527 |
| GPT-2 GFT BD BFT | ¢4.99 | $59,851 |
| GPT-2 GFT GD BFT | ¢4.98 | $59,786 |
| GPT-2 | ¢4.81 | $57,668 |
| GPT-2 XL GFT GD BFT | ¢4.90 | $58,802 |
| Cohere FT | ¢4.62 | $55,391 |
| GPT-3 BFT | -¢1.56 | -$18,691 |

## 4.4.3 Expected Net Cost Savings (ENCS)

We calculate the ENCS for each model using equation (2), repeated here in (5).

$$ENCS=(P(U)*S_{U})+(P(E)*S_{E})+(P(I)*S_{I})-C\tag{5}$$

P(U), P(E), and P(I) are the frequency with which the LLM's response was accepted, edited, or ignored in the test set. SU, SE, and SI are calculated assuming that an agent costs $10.00 per hour and averages 30 seconds per message without Conversation Assist. With Conversation Assist, we assume that the agent saves 25 seconds for each accepted response, 20 seconds for each edited response and spends an extra 5 seconds for each ignored response. We also assume that each response costs ¢0.002 to generate for a GPT-2 model, ¢0.0011 for a distilled GPT-2 model, ¢1.09 for the base model and ¢6.54 for a fine-tuned model through OpenAI's API and ¢0.25 for the base model and ¢0.50 for a fine-tuned model through Cohere's API.8

8We estimate GPT-2's cost based on a latency of 19.57 milliseconds per inference for the full-sized model and 11.60 ms for the distilled model, and a cost of $3.53 per hour renting an Nvidia A100 GPU from GCP for 8 hours a day. OpenAI and Cohere's API costs come from OpenAI 2023b and Cohere 2023b at the time of writing.

Using the RU scores in Tables 2 and 3, we estimate that AR's cost savings per message would be ¢4.47 using the GPT-2 BFT BD model compared with ¢4.24 using GPT-3 PE, as detailed in Table 4. ENCS per year is calculated based on AR's annual agent message volume of 1,200,000. The factor with the largest impact on AR's cost savings is the usefulness of the predictions, as the best annotated model (GPT-3 PE)'s predictions are used or edited only 5% more often than the fastest (GPT-2 BFT BD), while its cost was almost 100 times higher (¢1.09 vs ¢0.0011). Despite this, the difference in ENCS between these two models is minimal and only amounts to about $3k per year. In general, the RU and ENCS are higher for the extrapolated results, which are somewhat less reliable, but they lead to one important insight: in this case, the inference cost for a fine-tuned GPT-3 model is too high for the customer to realize savings.

## 5 Beyond A Single Case Study

To decide which of these models will lead to the greatest ROI for a brand, we must consider the break-even point for each model based on the ENCS (which includes agent labor and model inference costs) as well as R&D cost and message volume. This can be visualized with Figure 4, which
(6) Nr =CR&D $=\frac{C_{B\&D}}{(E N C S-C_{m})}$ Given that the difference in ENCS per message across the models explored in this paper is not large, low R&D cost is the main consideration to reach the fastest ROI. For a small brand sending 500,000 agent messages per year and saving about $24,000 per year with any of the models, reducing the upfront R&D cost would be critical. On the other hand, a large enterprise brand who will save $950,000 per year over 20 million messages, will break-even on any R&D cost fairly quickly. As a model with lower inference cost will offset high R&D cost more quickly and lead to more savings over a longer period of time, inference cost is a much more important factor for a brand with high traffic. In Appendix K, we provide a detailed example of the impacts of these costs. It is also worth noting that when choosing between in-house and third-party models, the difference in R&D and maintenance cost may not be as significant as one might expect. While an in-house model requires up-front investment to train and serve, OpenAI and Cohere's LLMs at the time of writing require a fair amount of effort to prompt engineer for the best performance and these prompts should be customized to some degree for different brands and scenarios. From a maintenance perspective, we similarly find that while an in-house model must be refreshed, prompts must also be redesigned as third-party providers update and release new models. Brands might also wish to consider factors that are not accounted for in this framework. Some brands would prefer to use an in-house model so that they can retain control over their data and protect their customer privacy by limiting access of their data to third-party vendors. An in-house model also provides more control over the model's suggestions, as well as control over when the model is updated or deprecated. Especially as technology develops, models become less expensive to train, and the performance of open-source models improves, these factors may carry even more weight. ## 6 Conclusion In this case study, we demonstrated the utility of LLMs for agent assistance products, exploring 3 model adaptation strategies across 11 model configurations. Based on feedback from real customer service agents, we found that bigger is not always better, as the distilled GPT-2 model resulted in greater cost-savings than GPT-3, despite lower quality responses, because, at the time of writing, its inference cost is so much lower. These results empower near-term decision-making for integrating models like these into production. However, with the rapidly shifting NLP landscape, a framework to assess the cost benefits of new technologies is critical to facilitate decisions about integrating them into products. The flexible framework presented in this paper, ENCS, enables NLP practitioners to invest in innovations that lead to tangible business benefits. We found that for this product, the impact of model quality far outweighs inference cost, pointing to the importance of continuing to push the state of the art, while considering practical expense. This framework empowers the NLP community to invest in the most cost-effective technology for their specific needs, even as that technology, its capability, and its pricing evolve. 
## Ethics Statement

To protect customer and agent privacy, the data used to train and evaluate models was fully anonymized by replacing all customer or agent names, addresses, phone numbers, or other personal identifiers with a random name or string. We also compensated agents for annotations in line with their standard rate as agents at AR.

While the tools described in this paper have the explicit goal of making agents' jobs easier, they - and specifically the lens of a cost savings analysis - have the potential to be used to motivate reductions in workforce, and we acknowledge the impact that this can have on the agents themselves. We also note that these tools can improve the customer experience by reducing wait times, which can lead to fewer frustrated customers when they do interact with agents.

## Limitations

In this study, we collected feedback on the usefulness of model responses from customer service agents at AR. These agents were recommended based on their availability and experience with Conversation Assist; however, we did not receive details about the agents such as their level of training or experience, which may have an impact on their preferences using the suggested responses. Furthermore, while agents in our study received a flat rate per judgment with no bonuses or penalties tied to how they judged the responses, some businesses have existing agent metrics (e.g. actual handle time, AHT targets, etc.) that could incentivize the agents to behave differently while performing their jobs. These metrics have the potential to exert pressure on agents in real-life situations to accept responses at a higher rate than in this study.

The linear models in section 4.4.2 are based on the judgments of 5 agents on 3 LLM model outputs for 287 conversations. While they have shown a statistically significant relationship between usage rates and perplexity, this is a small pilot analysis. Additional data will be necessary to determine how well this generalizes.

Our cost savings framework also makes a number of simplifying assumptions about workforce optimization. We've noted some of these assumptions in section 3.1, and they should be considered when leveraging this framework for different types of products. In addition, while the explicit goal of these models is to make agents' jobs easier, we expect from previous work studying vigilance tasks (Warm et al., 2008) that there can be an upper bound to how much cost could be saved with an excellent LLM, as there would be less benefit from the agent acting as a human in the loop as their vigilance wanes.

## Acknowledgements

We want to thank the customer service agents at AR who provided judgements on the usability of responses, as well as Anna Folinsky and Daniel Gilliam who annotated the data for Foundation Metrics. We also appreciate engineering assistance that we received from Larry Cheng, Tyson Chihaya, Sid Naik, and Fazlul Shahriar.

## References

Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*.

Siqi Bao, Huang He, Fan Wang, Hua Wu, Haifeng Wang, Wenquan Wu, Zhihua Wu, Zhen Guo, Hua Lu, Xinxian Huang, et al. 2022. Plato-xl: Exploring the large-scale pre-training of dialogue generation. In *Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022*, pages 107–118.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in neural information processing* systems, 33:1877–1901. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Cohere. 2023a. Generation. https://docs.cohere. ai/docs/generation-card. Accessed: 2023-0216. Cohere. 2023b. Pricing. https://cohere.ai/pricin g. Accessed: 2023-02-16. Cohere. 2023c. Prompt engineering. https://docs.c ohere.ai/docs/prompt-engineering. Accessed: 2023-02-16. Cohere. 2023d. Training custom models. https://do cs.cohere.ai/docs/training-custom-models. Accessed: 2023-02-16. Google. Google cloud pricing calculator. https: //cloud.google.com/products/calculator. Accessed: 2023-02-17. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. 2016. Eie: efficient inference engine on compressed deep neural network. *ACM SIGARCH Computer Architecture News*, 44(3):243–254. Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. *Advances in neural information processing systems*, 28:1135–1143. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531. Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. *Advances* in Neural Information Processing Systems, 33:20179– 20191. Kristen Howell, Jian Wang, Akshay Hazare, Joseph Bradley, Chris Brew, Xi Chen, Matthew Dunn, Beth Ann Hockey, Andrew Maurer, and Dominic Widdows. 2022. Domain-specific knowledge distillation yields smaller and better models for conversational commerce. *ECNLP 2022*, page 151. Huggingface. Export to onnx. https://huggingf ace.co/docs/transformers/serialization. Accessed: 2023-02-17. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? *arXiv* preprint arXiv:2202.12837. NVIDIA. a. Performance analyzer. https://github .com/triton-inference-server/server/blo b/main/docs/user_guide/perf_analyzer.md. Accessed: 2023-02-17. NVIDIA. b. Triton inference server. https://gi thub.com/triton-inference-server/server. Accessed: 2023-02-17. OpenAI. 2023a. Models: Gpt-3. https://platfo rm.openai.com/docs/models/gpt-3. Accessed: 2023-02-16. OpenAI. 2023b. Pricing. https://openai.com/api /pricing/. Accessed: 2023-02-16. Baolin Peng, Michel Galley, Pengcheng He, Chris Brockett, Lars Liden, Elnaz Nouri, Zhou Yu, Bill Dolan, and Jianfeng Gao. 2022. Godel: Large-scale pre-training for goal-directed dialog. arXiv preprint arXiv:2206.11309. Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2021. Soloist: Building task bots at scale with transfer learning and machine teaching. Transactions of the Association for Computational Linguistics, 9:907–824. R Core Team. 2021. *R: A Language and Environment* for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. 
Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics. Minho Ryu and Kichun Lee. 2020. Knowledge distillation for BERT unsupervised domain adaptation. arXiv preprint arXiv:2010.11478. Victor Sanh. 2023. Huggingface distillation documentation. https://github.com/huggingface/tra nsformers/blob/main/examples/research_pr ojects/distillation/README.md. Accessed: 2023-02-16. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. In NeurIPS EMC2 *Workshop*. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT. In *AAAI*, pages 8815–8821. Jessica Shieh. 2022. Best practices for prompt engineering with openai api. https://help.opena i.com/en/articles/6654000-best-practices -for-prompt-engineering-with-openai-api. Accessed: 2023-02-16. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. *arXiv preprint arXiv:2201.08239*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Joel Warm, Raja Parasuraman, and Gerald Matthews. 2008. Vigilance requires hard mental work and is stressful. *Human factors*, 50:433–41. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*. Edward WD Whittaker and Bhiksha Raj. 2001. Quantization-based language model compression. In Seventh European Conference on Speech Communication and Technology. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278, Online. Association for Computational Linguistics. ## A Training Details: Model Fine-Tuning GPT-2 We fine-tuned the pretrained GPT-2 model from huggingface using either the brandspecific AR or general dataset. Each training example has an end-of-text token appended to the beginning and end of the conversation and is padded with an added pad token. The resulting model has 117M parameters and a vocabulary size of 50258 (GPT-2 vocab size with an additional pad token). We started with a learning rate of 0.00008 with a linear scheduler and no warm up steps. The model was trained for 34000 steps across 4 Nvidia Tesla V100 GPUs, which equates to roughly 3 epochs for the AR dataset and 5 epochs for the general dataset. GPT-3 We fine-tuned GPT-3 with promptcompletion pairs using the OpenAI API. 
Cohere For the Cohere custom model, we experimented with the prompts and whether or not to use an end-of-sequence token between conversation turns. These selections were all motivated by the Cohere guide for prompt engineering, which applies to both training and inference. The first prompt we experimented with was longer and more verbose, using sequences to indicate which part of the prompt was the instruction and which was the conversation to complete. The second prompt we used was shorter and did not have clear delimiters between the instructions and conversation. The full prompts can be seen in the prompt engineering appendix (Appendix C).

## B Training Details: GPT-2 Distillation

GPT-2 We distilled our fine-tuned GPT-2 models using the distillation code provided by Huggingface. The dataset was preprocessed with the same beginning and ending tokens as in the fine-tuning stage. The resulting model has 81M parameters across 6 layers, reduced from 117M parameters across 12 layers with the same vocabulary size. Training started with a learning rate of 0.0005 using a linear scheduler and ran for a maximum of 3 epochs on 1 Nvidia Tesla V100 GPU. This resulted in 67,014 and 164,352 steps for distilling on the AR and general datasets, respectively.

## C Prompt Engineering

GPT-3 We experimented with several prompts before choosing one that gave adequate results without consuming too much of the token limit. That is, we wanted to provide enough information to get the best results in the most concise way. First, we varied the verbosity of the framing of the request, changing factors such as whether the brand name was provided or whether there was a description of the product line (the framings tested appear as a figure in the original). The quality of the responses did not vary based on the amount of detail given here, nor did they change when this was omitted, so we chose to omit it.

The next aspect we varied was the amount of detail given in the description of the examples:

- Here are examples of good interactions
- Here are examples of good interactions between a consumer and an agent where the agent is able to address the consumer's question
- Here is an example of a good consumer agent interaction where the agent is able to address the consumer's question. Consumer turns start with "CONSUMER:", customer service representative turns start with "AGENT:"

And finally, we varied the description of the task we requested:

- Your job is to generate the next agent turn for the following conversation
- Your job is to generate the next agent turn for the following conversation to properly address the consumer's question.

Results were best when the words "to properly address the consumer's question" were provided, but it did not matter whether they appeared in describing the examples or in the final instruction. Based on these findings, we selected the following prompt framing to use in the GPT-3 experiments:

Here are examples of good interactions between a consumer and an agent.
<sample conversation>
Generate the next agent turn for the following conversation to properly address the consumer's issue
<conversation>

The next task was to find an exemplar conversation to use in the prompt. The prompt used with the example conversation (few shot, n = 1) and without (zero shot) did not differ in the quality of the responses, though it did differ in the exact wording (we also found that 2 runs in a row, same conditions, had similar differences in wording), showing that in these cases, the example we give it did not greatly affect the appropriateness of the response. Therefore, we went with a generic, hand-curated example based on observing trends in the data:

AGENT[human]: Hello! Thank you for connecting with The Republic of Fashion. I will be happy to assist you.
CONSUMER: Hi. I wanted to follow up on my order? It hasn't arrived yet.
AGENT[human]: Ok. Could I get your order number?
AGENT[human]: Please allow me 1-2 minutes to look this up. It looks like your order is in progress. It is due to be shipped tomorrow. You will receive an email with the tracking number once it ships. Is there anything else I can help you with today? Thank you for contacting The Republic of Fashion!

Cohere For Cohere prompt engineering, we experimented with two separate prompts based on the instructions given in the Cohere prompt engineering documentation and the efforts that were made towards GPT-3 prompt engineering. The first prompt we used was a shorter prompt that did not include delimiting to indicate which part was instruction and which was the conversation to complete. The second prompt was more verbose and used the Cohere prompt engineering guidelines to indicate instruction and conversation. In both cases, we followed Cohere's recommendation on using stop-sequences by inserting <EOS> at the end of every turn. Without the stop sequence, Cohere would continue to generate multiple agent and consumer turns until it hit the maximum token count. With the stop-sequence, the Cohere model would only generate a single agent turn. Additionally, both prompts end with "AGENT[human]:" to prompt the model to generate the human-agent turn every time. The shorter prompt ultimately performed better, so we only report the results for prompt engineering using the shorter prompt; both prompts appear as figures in the original.

## D Dataset Details

We constructed our brand-specific dataset using conversational data from our case-study brand, Anonymous Retailer (AR), from every month of the year 2022. From the year's data, we removed conversations that did not meet the following criteria:

- 2 or more agent turns
- an automated conversational quality score of neutral or higher9
- proportionally more human-agent than bot turns

From the remaining data, we randomly sampled 100 conversations per month for a development and test set. The final test set contains 287 conversations that were chosen to represent a variety of common scenarios where the agent's response was not always dependent on a database-style lookup, and therefore could be reliably generated without a database integrated on the back-end. The development set was used to experiment with different prompt engineering configurations. The remaining data, not sampled for the development or test sets, was used for fine-tuning. A rough sketch of the filtering step above is given below.
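The sketch below illustrates the filtering criteria described above; the conversation fields (`turns`, `sender_type`, `quality_score`) and the neutral-score threshold are hypothetical stand-ins for the internal data format rather than the actual schema.

```python
# Minimal sketch of the conversation filtering described above.
# Field names and the quality-score scale are assumed, not the internal schema.
NEUTRAL = 0  # assumed: scores below 0 are negative, 0 is neutral

def keep_conversation(conv: dict) -> bool:
    turns = conv["turns"]  # each turn assumed to look like {"sender_type": ..., "text": ...}
    n_human_agent = sum(t["sender_type"] == "human_agent" for t in turns)
    n_bot = sum(t["sender_type"] == "bot" for t in turns)
    return (
        n_human_agent >= 2                    # 2 or more agent turns
        and conv["quality_score"] >= NEUTRAL  # automated quality score neutral or higher
        and n_human_agent > n_bot             # proportionally more human-agent than bot turns
    )

def filter_conversations(conversations):
    return [c for c in conversations if keep_conversation(c)]
```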
Specific dataset sizes are given in Table 5, which shows the number of conversations, messages, and the average count of agent turns per conversation.

Table 5: Size of fine-tuning data sets

9We used LivePerson's Meaningful Conversation Score. For more details, see: https://knowledge.liveperson.com/data-reporting-meaningful-conversation-score-(mcs)-meaningful-conversation-score-(mcs)-overview.html/

All data was de-identified using an internal Personally Identifiable Information (PII) masker that replaces personal names, locations, and digit strings with a random stand-in. The evaluation set, which would undergo a round of human annotation, was reviewed to ensure that agent and consumer names, order numbers, addresses, etc., were internally consistent within a conversation.

For the general dataset, we chose five retail brands whose product lines were a close match to AR's. These were filtered using the same method that was applied to the AR data. We then sampled 70,000 conversations from each brand, or used all the data available if the brand had fewer than 70,000, resulting in 236,769 conversations, as shown in Table 5.

## E Annotation Scheme: Response Usability

As described in Section 4.4.1, to evaluate the usefulness of suggestions to agents, we asked nine agents from AR to look at turns in a conversation and tell us, based on their experiences as an agent for AR, whether the suggestion was one that they would use, edit, or ignore. Agents were given access to an internal annotation tool where they viewed conversations one at a time, with names and numbers replaced with random stand-ins to protect personally identifiable information, so that they could decide, with the correct context, what they would do with a given suggestion. They were given the following guidelines:

## Context

What we're building: We want to build a tool that will offer agents suggestions for what to say next in conversations with customers. The tool would be like a powered-up Conversation Assist, where custom recommendations would be based on the entire conversation. We are investigating different techniques to train machine learning models so they can offer responses that are specific to a brand, and we want to understand how well they work.

We want your guidance: We want to directly use your expertise as agents to evaluate how good these models are at giving you useful suggestions. We'll show you snippets of real conversations between a customer and a human agent, one at a time, as well as a suggestion for the next agent message. We would like you to consider the suggestion in the context of the conversation and decide whether you would use it, edit it, or ignore it.

## Instruction

Our goal for this task is to evaluate the models that will be responsible for suggesting possible agent responses. This helps us understand exactly how useful they would be to agents like you, and gives us data to improve our models. The real conversations you'll see are specific to AR, with names and numbers replaced with a random stand-in to protect personal identifiable info. We have also replaced references to AR with a made-up brand, Republic of Fashion. We'll ask you to look at turns in a conversation and tell us, based on your experiences as an agent for AR, whether the suggestion is one that you would use, edit, or ignore. The quality of these suggestions will be widely varied. Please make your decisions both on the content of the suggestions, and whether they match the appropriate tone for AR.
We encourage you to go with your instincts here on what you would prefer to do in a real conversation. For example, the line between editing a response vs. ignoring it is often flexible, depending on how much editing you think it needs. We want to build tools that are the most useful to you, so feel free to go with your gut. It's possible that you could see the same conversation shown with an alternate suggestion at another point. That's fine - we don't need to compare differences in the suggestions. Our goal for this evaluation is to understand: would an experienced agent use the suggestion or not? The data from this will help us improve our suggestion models. You can find more details on the labels below. We won't be asking you to provide reasons for your responses. In the future, we might ask to do focus groups, or interviews to learn more about your thought process and why you selected answers, but it's not required for this task.

Use suggestion: Select this label if you would use this message as-is if you were the agent handling this conversation. This includes: if you would make a formatting change (for example, splitting the turn into multiple messages) and if you would use the message suggested, and also send additional messages afterwards.

Edit suggestion: Select this label if you would choose this message, and then make edits before sending. Edits in this case include instances where personal or factual information (consumer names, agent names, discount percentages, etc.) would need to be verified and changed. The amount of editing needed does not matter; if you would change the message at all before sending it, please select this label.

Ignore suggestion: Select this label if you would not use or edit the suggestion, but would rather type your own message. There are many valid reasons not to use a suggestion (it's irrelevant, repetitive, inappropriate, etc). In any case where the annotation tool does not properly display a suggestion, choose the fourth option, "No suggestion displayed".

## F Annotation Scheme: Foundation Metrics

To better understand the Response Usability results, we annotated each response following a variation of the Foundation Metrics from Thoppilan et al. (2022). Our internal team of professional annotators labeled responses from the evaluation dataset for Sensibleness, Specificity, Safety, Informativeness, Helpfulness and Role-consistency, following the guidelines laid out by Thoppilan et al., with some concessions made for effort and information available. We omit Interestingness, as we found it irrelevant in a customer support setting. Additionally, because the models in this case study are not connected to the back-end system that the agents use to look up account details, we do not consider the accuracy of entities and therefore omit Groundedness and made adjustments to our understanding of Informative. The metrics used and their guidelines for annotation are below:

Sensible: A suggestion is sensible if it is a logical continuation of the conversation, or a logical follow-up question or request. It also does not contradict earlier information given by the Agent in the conversation. A suggestion can be sensible or not sensible regardless of whether or not it is Specific or Informative.

Specific: A suggestion is specific if it shows understanding of the context of the conversation.
This may be shown in a reflection of something mentioned earlier in the conversation, a reflection of the question the consumer is trying to answer, etc. Whether or not a suggestion is Specific was considered only if Sensible = true.

Informative: A suggestion is Informative if it provides factual information that would be able to be shown to be correct or incorrect. Smalltalk or opinions would not be Informative; statements about order numbers, general policies or available time slots would be. Agent actions taken that could be true or untrue (I've forwarded your inquiry/I've resent your package) would also be Informative. As mentioned, because the nature of the suggested responses was often specific to AR and these annotations were not done with access to the AR knowledge base, we had no basis on which to judge Groundedness as outlined in LaMDA (Thoppilan et al., 2022). Therefore, we treated each suggestion as if it contained true information. In other words, regarding Informativeness, we did not check whether the information was correct, only whether the statement contained information that could be judged correct or incorrect. Like Specificity, Informativeness was only considered if Sensible = true.

Helpful: A suggestion is Helpful if it is first Informative (i.e., could be judged on correctness, as above). Then, given a presumption that the information provided is correct, it is Helpful if it fits the standard definition of "helpful" as judged by the annotator. Helpful should only be considered if Informative = true.

Safety: A suggestion is considered Safe if it does not contain content that: could cause users mental or physical harm; may be misinformation about public figures or events; could be construed as financial advice or an unsubstantiated health and safety claim; has obscene (violent/gory, sexual, profane, or bigoted) material; reveals personal information that appears to be outside the context of the conversation (not related to the consumer or company). Safety was considered independent of other metrics.

Role-consistency: The response looks like something a consumer-facing agent might say, consistent with the role of an agent for AR. This consistency does not rely on being consistent with other information in the conversation and is considered independent of other metrics; that information is captured in Sensibility. Figure 5 further illustrates the way we considered these metrics to be interdependent.

## G Foundation Metrics Results And Analysis

The Foundation Metrics label frequencies for each model are shown in Figure 6. As noted in Appendix F, accuracy of entities is not reflected in these metrics as it was not considered. Instead, that information is captured to some degree by the response usability metric, which allows agents to indicate that they would edit the response. For Foundation Metrics, the GPT-3 PE responses were rated on par with the HUMAN responses, and it was considered even more specific and helpful than the human.10 We found that GPT-2 BFT BD was much worse than the other models. To better understand the relationship between Response Usability and Foundation Metrics, we calculated the Pearson correlation coefficient (Table 6). The strongest positive correlations are between "sensible", "specific" and "role-consistent" and "use", while the strongest negative correlations are between those labels and "ignore".
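A minimal sketch of how such a correlation table could be computed, assuming each response carries a probability for each usability label (from its five annotations) and a 0/1 consensus value for each Foundation Metric; the exact aggregation used in the paper is not specified, so this is an assumption.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch of a Table-6-style computation under assumptions:
# usability_probs: dict label -> array of shape (n_responses,), e.g. P(use) per response
# metric_labels:   dict metric -> 0/1 array of shape (n_responses,)
def usability_metric_correlations(usability_probs, metric_labels):
    table = {}
    for u_label, p in usability_probs.items():
        table[u_label] = {
            m_name: pearsonr(np.asarray(p, dtype=float),
                             np.asarray(m, dtype=float))[0]
            for m_name, m in metric_labels.items()
        }
    return table
```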
"Edit" does not correlate strongly with any labels, which we take as an indication that there are a wide range of reasons to edit messages, from the presence of information to the inclusion of non-sensible phrases amidst more useful text. It should be noted that very few of the generated responses were judged not "safe", hence the low correlations to all Response Usability measures. GPT-2 BFT BD outputs were labeled "ignore" by the agents much more often relative to "edit" than they were for the other models. The Foundation Metrics shed light on this, as GPT-2 BFT BD has the lowest score for each of these metrics, with the exception of "safe", which did not correlate with usability11. This suggests that GPT-3 PE and Cohere PE are more often able to produce something sensible and specific, even when the full response is not usable, compared with GPT-2 BFT BD. \begin{tabular}{l|c c c c c} & Sensible Specific Information.Helpful Safe Role-consis. \\ \hline Use & 0.44 & 0.29 & 0.14 & 0.14 & 0.02 & 0.37 \\ Edit & -0.15 & -0.07 & 0.00 & -0.02 & 0.01 & -0.09 \\ Ignore & -0.45 & -0.31 & -0.18 & -0.17 & -0.04 & -0.40 \\ \end{tabular} Table 6: Pearson Coefficient, showing correlation between Response Usability and Foundation Metrics labels. ## H **Response Usability Annotator Analysis** As mentioned in Section 4.4.1, nine different annotators annotated the usability of different models' suggested responses, and we gathered five annotations per response. We calculate the agreement level using Fleiss Kappa. The overall agreement level and the agreement level for each model are shown in Table 7. In addition, we show the average suggested response length (as number of tokens) for tasks with high agreement rates. These averages are shown in Figure 7 for responses with three or more annotations with the same label (either 'use'/'edit'/'ignore'), and for tasks with even higher agreement with four or more of the same label. Annotations of empty suggestions ('No Suggestion Displayed') are counted as 'ignore'. ![15_image_0.png](15_image_0.png) ![15_image_1.png](15_image_1.png) ![15_image_2.png](15_image_2.png) | Model | GPU Latency Throughput Cost/Inference (ms) (infer/s) (¢) | | | | |-------------------|------------------------------------------------------------|-----|--------|--------| | GPT-2-XL | V100 | 620 | 16 | 0.0468 | | GPT-2 | V100 | 47 | 211 | 0.0036 | | GPT-2 DISITL V100 | 25 | 398 | 0.0019 | | | GPT-2-XL | A100 | 128 | 78 | 0.0126 | | GPT-2 | A100 | 20 | 510 | 0.0019 | | GPT-2 DISITL A100 | 12 | 859 | 0.0011 | | ## I Inference Cost In production at peak hours, we require that our models handle at least 500 inferences per second, 32 concurrent messages, with a latency of no more than 500ms/inference. We performed model benchmarking and cost estimation on the Google Cloud Platform (GCP) Google Kubernetes Engine to determine the minimum hardware requirements for serving our fine-tuned GPT-2 models of three different sizes: GPT-2 with 117M parameters, GPT-2distilled with 81M parameters, and GPT-2-XL with 1.5B parameters. We converted our PyTorch model checkpoints to Onnx (Huggingface) and served the models with the optimized NVIDIA Triton Server (NVIDIA, b) using their natively supported onnxruntime backed. We then performed load and latency benchmarking using Triton's Performance Analyzer (NVIDIA, a) tool on both 1 NVIDIA V100 with 16GB of GPU memory and 1 NVIDIA A100 with 40GB of GPU memory. Both configurations were set up with 30GB of CPU Memory and with a limit of 4 CPU cores. 
## J Linear Modeling

Response Usability To calculate the relationship between the human-annotated response usability judgments and perplexity, we converted the counts of each label to a probability distribution. We then trained a linear model using the R (R Core Team, 2021) base lm function, using the perplexity of the generated utterance as the independent variable, and the probability of the usage statistic as the dependent variable. Prior to fitting a linear model, we removed outliers using the Interquartile Range (IQR) method. This method was applied to each subset of the data, and to the entire dataset, independently.

Linear models calculated from the perplexities of the outputs of individual LLMs did not show statistical significance, likely due to the small datasets (n = 287). Across all LLMs, however, all linear models showed statistical significance (p < 0.05 for P(edit), p < 0.001 for P(use) and P(ignore)), so all equations are derived from the aggregated data (n = 861). Figure 8 shows the fit of the linear models for P(use), P(ignore), and P(edit) respectively. The X-axis represents the perplexity of the generated output while the Y-axis represents the probability of the agent selecting the metric. As expected, the probability of an agent choosing to use a suggestion decreases as the perplexity increases. Edit and ignore are largely a matter of personal preference, so while the general trend is that the probabilities of both increase as perplexity increases, the effect of perplexity is not as strong as it is for the probability of using the suggestion. These linear models show significance in the F-statistic (p < 0.05 to p < 0.01). This indicates that the null hypothesis is rejected and that there is a relationship between the perplexity of the generated output and the agent's choice to use, ignore, or edit the suggestion.

Foundation Metrics We used the same method to calculate the relationship between the human-annotated foundation metrics and the perplexity of the generated output, except that we did not need to convert the annotations into a probability distribution, because the foundation metrics were a binary judgment. Perplexity outliers were removed from this data using the IQR method, and the R (R Core Team, 2021) base lm function was used to fit a linear equation to the data. Figure 9 shows the fits of the linear equations for the foundation metrics, Sensible, Specific, Informative, Helpful, and Role Consistent. Since very few of the generated suggestions were judged to be not safe, this model did not show significance. The X-axis represents the perplexity of the generated output, while the Y-axis represents the judgment of the selected foundation metric. As expected, and extending the findings of Adiwardana et al. (2020), all metrics decrease as the perplexity increases. These linear models also show significance in the F-statistic (p < 0.05), indicating that there is a relationship between the perplexity of the generated output and the human judgments of the foundation metrics.
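The fits themselves use R's base lm; a rough Python equivalent of the procedure (IQR-based outlier removal on perplexity followed by a simple linear fit) might look like the following, with names chosen for illustration.

```python
import numpy as np
from scipy.stats import linregress

# Rough equivalent of the procedure described above: drop perplexity outliers
# with the 1.5*IQR rule, then fit a simple linear model y ~ perplexity.
def fit_vs_perplexity(perplexity, y):
    perplexity = np.asarray(perplexity, dtype=float)
    y = np.asarray(y, dtype=float)
    q1, q3 = np.percentile(perplexity, [25, 75])
    iqr = q3 - q1
    keep = (perplexity >= q1 - 1.5 * iqr) & (perplexity <= q3 + 1.5 * iqr)
    result = linregress(perplexity[keep], y[keep])
    return result.slope, result.intercept, result.pvalue
```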
Discussion This is a pilot study where 861 generated responses were judged by 5 annotators for the Response Usability metrics, and 3 annotators reached consensus on the Foundation Metrics. These models show that there is a significant relationship (p < 0.05) between perplexity and human judgments of Response Usability and the Foundation Metrics. These models, however, are considered only a starting point from which to build.

## K Cost-Savings Model

Below we have the cost-saving models that we developed as the basis of ENCS (Section 3). Table 9 focuses on cost savings, using estimates for operation scale, labor cost, and conversational data volume from AR as well as other brands. Since we investigated both third-party LLM inference costs, as well as in-house inference costs, we made sure to include both as part of the model, and ultimately calculated cost savings per message with each model type. Table 10 focuses on the R&D cost estimation for building and maintaining the model, which was omitted from the main body of the paper. This uses some ballpark estimates (e.g. 3 months of developer time to build a model, developer salaries, and amortization period) to estimate the overall monthly R&D cost of building and maintaining an in-house model. Table 11 uses the R&D cost to build the model, and calculates a break-even point to answer the question: how many assisted messages does it take for the cost savings to effectively cover the development cost of the model? This was also calculated as a number of months, based on the estimated messaging traffic, and total number of agent messages per month. For the example described in the table, the model could break even in less than two weeks of operation. As described further in Section 5, this calculation could be very different for a lower-traffic brand.

| Item | Value |
|---|---|
| 1. Operation scale | |
| # agents | 500 |
| # conversations per agent per month | 500 |
| Total agent messages | 3,750,000 |
| Total consumer & agent messages per month | 7,500,000 |
| 2. Labor cost | |
| Agent hourly rate | $10 |
| Labor cost | $866,667 |
| 3. Volume of conversational data | |
| Average length of conversation (messages) | 30 |
| # Consumer messages | 15 |
| # Agent messages | 15 |
| Average message length (char.s) | 150 |
| Average # of characters per conversation | 4,500 |
| Average conversation volume per month (char.) | 1,125,000,000 |
| Average conversation volume per month (tokens) | 281,250,000 |
| 4. Model inference cost | |
| In house: Usage per 1000 tokens | $0.0016 |
| In house: Monthly usage cost | $450 |
| In house: Monthly usage & R&D cost | $6,017 |
| In house: LLM recommendation cost/message | $0.0016 |
| 3rd party: Usage per 1000 tokens | $0.12 |
| 3rd party: Monthly usage cost | $33,750 |
| 3rd party: Monthly usage & R&D cost | $39,316.67 |
| 3rd party: LLM recommendation cost/message | $0.010 |
| 5. Agent time saving estimation | |
| Time to read & accept suggestion (sec) | 5 |
| Time to read & edit suggestion (sec) | 10 |
| Time to reject suggestion and compose (sec) | 30 |
| Probability of accepting | 0.7 |
| Probability of editing | 0.15 |
| Probability of rejecting | 0.15 |
| Avg agent time/msg with LLM assistance (sec) | 9.50 |
| Avg agent time/msg without LLM assistance (sec) | 30 |
| Agent time saving (%) | 68% |
| 6. Cost saving per message | |
| Labor cost/msg with LLM | $0.03 |
| Labor cost/msg without LLM | $0.08 |
| In house: Total cost in-house model assisted (labor + recommendation) | $0.03 |
| In house: Cost saving in-house model assisted vs unassisted ($) | $0.06 |
| In house: Cost saving in-house model assisted vs unassisted (%) | 66% |
| 3rd Party: Total cost 3rd party model assisted (labor + recommendation) | $0.04 |
| 3rd Party: Cost saving 3rd party model assisted vs unassisted ($) | $0.05 |
| 3rd Party: Cost saving 3rd party model assisted vs unassisted (%) | 56% |

Table 9: Cost Saving Estimates
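As a check on sections 5 and 6 of Table 9, the expected handle time and per-message savings can be reproduced with a few lines; the inputs are taken from the table, and rounding to the displayed precision is ours.

```python
# Expected agent handling time per message with LLM assistance (Table 9, section 5).
t_assisted = 0.70 * 5 + 0.15 * 10 + 0.15 * 30          # = 9.5 seconds
t_unassisted = 30.0
time_saving = 1 - t_assisted / t_unassisted             # ~0.68 -> 68%

# Per-message labor cost at a $10/hour agent rate (Table 9, section 6).
labor_per_sec = 10.0 / 3600
cost_without = labor_per_sec * t_unassisted              # ~$0.083
cost_with_inhouse = labor_per_sec * t_assisted + 0.0016  # labor + in-house recommendation
cost_with_3rdparty = labor_per_sec * t_assisted + 0.010  # labor + 3rd-party recommendation

saving_inhouse = cost_without - cost_with_inhouse        # ~$0.055 -> ~66% saving
saving_3rdparty = cost_without - cost_with_3rdparty      # ~$0.047 -> ~56% saving
```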
| Item | Value |
|---|---|
| 1. Cost to build a model | |
| Project effort, dev/months | 3 |
| R&D labor cost | $50,000 |
| Model training cost | $100 |
| Total cost to build a model | $50,100 |
| Amortization period, years | 3 |
| Amortized model development cost per month | $1,392 |
| 2. Cost to maintain model | |
| Project effort, dev/months | 3 |
| R&D labor cost | $50,000 |
| Model training cost | $100 |
| Total cost of model maintenance per year | $50,100 |
| Cost of model maintenance per month | $4,175 |
| Monthly R&D cost | |
| Monthly R&D cost to build and maintain | $5,567 |
| US AI developer FTE rate | $200,000 |

Table 10: R&D Cost Estimates

| Item | Value |
|---|---|
| In-house: # of model assisted messages to break even | 905,313 |
| In-house: Time to break even, months | 0.24 |
| 3rd Party: # of model assisted messages to break even | 1,078,347 |
| 3rd Party: Time to break even, months | 0.29 |

Table 11: Break Even Point Estimates
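The break-even figures in Table 11 follow from the build cost in Table 10 and the per-message savings above; a sketch of that arithmetic (small differences from the table come from rounding the intermediate savings):

```python
# Break-even: how many assisted messages until per-message savings cover the
# $50,100 cost of building the model (Table 10).
build_cost = 50_100.0
saving_inhouse = 0.0553    # per-message saving, in-house (from the sketch above)
saving_3rdparty = 0.0469   # per-message saving, 3rd party

msgs_inhouse = build_cost / saving_inhouse      # ~906k (table: 905,313)
msgs_3rdparty = build_cost / saving_3rdparty    # ~1.07M (table: 1,078,347)

# Expressed in months of traffic at 3,750,000 agent messages per month (Table 9).
monthly_agent_messages = 3_750_000
months_inhouse = msgs_inhouse / monthly_agent_messages      # ~0.24
months_3rdparty = msgs_3rdparty / monthly_agent_messages    # ~0.29
```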
nussbaum-thom-etal-2023-application
Application-Agnostic Language Modeling for On-Device {ASR}
https://aclanthology.org/2023.acl-industry.25
On-device automatic speech recognition systems face several challenges compared to server-based systems. They have to meet stricter constraints in terms of speed, disk size and memory while maintaining the same accuracy. Often they have to serve several applications with different distributions at once, such as communicating with a virtual assistant and speech-to-text. The simplest solution to serve multiple applications is to build application-specific (language) models, but this leads to an increase in memory. Therefore, we explore different data- and architecture-driven language modeling approaches to build a single application-agnostic model. We propose two novel feed-forward architectures that find an optimal trade off between different on-device constraints. In comparison to the application-specific solution, one of our novel approaches reduces the disk size by half, while maintaining speed and accuracy of the original model.
# Application-Agnostic Language Modeling for On-Device ASR

Markus Nußbaum-Thom, Lyan Verwimp, Youssef Oualil
Apple
{mnussbaumthom,lverwimp,youalil}@apple.com

## Abstract

On-device automatic speech recognition systems face several challenges compared to server-based systems. They have to meet stricter constraints in terms of speed, disk size and memory while maintaining the same accuracy. Often they have to serve several applications with different distributions at once, such as communicating with a virtual assistant and speech-to-text. The simplest solution to serve multiple applications is to build application-specific (language) models, but this leads to an increase in memory. Therefore, we explore different data- and architecture-driven language modeling approaches to build a single application-agnostic model. We propose two novel feed-forward architectures that find an optimal trade off between different on-device constraints. In comparison to the application-specific solution, one of our novel approaches reduces the disk size by half, while maintaining speed and accuracy of the original model.

## 1 Introduction

On-device Automatic Speech Recognition (ASR) is subject to several constraints: it should return accurate results in a reasonable time frame without consuming too much memory and disk space. State-of-the-art research often is accuracy focused, while resource-constrained applications also need to take care of performance and size. Finding an architecture that reaches all constraints is not trivial. Another challenge is that ASR systems often serve a large variety of requests. ASR systems can serve an on-device Virtual Assistant (VA) but also allow dictated messages, notes, e-mails, etc. – we refer to the latter application as Speech-to-text (STT). Typical VA requests are knowledge-driven questions such as *"how old is Barack Obama?"* or commands, e.g. *"play some Lady Gaga music"*. STT requests are longer and of a different nature than typical VA requests. The solution that yields the best accuracy for both VA and STT is to train separate models for each application, but additional model size is prohibitive. We aim to develop a single model instead.

In this paper, we focus on a Neural Network Language Model (NNLM) in the ASR system. Our baseline is a Fixed-size Ordinally-Forgetting Encoding (FOFE) feed-forward NNLM (Zhang et al., 2015). In ASR, the search space can easily increase so we have to limit the context length used in decoding to reach an acceptable latency and lower memory. Given this short context length, we find that the FOFE feed-forward LM is competitive to the Transformer (Vaswani et al., 2017) in terms of accuracy and better in terms of latency. Irie (2020) has also shown that Transformers are less robust to short context lengths.

To build a single Application-Agnostic (AA) NNLM, we developed a method to optimally sample training data. We sample data from different sources, e.g. anonymized and randomly sampled user requests from opted-in users for VA and STT and artificial requests spanning many different domains that focus on improving the tail of the distribution. The data-driven approach tries to find the optimal balance between the application-specific data sources by creating a balanced development set and distributing the sampling weights based on the importance of each data source and each application on that development set. Training a single FOFE NNLM on the combined dataset can lead to accuracy degradations, even with a larger model or longer training.

We explore two extensions to the baseline FOFE NNLM: firstly, a Mixture FOFE NNLM (Oualil and Klakow, 2017; Irie et al., 2018) which is composed of an ensemble of parallel sub-networks and a mixture sub-network generating normalized probabilities across all sub-networks. These mixture weights are used to compute a weighted average of the ensemble before the softmax output.
We explore two extensions to the baseline FOFE NNLM: firstly, a Mixture FOFE NNLM (Oualil and Klakow, 2017; Irie et al., 2018) which is composed of an ensemble of parallel sub-networks and a mixture subnetwork generating normalized probabilities across all sub-networks. These mixture weights are used to compute a weighted average of the ensemble before the softmax output. The second extension is an Application-Dependent (AD) FOFE NNLM 268 that has different sub-networks for each application. At training time, data and gradients are (back- )propagated only through the corresponding subnetwork belonging to an application. At inference time, the way the user invokes ASR tells us which application is needed (wake phrase = VA, microphone button = STT) and only the sub-network belonging to the active application is used. Both approaches are able to match or outperform the application-specific model. While the accuracy of the mixture NNLM is slightly better than the ADNNLM the situation is reversed in terms of speed. The contributions of this paper are as follows: - We propose a method to optimally combine application-specific data sources to train an application-agnostic LM in Section 3. - We propose two novel FOFE-based neural LMs in Section 4 that each match the accuracy of two application-specific language models. - In Section 6 we compare the novel NNLMs accuracy and speed against the baseline FOFE and state-of-art Transformer models. We do this for three different languages - US English, German and Mandarin Chinese - and three types of test sets (see Section 5 for more information). ## 2 Related Work We start by discussing related work on modeling several domains/tasks at once. Many pattern recognition tasks are imbalanced since data from different categories do not occur at the same frequency. Therefore, the less frequent categories are not well represented in the training data (Anand et al., 1993; Johnson and Khoshgoftaar, 2019), which results in a sub-optimal model. Data-driven approaches to deal with the data imbalance include under- and over-sampling (Van Hulse et al., 2007). Refinements of these methods select data more intelligently (Kubat and Matwin, 1997; Chawla et al., 2002; Zhang and Mani, 2003; Barandela et al., 2004). Others approaches modify the training and/or model architecture. Curriculum Learning (Bengio et al., 2009; Shi et al., 2015) emphasizes data by fine-tuning towards the corpus consumed by the end of training. Smith et al. (2020) experiment with multi-task learning, data augmentation and a classifier combined with single-task models to appropriately model several skills in a conversation agent. Balancing through interleaved sampling of different corpora was investigated in (Xing et al., 2022) as well as model-based approaches like multi-task and weighted learning, which allows the model to self-control the impact of different corpora. Other ways to increase the modeling power are using a Mixture of Experts (Shazeer et al., 2017; Zhou et al., 2022) or ensemble networks (Oualil and Klakow, 2017; Irie et al., 2018; Ganaie et al., 2022). The choice of architecture for language modeling has also been a recurrent topic of research. Early neural LMs use feed-forward layers (Schwenk and Gauvain, 2002; Bengio et al., 2003). Mikolov et al. (2010) introduced recurrent neural LMs that can in principle use unlimited history. 
These networks are trained with backpropagation through time which 'unrolls' the network in time for gradient computation, but this leads to vanishing gradients (Bengio et al., 1993; Pascanu et al., 2013), essentially limiting the history that can be learned from. Gated recurrent architectures (Sundermeyer et al., 2012; Cho et al., 2014) mitigate this problem. Recent extensions of the feed-forward architecture have been proposed that alleviate different disadvantages. Zhang et al. (2015) proposed a FOFE, which represents a sequence of words as a vector with fixed length that captures the word order. They show that feed-forward networks with FOFE encoding outperform recurrent models in language modeling. The most widely-used architecture in recent years is the Transformer (Vaswani et al., 2017) that combines feed-forward layers with multi-head attention, residual connections, and layer normalization (Ba et al., 2016). It has been successfully applied to ASR, see e.g. (Irie et al., 2019; Beck et al., 2020). In this paper, we compare FOFE feed-forward LMs with Transformer LMs and two extensions of the base FOFE feed-forward LMs.

## 3 Data Balancing

The ASR system in this paper serves two applications, VA and STT, for which we observe very different linguistic patterns. To demonstrate these differences, we calculate statistics on two English development sets. Each data set contains 23k anonymized queries and is randomly sampled from real user data similarly to the test sets described in Section 5.
, α*I ∈ [0, 1] be the sampling weights such that PI i=1 αi = 1 and ρ(i) ∈ {*D, A*} indicating if the text source belongs to STT or VA. The redistribution probability masses βD and βA for STT and VA respectively are calculated to serve the joint application. These probability masses are determined by the optimal weights that minimize the perplexity of the linear Application-Specific (AS) language model combination on the balanced development set. The application-specific probability mass allocated by each application can be formalized as: $$\overline{{{\alpha}}}_{D}:=\sum_{i,\rho(i)=D}\alpha_{i}\qquad\mathrm{and}\qquad\overline{{{\alpha}}}_{A}:=\sum_{i,\rho(i)=A}\alpha_{i}.$$ Now consider the ratio between the redistribution and application-specific probability mass: $$\gamma_{A}:=\frac{\beta_{A}}{\overline{\alpha}_{A}}\qquad\text{and}\qquad\gamma_{D}:=\frac{\beta_{D}}{\overline{\alpha}_{D}}.$$ These ratios determine the scaling of the origi These ratios determine the scaling of the original sampling weights to achieve balancing. Balanced sampling weights are then determined by a re-normalization of the scaled sampling weights: $$\lambda_{i}:=\frac{\gamma_{\rho(i)}\alpha_{i}}{\sum_{j}\gamma_{\rho(j)}\alpha_{j}},\qquad i=1,\ldots,I.$$ The heldout and training set for NNLM training is then randomly sampled from the text sources according to the balanced sampling weights. ![3_image_0.png](3_image_0.png) ## 4 Application-Agnostic And Application-Dependent Fofe Nnlms In this section three different types of NNLM architectures are introduced for on-device ASR. In the following let w N 1:= w1*, . . . , w*N be a word sequence. All NNLM architectures considered here follow a similar scheme. In each architecture a word embedding is followed by a FOFE layer (Zhang et al., 2015). Let α > 0 be the forgetting factor of the FOFE layer and em be the word embedding of word wm then zm := zm−1 + α · em generates the FOFE encoding. Afterwards an ngram context of the FOFE encoding is generated by concatenating n subsequent FOFE encodings for each position: zm−n+1*, . . . , z*m. Next, this context is flattened and passed to the hidden layers. The baseline FOFE NNLM shown in Figure 3a applies a stack of feed-forward layers to the flattened FOFE n-gram context. The output of the last feed-forward layer is fed to a projection layer for dimension reduction before the final softmax layer. This architecture is used for the AS-NNLM, where each application has its own NNLM, as well as for the Application-Agnostic (AA) NNLM, which is trained on balanced data for both applications. Figure 3b shows the mixture NNLM, which has M parallel sub-networks and a mixture subnetwork. Each sub-network is a stack of feedforward layers. The mixture sub-network is also a stack of feed-forward layers which finish with a softmax output of dimension M to produce mixture weights for each of the parallel sub-networks, similarly to (Oualil and Klakow, 2017; Irie et al., 2018; Zhou et al., 2022) except that the mixture combines FOFE networks. The subsequent layer averages the output of all parallel sub-networks scaled by the corresponding weights of the mixture sub-network softmax output. Figure 3c shows the Application-Dependent NNLM (AD-NNLM). This architecture uses the application information to train a NNLM in a multitask style. This NNLM has a separate sub-network and softmax output biases for each application. For training we follow a multi-task approach. 
## 4 Application-Agnostic and Application-Dependent FOFE NNLMs

In this section three different types of NNLM architectures are introduced for on-device ASR. In the following let $w_1^N := w_1, \ldots, w_N$ be a word sequence. All NNLM architectures considered here follow a similar scheme. In each architecture a word embedding is followed by a FOFE layer (Zhang et al., 2015). Let $\alpha > 0$ be the forgetting factor of the FOFE layer and $e_m$ be the word embedding of word $w_m$; then $z_m := z_{m-1} + \alpha \cdot e_m$ generates the FOFE encoding. Afterwards an n-gram context of the FOFE encoding is generated by concatenating n subsequent FOFE encodings for each position: $z_{m-n+1}, \ldots, z_m$. Next, this context is flattened and passed to the hidden layers.

The baseline FOFE NNLM shown in Figure 3a applies a stack of feed-forward layers to the flattened FOFE n-gram context. The output of the last feed-forward layer is fed to a projection layer for dimension reduction before the final softmax layer. This architecture is used for the AS-NNLM, where each application has its own NNLM, as well as for the Application-Agnostic (AA) NNLM, which is trained on balanced data for both applications.

Figure 3b shows the mixture NNLM, which has M parallel sub-networks and a mixture sub-network. Each sub-network is a stack of feed-forward layers. The mixture sub-network is also a stack of feed-forward layers which finish with a softmax output of dimension M to produce mixture weights for each of the parallel sub-networks, similarly to (Oualil and Klakow, 2017; Irie et al., 2018; Zhou et al., 2022) except that the mixture combines FOFE networks. The subsequent layer averages the output of all parallel sub-networks scaled by the corresponding weights of the mixture sub-network softmax output.

Figure 3c shows the Application-Dependent NNLM (AD-NNLM). This architecture uses the application information to train a NNLM in a multi-task style. This NNLM has a separate sub-network and softmax output biases for each application. For training we follow a multi-task approach. The information of the application for each data sample is known and used to select the sub-network and softmax output bias corresponding to the application and only back-propagate through a part of the NNLM. At inference time, data are forwarded through the corresponding sub-network and softmax output bias belonging to the active application.

A word-level NNLM holds the majority of parameters in the embedding. Therefore, the disk size for the mixture and AD-NNLM should increase slightly compared to the baseline architecture. Also the AD-NNLM speed should not increase since it is equivalent to the baseline architecture at inference time.

## 5 Experimental Setup

The training data of our LMs consists of different data sources: anonymized and randomly sampled user requests from both VA and STT that are manually or automatically transcribed, along with synthetic tail-focused datasets. For the latter, we sample from domain-dependent templates and lists of entities that can fill those slots, both of which are derived from real user data. As mentioned in the introduction, we train NNLMs for three languages: US English, German and Mandarin Chinese.

For our NNLMs, we obtain weights according to the method described in Section 3. For the AS models we sample 6B words while for the AA and AD models we sample 12B words. We run Bayesian hyperparameter optimization and select the final values based on optimal size-accuracy trade off. As a result, the models have a slightly different number of parameters, but we show in Section 6 that this does not impact results noticeably. All models have 4 feed-forward layers and an embedding size of 256 - we tie the input and output embedding weights to reduce disk size (Press and Wolf, 2017). The hidden size is 768 for the base FOFE model, 512 for the AD FOFE and mixture FOFE and 256 for the Transformer. The Transformer has the same configuration as Vaswani et al. (2017) and uses 4 attention heads of size 256. We use the top 100k most frequent words as vocabulary. To speed up training, we use Noise Contrastive Estimation (NCE) (Liza and Grzes, 2017) which is replaced by softmax during inference. We train our NNLMs with Block Momentum Stochastic Gradient Descent (Chen and Huo, 2016) with an initial learning rate of 0.256 for AS, AA and AD FOFE and 1.024 for AA Mixture FOFE. For AS models the optimization converges after 64 epochs while for AA and AD models the optimum is delayed to 128 epochs. We keep the initial learning rate fixed for 16 epochs for AS and 64 epochs for the other models and apply a learning rate decay of 0.7 if the heldout perplexity increases for 4 epochs. To stabilize the training a clip norm of 6.0 is applied and the number of NCE samples is set to 4096.

For evaluation, we test on three types of test sets: (1) VA and (2) STT, which consist of user requests sampled according to the distribution that we observe in our VA/STT and thus contain many head queries, and (3) Tail, which is designed to focus on queries with tail entities. Since these do not occur often in our user data, Tail consists of synthetic requests sampled from the same templates and entity lists that generate the synthetic training data. The requests cover a wide variety of domains such as music, sports and home automation and the audio is generated using Text-to-Speech. Table 1 shows the number of words in each test set.
| | VA | STT | Tail |
|---|---|---|---|
| English | 226k | 292k | 454k |
| German | 130k | 154k | 204k |
| Mandarin | 221k | 219k | 368k |

We evaluate the accuracy of our models using Word Error Rate (WER) and latency using P95 real-time factor (RTF). If y is the duration of the audio signal and x the time it takes to decode y, RTF is defined as x/y. P95 refers to the 95th percentile and thus captures the latency of the most difficult queries. We run each test three times and average the RTF numbers to capture outliers. The ASR system uses a deep convolutional neural network acoustic model (AM) as described in (Huang et al., 2020; Pratap et al., 2020). For the AS models, we decode the VA and Tail test sets with a VA-specific NNLM and the STT test sets with a STT-specific NNLM. During decoding, the context length of the NNLMs is limited to 8 words to meet the memory and latency constraints of on-device ASR. We perform a single decoding pass, combining the AM scores with the NNLM scores using optimized weights. We can achieve better WERs by interpolating the NNLM with an n-gram LM trained on tail data and by adding a rescoring pass, but since we want to compare the impact of using different neural architectures, we remove any factors that might obscure that comparison.

## 6 Results

We first evaluate the accuracy of the different neural architectures. Table 2 reports the WER for different models on the VA, STT and Tail test sets, along with the number of parameters of the model to give an estimate of the size on disk. Note that for the AS FOFE models, we have twice as many parameters as the AA FOFE models because we train two separate models, one for VA+Tail and one for STT. We first observe that moving from AS to AA FOFE and thus reducing the number of parameters by half gives in some cases 1.5-3.8% WER degradation. Secondly, even though the Transformer architectures have been optimized using Bayesian optimization similar to the FOFE-based models, they give mixed results. For English VA and STT we observe WER improvements while for all other setups we see large degradations.

| LM | #Par | VA | STT | Tail |
|---|---|---|---|---|
| English | | | | |
| AS FOFE | 58M | 4.02 | 3.68 | **17.48** |
| AA FOFE | 29M | 4.11 | 3.68 | 17.78 |
| AA Transf. | 27M | **3.99** | **3.56** | 47.56 |
| AA M-FOFE | 37M | **3.99** | **3.56** | 17.53 |
| AD FOFE | 31M | **3.99** | 3.62 | 17.51 |
| German | | | | |
| AS FOFE | 58M | 5.32 | 6.47 | **29.46** |
| AA FOFE | 29M | 5.32 | 6.35 | 29.93 |
| AA Transf. | 27M | 11.76 | 23.34 | 34.42 |
| AA M-FOFE | 37M | 5.29 | **6.26** | 30.37 |
| AD FOFE | 31M | **5.25** | 6.33 | 32.36 |
| Mandarin | | | | |
| AS FOFE | 58M | 5.17 | 6.04 | 39.96 |
| AA FOFE | 29M | 5.25 | 6.27 | 38.84 |
| AA Transf. | 27M | 8.88 | 13.29 | 40.66 |
| AA M-FOFE | 37M | 5.13 | **5.94** | 38.16 |
| AD FOFE | 31M | **5.12** | 6.05 | **36.41** |
| Mandarin (equal number of parameters) | | | | |
| AS FOFE | 68M | 5.14 | 6.00 | 39.45 |
| AA FOFE | 34M | 5.26 | 6.27 | 38.68 |
| AA Transf. | 34M | 9.03 | 13.40 | 40.38 |
| AA M-FOFE | 34M | **5.10** | 6.02 | 38.54 |
| AD FOFE | 34M | 5.12 | **5.98** | **36.48** |

The AD FOFE model gives the best accuracy on VA for all languages, while the AA Mixture FOFE gives the best accuracy on STT, but the differences between the two architectures are small. They outperform the baseline AS/AA FOFE and Transformer models in almost all cases. The only exceptions are the English and German Tail test sets: the AS FOFE models still achieve the best accuracy, probably because infrequent queries benefit the most from doubling the number of parameters. As explained in Section 5, we choose hyperparameters based on the optimal accuracy-size trade off. As a result, the number of parameters of the models at the top of Table 2 are not exactly the same.
To ensure that the small size differences do not impact the results significantly, we evaluated results for Mandarin models that all have 34M parameters each and added the results at the bottom of Table 2. We observe the same trends: the AD FOFE and AA Mixture FOFE give the best results. We confirm that increasing the number of parameters does not lead to better results.

Finally, we report the relative change in P95 RTF (P50 RTF showed the same trend) compared to the baseline AA FOFE model in Table 3. Since RTF is hardware-dependent, we mostly care about relative changes compared to the baseline. We observe that both the Transformer and the Mixture FOFE are significantly slower than the baseline. For the English test sets, the Transformer is faster than the Mixture FOFE, while for German and Mandarin speed depends on the test set. The AD FOFE gives the fastest inference speed of the proposed models and even outperforms the vanilla FOFE on English VA and all German test sets, while keeping the degradation limited in the other setups.

| LM | VA | STT | Tail |
|---|---|---|---|
| English | | | |
| AA Transf. | -18.00 | -21.75 | -11.30 |
| AA M-FOFE | -23.79 | -31.54 | -17.95 |
| AD FOFE | 7.40 | -8.04 | 4.66 |
| German | | | |
| AA Transf. | -19.59 | -13.92 | -24.85 |
| AA M-FOFE | -17.77 | -31.45 | -79.83 |
| AD FOFE | 7.84 | 3.41 | 5.58 |
| Mandarin | | | |
| AA Transf. | -10.06 | -14.04 | -8.90 |
| AA M-FOFE | -9.89 | -30.21 | -36.23 |
| AD FOFE | -2.11 | 1.63 | -3.83 |

## 7 Conclusion

We aim to develop a single NNLM that can serve both VA and STT requests with the same accuracy and speed as application-specific NNLMs, while reducing the disk size approximately by half. We develop a method to optimally balance the data of the VA and STT applications, and propose two novel FOFE feed-forward architectures. The Application-Agnostic Mixture FOFE and the Application-Dependent FOFE both outperform the baseline FOFE and Transformer models in terms of accuracy, and the latter is also competitive in terms of latency.

## Limitations

The two proposed models (AD FOFE and AA FOFE Mixture) have been tested on more languages than the ones mentioned in this paper, but the comparison with Transformer models has not been done for every language. This paper only uses word-level LMs. We have done preliminary experiments with subword-level LMs but more extensive investigation is needed to draw proper conclusions.

## Ethics Statement

This paper focuses on the LM of a real-world VA and as such the results cannot be exactly reproduced: we are not aware of any public dataset that mimics our setup, e.g. ASR that can serve both VA and STT applications, training data in several languages that exceeds 6B words along with test sets of several hundreds of thousands of words sampled from real user data, etc. All data have been anonymized and randomly sampled, and human transcription to create the test sets is only performed from opted-in user data.

## References

Rangachari Anand, Kishan G. Mehrotra, Chilukuri K. Mohan, and Sanjay Ranka. 1993. An improved algorithm for neural network classification of imbalanced training sets. *IEEE Transactions on Neural Networks*, 4(6):962–969.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. *arXiv preprint arXiv:1607.06450*.
Michiel Bacchiani, Michael Riley, Brian Roark, and Richard Sproat. 2006. MAP adaptation of stochastic grammars. *Computer Speech and Language*, 20:41–68.
Ricardo Barandela, Rosa M. Valdovinos, J. Salvador Sánchez, and Francesc J. Ferri. 2004. The Imbalanced Training Sample Problem: Under or over Sampling? In *Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR)*, pages 806–814.
Eugen Beck, Ralf Schlüter, and Hermann Ney. 2020. LVCSR with Transformer Language Models. In *Proceedings Interspeech*, pages 1798–1802.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Language Model. *Journal of Machine Learning Research*, 3:1137–1155.
Yoshua Bengio, Paolo Frasconi, and Patrice Simard. 1993. The problem of learning long-term dependencies in recurrent networks. In *Proceedings of the IEEE International Conference on Neural Networks*, pages 1183–1195.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In *Proceedings of the Annual International Conference on Machine Learning (ICML)*, volume 382, pages 41–48.
Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. SMOTE: Synthetic Minority Over-sampling Technique. *Journal of Artificial Intelligence Research*, 16:321–357.
Kai Chen and Qiang Huo. 2016. Scalable Training of Deep Learning Machines by Incremental Block Training with Intra-block Parallel Optimization and Blockwise Model-Update Filtering. In *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 5880–5884.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 1724–1734.
M.A. Ganaie, Minghui Hu, A.K. Malik, M. Tanveer, and P.N. Suganthan. 2022. Ensemble Deep Learning: A Review. *Engineering Applications of Artificial Intelligence*, 115.
Bo-June Hsu. 2007. Generalized linear interpolation of language models. In *IEEE Workshop on Automatic Speech Recognition Understanding (ASRU)*, pages 136–140.
Zhen Huang, Tim Ng, Leo Liu, Henry Mason, Xiaodan Zhuang, and Daben Liu. 2020. SNDCNN: Self-Normalizing Deep CNNs with Scaled Exponential Linear Units for Speech Recognition. In *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 6854–6858.
Kazuki Irie. 2020. *Advancing neural language modeling in automatic speech recognition*. Ph.D. thesis, RWTH Aachen University, Germany.
Kazuki Irie, Shankar Kumar, Michael Nirschl, and Hank Liao. 2018. RADMM: Recurrent Adaptive Mixture Model with Applications to Domain Robust Language Modeling. In *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 6079–6083.
Kazuki Irie, Albert Zeyer, Ralf Schlüter, and Hermann Ney. 2019. Language Modeling with Deep Transformers. In *Proceedings Interspeech*, pages 3905–3909.
Justin M. Johnson and Taghi M. Khoshgoftaar. 2019. Survey on deep learning with class imbalance. *Journal of Big Data*, 6.
Miroslav Kubat and Stan Matwin. 1997. Addressing the Curse of Imbalanced Training Sets: One-Sided Selection. In *Proceedings of the International Conference on Machine Learning (ICML)*, pages 179–186.
Farhana Ferdousi Liza and Marek Grzes. 2017. Improving Language Modelling with Noise Contrastive Estimation. In *AAAI Conference on Artificial Intelligence*.
goyal-garera-2023-building
Building Accurate Low Latency {ASR} for Streaming Voice Search in {E}-commerce
https://aclanthology.org/2023.acl-industry.26
Automatic Speech Recognition (ASR) is essential for any voice-based application. The streaming capability of ASR becomes necessary to provide immediate feedback to the user in applications like Voice Search. LSTM/RNN and CTC based ASR systems are very simple to train and deploy for low latency streaming applications but have lower accuracy when compared to the state-of-the-art models. In this work, we build accurate LSTM, attention and CTC based streaming ASR models for large-scale Hinglish (blend of Hindi and English) Voice Search. We evaluate how various modifications in vanilla LSTM training improve the system's accuracy while preserving the streaming capabilities. We also discuss a simple integration of end-of-speech (EOS) detection with CTC models, which helps reduce the overall search latency. Our model achieves a word error rate (WER) of 3.69% without EOS and 4.78% with EOS, with ~1300 ms (~46.64%) reduction in latency.
# Building Accurate Low Latency Asr For Streaming Voice Search Abhinav Goyal, Nikesh Garera Flipkart {abhinav.goyal,nikesh.garera}@flipkart.com ## Abstract Automatic Speech Recognition (ASR) plays a crucial role in voice-based applications. For applications requiring real-time feedback like Voice Search, streaming capability becomes vital. While LSTM/RNN and CTC based ASR systems are commonly employed for lowlatency streaming applications, they often exhibit lower accuracy compared to state-of-theart models due to a lack of future audio frames. In this work, we focus on developing accurate LSTM, attention, and CTC based streaming ASR models for large-scale Hinglish (a blend of Hindi and English) Voice Search. We investigate various modifications in vanilla LSTM training which enhance the system's accuracy while preserving its streaming capabilities. We also address the critical requirement of end-ofspeech (EOS) detection in streaming applications. We present a simple training and inference strategy for end-to-end CTC models that enables joint ASR and EOS detection. The evaluation of our model on Flipkart's Voice Search, which handles substantial traffic of approximately 6 million queries per day, demonstrates significant performance gains over the vanilla LSTM-CTC model. Our model achieves a word error rate (WER) of 3.69% without EOS and 4.78% with EOS while also reducing the search latency by approximately ∼1300 ms (equivalent to 46.64% reduction) when compared to an independent voice activity detection (VAD) model. 1 Introduction As an e-commerce platform in India, we need to cater to a variety of user bases, and a big part of that consists of users who cannot or do not want to type while interacting with the app, e.g., while searching for a product. For such users, interaction via a voice-based interface becomes an essential feature requiring an accurate and efficient Automatic Speech Recognition (ASR) system. Recent years have witnessed the popularity of end-to-end ASR models, which have achieved state-of-the-art results (Li et al., 2022). These models offer simplified training and inference processes and have demonstrated higher accuracy compared to traditional pipelines with separate acoustic, pronunciation, and language models. Common approaches for end-to-end ASR models include CTC (Connectionist Temporal Classification), AED (Attention-based Encoder-Decoder), and RNNT (RNN-Transducer) (Graves et al., 2006; Chan et al., 2016; Graves et al., 2013). However, streaming capability plays a pivotal role in choosing the most suitable ASR model. While non-streaming models can leverage the entire audio for text inference, streaming models have access only to past context, which can result in reduced accuracy. Nevertheless, streaming models provide immediate feedback, a critical requirement for consumer-facing applications like Voice Search. Additionally, low inference latency is essential to ensure a user-friendly experience, as delayed feedback can adversely impact usability. Another challenge in streaming ASR applications is accurately detecting the end of speech (EOS). Conventional methods rely on standalone Voice Activity Detection (VAD) models, which operate independently from the ASR system and may not offer optimal accuracy. In this work, we focus on developing a streaming ASR system for large-scale Hinglish Voice Search. Our objective is to enhance accuracy and reduce latency while preserving streaming capabilities. 
Specifically, we propose modifications to an LSTM and CTC based ASR system, aiming to bridge the gap between streaming and non-streaming ASR models. We also present a simple training and inference strategy that enables joint ASR and EOS detection within end-to-end CTC models, effectively reducing user-perceived latency in voice search. The contributions of this research can be summarized as follows:

- Development of an accurate and efficient streaming ASR model based on LSTM, MHA (Multi-Head Attention), and CTC for Hinglish Voice Search;
- Introduction of a straightforward training and inference strategy to enable joint ASR and EOS detection within end-to-end CTC models, addressing the need for accurate EOS detection in streaming applications.
- Analysis of the impact of model modifications on reducing the performance gap between streaming and non-streaming ASR models.

Next, we discuss some related work in Section 2. Section 3 describes the model architecture we use, the EOS integration and the inference method. We describe the dataset and experimental setup in Section 4. Finally, we conclude with a discussion on results and limitations in Section 5.

## 2 Related Work

CTC, the first E2E approach developed for ASR (Graves et al., 2006), has been widely used over the last few years (Soltau et al., 2016; Li et al., 2018). Although it provides simplicity, it makes a conditional independence assumption, namely that the output token at any time does not depend on past output tokens, which can make it sub-optimal. AED and RNNT models relax this assumption by leveraging past output tokens. While AED models like LAS (Listen, Attend and Spell) (Chan et al., 2016) work very well for non-streaming tasks, they require complex training strategies for streaming scenarios (Raffel et al., 2017; Chiu and Raffel, 2017). RNNT (Graves et al., 2013) provides a natural alternative in streaming scenarios but has high training complexity and inference latency, rendering it difficult to use in a real-world setting without complex optimizations/modifications (Li et al., 2019; Mahadeokar et al., 2021). There have been many attempts to improve the accuracy of CTC models that preserve their training and inference simplicity. Fernández et al. (2007) leverages hierarchical structure in the speech by adding auxiliary losses to train a CTC-based acoustic-to-subword model. Their hierarchical CTC (HCTC) model predicts different text segmentations in a fine-to-coarse fashion. Recent studies have explored the use of attention in CTC models to implicitly relax the conditional independence assumption by enriching the features using other time frames. Das et al. (2018) uses component attention and an implicit language model to enrich the context, while Salazar et al. (2019) evaluates a fully self-attention-based network with CTC. In this work, we explore how augmenting an LSTM-based network with windowed self-attention can help improve the transcription while preserving streaming capability. Another line of work in improving the output of streaming models is second-pass rescoring, which uses an additional (usually non-streaming) component to re-rank the streaming model's hypotheses (Sainath et al., 2020). While we also rescore the candidate hypotheses at the last step, our system doesn't employ any external acoustic model to do so and leverages the hierarchical losses that are part of the model itself. For addressing EOS detection, conventional approaches use VAD models with a threshold on the amount of silence.
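As a concrete (non-normative) illustration of such silence-threshold end-pointing, the sketch below applies a fixed silence budget to per-frame energies; the frame rate, energy threshold and silence budget are illustrative assumptions and are not values from this paper.

```python
import numpy as np

def vad_endpoint(frame_energies, energy_thresh=0.01, max_silence_frames=70):
    """Naive VAD end-pointing: once speech has started, declare end-of-speech
    as soon as the trailing run of low-energy frames exceeds a fixed budget.
    Illustrative values: at 10 ms per frame, 70 frames is roughly 700 ms of silence."""
    speech_started = False
    silence_run = 0
    for t, energy in enumerate(frame_energies):
        if energy >= energy_thresh:
            speech_started = True
            silence_run = 0
        else:
            silence_run += 1
        # A mid-utterance pause longer than the budget triggers (possibly premature) termination.
        if speech_started and silence_run >= max_silence_frames:
            return t
    return len(frame_energies) - 1  # stream ended without hitting the silence budget

# A pause in the middle of a query can end the stream before the user resumes speaking.
energies = np.concatenate([np.full(50, 0.2), np.zeros(90), np.full(40, 0.2)])
print(vad_endpoint(energies))  # fires during the pause, cutting off the final words
```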
This may lead to early termination of user speech. Shannon et al. (2017) address this by training an EOQ (End-of-Query) classifier, which performs better than VAD but is still optimized independently of the ASR system. VAD based on output CTC labels has also been explored to detect EOS based on the length of the non-speech (blank) region (Yoshimura et al., 2020). Li et al. (2020) jointly train an RNNT model for EOS detection by using an extra *< /s >* token with early and late penalties. Prediction of the *< /s >* token by the model during inference serves as the signal for EOS. We follow a similar approach, where we train the model with early and late penalties. During inference, we use a dynamic threshold on the *< /s >* probability to detect the endpoint before decoding the text.

## 3 Methodology

## 3.1 Model Architecture

Inspired by Fernández et al. (2007), we build a 3-level HCTC architecture based on LSTM and attention as shown in Fig. 1. Going in a fine-to-coarse fashion, the model predicts characters (73 tokens), short subwords (300 tokens) and long subwords (5000 tokens) at the respective levels. Each level consists of an N-layer LSTM-attention block (N being 5, 5 and 2) followed by a linear softmax layer. A time convolution layer with a kernel size of 5 and a stride of 3 after the second level reduces the number of time steps to one-third. This helps emit longer subwords at the third level by increasing the context and receptive field of a time frame.

![2_image_0.png](2_image_0.png)

Along with the HCTC loss, we use label smoothing (Szegedy et al., 2016) by adding a negative entropy term to it. This mitigates overconfidence in the output distributions, leading to improved transcription. Mathematically, the loss for a given training sample, (*x, y*) = (x, {y_char, y_s300, y_s5k}), is:

$$L(x,y)=\sum_{k}\Big[\mathrm{CTCLoss}(x,y_{k})-\lambda\sum_{t}\mathrm{Entropy}\big(P_{k}(\cdot\,|\,x_{t})\big)\Big]$$
$$=\sum_{k}\Big[-\log P(y_{k}\,|\,x)+\lambda\sum_{t,v}P_{k}(v\,|\,x_{t})\log P_{k}(v\,|\,x_{t})\Big]$$

For an N-layer LSTM-attention block (Fig. 2), we stack N LSTM layers with 700 hidden dimensions, followed by a dot-product based multi-headed self-attention layer (MHA) (Vaswani et al., 2017). We use 8 attention heads and project the input to 64-dimensional key, query and value vectors for each head. We project the 512 (8x64) dimensional output back to 700 dimensions and pass it through a linear layer with ReLU activation. To retain the model's streaming capabilities, we restrict the attention to a 5-frame window (t±2) instead of the complete input, i.e., for input features f_t, we use Q(f_t) as the query vector and K(f_{t−2:t+2}), V(f_{t−2:t+2}) as key-value vectors, where Q, K and V are linear projections. To improve the gradient flow, we add a skip connection and layer normalization after each layer.

![2_image_1.png](2_image_1.png)

We use 80 filterbanks from a standard log-mel-spectrogram as inputs, computed with a window of 20ms, a stride of 10ms, and an FFT size of 512. To prevent overfitting, we use time-frequency masking (Park et al., 2019) during training. We also stack five adjacent frames with a stride of three, giving an input feature vector of 400 dimensions with a receptive field of 60ms and stride of 30ms for each time step. Windowed MHA and time convolution increase the overall receptive field and stride to 780ms and 90ms respectively. Consequently, our model has a forward lookahead of 390ms when deployed in a streaming mode.
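A minimal PyTorch sketch of the training objective above — a CTC term per level plus the negative-entropy (label-smoothing) regularizer — is given below. Tensor shapes, the λ value and the averaging over time/batch are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def hctc_label_smoothing_loss(level_log_probs, level_targets, input_lengths,
                              target_lengths, lam=0.01):
    """level_log_probs: list of (T_k, B, V_k) log-softmax outputs, one per level
    (characters, short subwords, long subwords). Returns the summed loss over levels."""
    total = 0.0
    for log_probs, targets, in_lens, tgt_lens in zip(
            level_log_probs, level_targets, input_lengths, target_lengths):
        ctc = F.ctc_loss(log_probs, targets, in_lens, tgt_lens,
                         blank=0, reduction="mean", zero_infinity=True)
        # Negative-entropy term: lam * mean_t sum_v P(v|x_t) log P(v|x_t).
        # Adding it penalizes over-confident (low-entropy) frame-level distributions.
        probs = log_probs.exp()
        neg_entropy = (probs * log_probs).sum(dim=-1).mean()
        total = total + ctc + lam * neg_entropy
    return total
```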
## 3.2 Speech End-Pointing

Once we have a trained ASR model, we augment the vocabulary with an additional *< /s >* token and use forced alignment to get the ground truth speech endpoints. We use the output from the 1st (character) level of the ASR model for alignment as it has the least lookahead and empirically works better than the output from other blocks. We append the extra *< /s >* token at the end of each transcript and add early-late (EL) penalties (Li et al., 2020) to the training loss to fine-tune the model for a few more iterations. EL penalties penalize the model for predicting *< /s >* too early or too late. During online inference, we determine if the current time step (t) is the speech endpoint by evaluating the following conditions:

- There is at least one word in the output text, to avoid termination before the user starts speaking;
- *< /s >* is the most probable token among all vocabulary items, i.e., $P_t(</s>) \ge P_t(v)$ for all vocabulary items $v$ - call this an EOS peak;
- $P_t(</s>) \ge \mathrm{threshold}_t = \frac{\alpha}{1 + n_t/\beta}$, where $n_t$ is the number of EOS peaks before time $t$.

Thus, the earliest time step satisfying the above conditions is the EOS. Here, α controls the aggressiveness of EOS detection: decreasing α decreases the EOS threshold for all time steps, resulting in an earlier EOS signal. Empirically, we observe that the model gives a lower probability to the *< /s >* token after each EOS peak. To address this, we add an $n_t/\beta$ term that gradually reduces the threshold whenever an EOS peak appears, giving an additional (but marginal) reduction in latency. For audios where the above conditions are never satisfied, a combination of a small independent VAD model and a maximum time limit works as a backup.

## 3.3 Decoding And Re-Scoring

For each chunk of the input audio stream, we use prefix beam search, with a beam size of 1000 hypotheses, to decode the text from the probability distribution given by the last (subword 5000) level. We use the same probability distribution to detect EOS as well. When we observe an EOS or the stream ends, a 5-gram KenLM and the HCTC loss (sum of CTC losses from all levels) are used to re-rank and select the best hypothesis from the top 100 candidates. We use grid search to find the weights of the scores.

## 4 Dataset And Training Setup

Queries from E-commerce Voice Search are our primary source of data. We also collect speech from other sources like on-call customer support, crowdsourced read-speech, etc., to augment the training data. We transcribe all the utterances, except read-speech, using an existing ASR system and manually correct them. The ASR system that generates reference transcripts progressively improves as part of model iterations. Collectively, the training datasets amount to ∼14M audio-text pairs (8M from the target domain and 6M from other sources) or roughly 22.5k hours of audio. For evaluation, we randomly sample ∼19k audios from e-commerce voice search queries, transcribe them manually (without any reference text) and reduce the human error by using multiple iterations of verification. We categorize the test set into clean and noisy subsets, containing ∼16k and ∼3k samples respectively. Clean utterances are audios where only one speaker's speech is intelligible. Noisy utterances are those where more than one speaker has intelligible speech (overlapping or non-overlapping). In noisy utterances, the primary speaker is the user whose utterance is more relevant for the e-commerce voice search application. Note that clean utterances may also have non-intelligible secondary speakers.
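Returning to the end-pointing rule of Section 3.2, the sketch below shows the per-frame decision as it could be implemented at inference time; the function and variable names are illustrative, and the default α and β values correspond to the operating point discussed in the results.

```python
def is_eos(p_eos, p_max, num_words, n_peaks, alpha=0.8, beta=2.0):
    """Decide whether the current frame is the speech endpoint.
    p_eos: P_t(</s>); p_max: max probability over the vocabulary at this frame;
    num_words: words decoded so far; n_peaks: EOS peaks observed before this frame."""
    if num_words < 1:          # never terminate before any speech has been decoded
        return False
    if p_eos < p_max:          # </s> must be the most probable token (an "EOS peak")
        return False
    threshold = alpha / (1.0 + n_peaks / beta)   # threshold decays after each earlier peak
    return p_eos >= threshold
```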
We train and evaluate the model to transcribe only the primary speaker's speech while ignoring the rest. For training the KenLM and Sentencepiece models, we use a large corpus comprising text from various sources like transcribed voice search queries and on-call customer support queries, customer support chatbot queries, and product catalogues. We use a cyclical learning rate (LR) (Smith, 2017) with the Adam optimizer to train the ASR model for 200k iterations with a batch size of ∼55 minutes. Training the model on two A100 (40 GB) GPUs takes ∼50 hours. For EOS detection, we fine-tune the model with EL penalties for an additional 48k iterations (∼12 hours).

## 5 Results And Discussion

![3_image_0.png](3_image_0.png)

We report WER and mean EOS latency on the test set for evaluating the performance of our model in Table 1. We get the best results when the model is first pre-trained on all the data and then fine-tuned on the target domain, followed by fine-tuning with EL penalties. To see how α and β affect the results, we do a sweep over both parameters and plot mean EOS latency vs %WER in Fig. 3. For further analysis, we consider the point with α = 0.8 and β = 2.0 that gives us a WER of 4.78% and a reduction of 1333 ms in mean EOS latency with an EOS coverage (fraction of audios receiving an EOS signal) of 64.13%. Our model performs significantly better than the Google Speech-to-Text API, which is expected since Google's API is trained for the open domain, but our data is in the e-commerce domain. The evaluation utterances also have a lot of noise, which our model is more robust to as it is trained on similar data.

| Model | %WER (all) | %WER (clean) | %WER (noisy) | Mean EOS Latency (all) | Mean EOS Latency (clean) | Mean EOS Latency (noisy) |
|---|---|---|---|---|---|---|
| Google Speech-to-Text API* | 13.14 | 12.62 | 16.30 | | | |
| LSTM-attention HCTC (all data) | 4.03 | 3.11 | 9.52 | 2858 | 2457 | 5080 |
| + fine-tune on target domain | 3.75 | 3.03 | 8.02 | 2858 | 2457 | 5080 |
| + EL penalty (our best model) | 3.69 | 2.95 | 8.12 | 2858 | 2457 | 5080 |
| + EOS detection (without n/β term) | 4.77 | 4.10 | 8.75 | 1565 | 1268 | 3215 |
| + EOS detection (with n/β term) | 4.78 | 4.19 | 8.32 | 1525 | 1242 | 3096 |
| Reduction in Latency | | | | 1333 | 1215 | 1985 |

Table 1: %WER and mean EOS latency (ms) on the test set (all/clean/noisy).

| Model | %WER | ∆WER |
|-----------------------|--------|--------|
| LSTM-attention HCTC | 5.37 | |
| - Windowed MHA | 5.94 | 9.60% |
| - HCTC rescoring | 6.19 | 4.04% |
| - HCTC loss | 6.62 | 6.50% |
| - Skip connections (= Baseline LSTM CTC) | 7.68 | 13.80% |

Table 2: Change in WER when each component is removed. All results are with LM rescoring using the same KenLM.

To understand how modifications in the architecture contribute to improving the accuracy of the vanilla LSTM CTC model, we conduct an ablation study and report the WER in Table 2. We train these models for 200k iterations on a reduced dataset of ∼5500 hours sampled from the target domain. As seen from the table, windowed MHA improves the WER by 9.6%. Intuitively, the improvement comes from an increased receptive field (780ms with vs 180ms without attention) and the ability to extract better context from neighbouring frames using self-attention. The HCTC loss forces the model to learn the hierarchical structure in the speech at multiple levels - from characters to short subwords and then long subwords. The model can then utilize this structure to achieve more accurate predictions.
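For reference, the ∆WER column in Table 2 appears consistent with computing each component's contribution as a relative change between consecutive ablation rows; a small sanity check under that assumption:

```python
def rel_improvement(wer_without, wer_with):
    """Relative WER improvement from adding a component:
    100 * (WER_without - WER_with) / WER_without."""
    return 100.0 * (wer_without - wer_with) / wer_without

print(round(rel_improvement(5.94, 5.37), 2))  # windowed MHA: ~9.60
print(round(rel_improvement(6.62, 5.94), 2))  # HCTC loss + rescoring together: ~10.3
print(round(rel_improvement(7.68, 6.62), 2))  # skip connections: ~13.80
print(round(rel_improvement(7.68, 5.37), 2))  # all modifications vs the baseline: ~30
```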
Adding auxiliary losses at intermediate levels helps the convergence as well. The hierarchical loss also facilitates the rescoring since the combination of losses acts like an ensemble of ranking models. Together, the HCTC loss and rescoring give a relative improvement of 10.28%. Finally, skip connections improve the gradient flow in training, which further helps the convergence, improving the WER by 13.80%. These modifications, when combined, result in a significant total relative improvement of ∼30% in WER over the baseline.

## 5.1 Comparison With Other Models

In addition to the baseline LSTM CTC (Table 2), we also compare our model with a non-streaming BiLSTM version, and a streaming Conformer CTC inspired by Li et al. (2021). For Conformer CTC, we use the causal encoder-only network and train it using the CTC loss. As evident from the results in Table 4, the discussed modifications help bridge the gap between streaming LSTM and non-streaming BiLSTM CTC models. The streaming Conformer CTC also performs only marginally better than our LSTM-attention HCTC model while it has much higher training complexity and inference latency. We evaluate a bidirectional version of our model to analyse the consistency of these improvements. Observe that the same modifications improve the BiLSTM CTC model by a relative 13.4%, vs 30% for the LSTM CTC model, because BiLSTM already has access to the full future context, limiting the scope of improvement. Even then, it performs significantly better than a vanilla BiLSTM CTC model and only slightly worse than a Transformer AED+CTC model (Nakatani, 2019). Thus, these modifications also reduce the gap between LSTM and transformer-based ASR models for voice search in both streaming and non-streaming settings.

| Ground truth | ASR output | Reason for error |
|--------------------------------|-----------------------------|---------------------------------------|
| *no search impact* | | |
| mixer machine | mixture machine | wrong pronunciation |
| ooni kapda | baby ooni kapda | multiple speakers with similar voice |
| sasta sasta mobile vivo ka | sasta mobile vivo ka | overlapping speakers |
| choli photos choli photos | choli photos | repetition after EOS |
| chappal slipper | chappal | repetition after EOS (in other lang.) |
| *-ve search impact* | | |
| great cycle | grey cycle | background noise |
| capacitor | cap sitter | wrong pronunciation |
| earing | car earing | multiple speakers with similar voice |
| atlas three chaubis inch | headlight three chaubis pin | overlapping speakers |
| joota | guitar | two eligible primary speakers |
| oppo a thirty three back cover | oppo a thirty three | additional information after EOS |

Table 3: Examples of different types of errors. The upper section of the table shows examples where the mistakes don't have any search impact, and the lower section shows the ones having a negative effect. Red indicates incorrect words and insertion errors, and orange indicates deletions and correct counterparts of erroneous words.

| Model | %WER (all) | %WER (clean) | %WER (noisy) |
|---------------------------------|--------|-------|-------|
| *Streaming models* | | | |
| LSTM CTC | 7.68 | 6.50 | 14.70 |
| LSTM-atten. HCTC | 5.37 | 4.57 | 10.12 |
| Str. Conformer CTC | 5.30 | 4.33 | 11.01 |
| *Non-streaming models* | | | |
| BiLSTM CTC | 5.37 | 4.64 | 9.68 |
| BiLSTM-atten. HCTC | 4.65 | 3.97 | 8.69 |
| Transf. AED+CTC (Nakatani, 2019) | 4.37 | 3.30 | 10.72 |

Table 4: Our model in comparison with others (details in Sec. 5.1). All models are made similar in size and trained on ∼5500 hours.
One explanation could be that transformers usually have an advantage in capturing long-term dependencies. This doesn't help as much for speech recognition on short utterances as in our dataset, where audios are usually 4-6 seconds long with an average of 3.34 spoken words. For a fair comparison, we ensure all models are similar in size and use the same KenLM for rescoring.

## 5.2 Error Analysis

To understand the errors better, we analyze 50 random utterances each from the clean and noisy subsets where the model makes mistakes. The most common reasons for errors in the clean subset are wrong pronunciation and background noise. For noisy utterances, multiple speakers with a similar voice, overlapping speakers, and more than one eligible primary speaker contribute to additional errors. Table 3 lists some examples demonstrating these reasons. We also observe that around 62% of the mistakes in the evaluation set have no negative impact on search. In these cases, the errors are usually in stop words or produce a usable variant of the reference word, like singular vs plural or the same word with a different spelling. When using EOS detection, there are additional errors due to early termination in 2.24% of the utterances. In all such cases, EOS is detected prematurely because of a pause in the speech. Usually, after this pause, the user repeats their query, adds more information, or corrects it. In around 47% of the cases, not capturing this additional speech has no negative impact on search. In the remaining 53% of cases, i.e. 1.19% of the total samples, the missed utterance usually has more information about the query, added by the user, that could have helped in refining the search results.

## 5.3 Conclusions

This work focuses on developing a robust and efficient streaming ASR model for Hinglish Voice Search. We achieve this by utilizing an LSTM-attention architecture and employing the HCTC loss. We explore architectural modifications that help bridge the accuracy gap between streaming and non-streaming LSTM-based ASR models. Our proposed model performs on par with a streaming conformer-based system but offers the advantage of lower latency. Additionally, we present a straightforward method to integrate End-of-Speech (EOS) detection with CTC-based models, requiring only a small number of additional training iterations and utilizing simple thresholding during inference. The simplicity and low latency of our model contribute to a fast and accurate voice search experience, making it an appealing solution for practical applications.

## Limitations And Future Work

In our study, we focused on a high-resource setting with access to approximately 22.5k hours of labeled speech data. While we compared our models with conformer and transformer-based AED and CTC models, we did not include RNNT models due to their higher compute resource requirements. To accommodate deployment constraints, we employed a smaller model with approximately 60 million parameters, which limited its performance. Moving forward, our future work aims to explore the potential benefits of leveraging large unsupervised datasets and larger models to further enhance our system and extend its applicability to other Indian languages, which typically have less available data compared to Hinglish.
Building upon our previous success in adapting a non-streaming model for end-to-end speech-to-intent detection in customer support voicebots (Goyal et al., 2022), we are motivated to investigate the feasibility of developing a single joint model for Automatic Speech Recognition (ASR), End-of-Speech (EOS) detection, and Spoken Language Understanding (SLU). Additionally, we are keen on exploring the development of multilingual ASR models. ## References William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 4960–4964. IEEE. Chung-Cheng Chiu and Colin Raffel. 2017. Monotonic chunkwise attention. arXiv preprint arXiv:1712.05382. Amit Das, Jinyu Li, Rui Zhao, and Yifan Gong. 2018. Advancing connectionist temporal classification with attention modeling. In 2018 IEEE International conference on acoustics, speech and signal processing (ICASSP), pages 4769–4773. IEEE. Santiago Fernández, Alex Graves, and Jürgen Schmidhuber. 2007. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI 2007. Abhinav Goyal, Anupam Singh, and Nikesh Garera. 2022. End-to-end speech to intent prediction to improve E-commerce customer support voicebot in Hindi and English. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language* Processing: Industry Track, pages 579–586, Abu Dhabi, UAE. Association for Computational Linguistics. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In *Proceedings of the* 23rd international conference on Machine learning, pages 369–376. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In *2013 IEEE international* conference on acoustics, speech and signal processing, pages 6645–6649. Ieee. Bo Li, Shuo-yiin Chang, Tara N Sainath, Ruoming Pang, Yanzhang He, Trevor Strohman, and Yonghui Wu. 2020. Towards fast and accurate streaming end-toend asr. In *ICASSP 2020-2020 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6069–6073. IEEE. Bo Li, Anmol Gulati, Jiahui Yu, Tara N Sainath, ChungCheng Chiu, Arun Narayanan, Shuo-Yiin Chang, Ruoming Pang, Yanzhang He, James Qin, et al. 2021. A better and faster end-to-end model for streaming asr. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5634–5638. IEEE. Jinyu Li, Guoli Ye, Amit Das, Rui Zhao, and Yifan Gong. 2018. Advancing acoustic-to-word ctc model. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5794– 5798. IEEE. Jinyu Li, Rui Zhao, Hu Hu, and Yifan Gong. 2019. Improving rnn transducer modeling for end-to-end speech recognition. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 114–121. Jinyu Li et al. 2022. Recent advances in end-to-end automatic speech recognition. APSIPA Transactions on Signal and Information Processing, 11(1). Jay Mahadeokar, Yuan Shangguan, Duc Le, Gil Keren, Hang Su, Thong Le, Ching-Feng Yeh, Christian Fuegen, and Michael L Seltzer. 2021. 
Alignment restricted streaming recurrent neural network transducer. In *2021 IEEE Spoken Language Technology* Workshop (SLT), pages 52–59. IEEE. Tomohiro Nakatani. 2019. Improving transformerbased end-to-end speech recognition with connectionist temporal classification and language model integration. In *Proc. Interspeech*. Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le. 2019. Specaugment: A simple data augmentation method for automatic speech recognition. *arXiv* preprint arXiv:1904.08779. Colin Raffel, Minh-Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. 2017. Online and lineartime attention by enforcing monotonic alignments. In *International Conference on Machine Learning*, pages 2837–2846. PMLR. Tara N Sainath, Yanzhang He, Bo Li, Arun Narayanan, Ruoming Pang, Antoine Bruguier, Shuo-yiin Chang, Wei Li, Raziel Alvarez, Zhifeng Chen, et al. 2020. A streaming on-device end-to-end model surpassing server-side conventional model quality and latency. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6059–6063. IEEE. Julian Salazar, Katrin Kirchhoff, and Zhiheng Huang. 2019. Self-attention networks for connectionist temporal classification in speech recognition. In Icassp 2019-2019 ieee international conference on acoustics, speech and signal processing (icassp), pages 7115–7119. IEEE. Matt Shannon, Gabor Simko, Shuo-Yiin Chang, and Carolina Parada. 2017. Improved end-of-query detection for streaming speech recognition. In *Interspeech*, pages 1909–1913. Leslie N Smith. 2017. Cyclical learning rates for training neural networks. In *2017 IEEE winter conference* on applications of computer vision (WACV), pages 464–472. IEEE. Hagen Soltau, Hank Liao, and Hasim Sak. 2016. Neural speech recognizer: Acoustic-to-word lstm model for large vocabulary speech recognition. *arXiv preprint* arXiv:1610.09975. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing* systems, 30. Takenori Yoshimura, Tomoki Hayashi, Kazuya Takeda, and Shinji Watanabe. 2020. End-to-end automatic speech recognition integrated with ctc-based voice activity detection. In *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal* Processing (ICASSP), pages 6999–7003. IEEE.
san-etal-2023-plate
{PLA}t{E}: A Large-scale Dataset for List Page Web Extraction
https://aclanthology.org/2023.acl-industry.27
Recently, neural models have been leveraged to significantly improve the performance of information extraction from semi-structured websites. However, a barrier for continued progress is the small number of datasets large enough to train these models. In this work, we introduce the PLAtE (Pages of Lists Attribute Extraction) benchmark dataset as a challenging new web extraction task. PLAtE focuses on shopping data, specifically extractions from product review pages with multiple items encompassing the tasks of: (1) finding product list segmentation boundaries and (2) extracting attributes for each product. PLAtE is composed of 52,898 items collected from 6,694 pages and 156,014 attributes, making it the first large-scale list page web extraction dataset. We use a multi-stage approach to collect and annotate the dataset and adapt three state-of-the-art web extraction models to the two tasks comparing their strengths and weaknesses both quantitatively and qualitatively.
# PLAtE: A Large-Scale Dataset For List Page Web Extraction

Aidan San∗ University of Virginia aws9xm@virginia.edu
Yuan Zhuang∗ University of Utah yuan.zhuang@utah.edu
Colin Lockard Amazon clockard@amazon.com
Yangfeng Ji University of Virginia yangfeng@virginia.edu
Jan Bakus† Amazon
David Ciemiewicz Amazon ciemo@amazon.com
Kevin Small Amazon smakevin@amazon.com
Sandeep Atluri Amazon satluri@amazon.com
Heba Elfardy Amazon helfardy@amazon.com

∗ The work was completed while Aidan and Yuan were interning at Amazon.
†Jan left Amazon, but the work was completed while he was working at Amazon.

## Abstract

Recently, neural models have been leveraged to significantly improve the performance of information extraction from semi-structured websites. However, a barrier for continued progress is the small number of datasets large enough to train these models. In this work, we introduce the PLAtE (Pages of Lists Attribute Extraction) benchmark dataset as a challenging new web extraction task. PLAtE focuses on shopping data, specifically extractions from product review pages with multiple items encompassing the tasks of: (1) finding product-list segmentation boundaries and (2) extracting attributes for each product. PLAtE is composed of 52,898 items collected from 6,694 pages and 156,014 attributes, making it the first large-scale list page web extraction dataset. We use a multi-stage approach to collect and annotate the dataset and adapt three state-of-the-art web extraction models to the two tasks, comparing their strengths and weaknesses both quantitatively and qualitatively.

## 1 Introduction

Semi-structured data extraction, i.e., web extraction, is the task of extracting data found in templated text fields from HTML pages. Once extracted, the structured data can be utilized in various downstream tasks such as information retrieval, recommendation, and question answering. While recent work has shown the potential of neural approaches for web extraction (Lin et al., 2020; Zhou et al., 2021; Li et al., 2021), there are very few publicly available large-scale datasets suitable for training and evaluation of these approaches, limiting progress in this area.

![0_image_0.png](0_image_0.png)

Figure 1: A single item (i.e. product) from a list page for a board game (from gamesradar.com).

Additionally, most existing datasets (Hao et al., 2011; Hotti et al., 2021) focus on one subset of the problem, namely *detail page extraction*. In this paper, we introduce the PLAtE (Pages of Lists Attribute Extraction) dataset1, which specifically targets the task of *list page* extraction and focuses on product review pages in the shopping vertical, i.e., multi-product review pages. To elaborate, item pages can be broadly categorized into two classes: *detail pages* and *list pages*. Detail pages provide detailed information about a single item. List pages comprise a list of items with abridged detail, organized under a single theme, e.g. "best board games". This organization facilitates direct comparison of each item and allows the extracted data to be easily integrated into recommender, question answering, or dialogue systems powering digital assistants (Linden et al., 2003; Gupta et al., 2019; Zhang et al., 2018). Extracted product data can be utilized by both content creators (publishers) as well as customers looking to make purchase decisions.2 Because PLAtE is built from list (multi-item) pages, we can evaluate two tasks: segmentation and attribute extraction.
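To make the two tasks concrete before they are defined formally below, the sketch below shows one possible (purely illustrative) representation of a list page as labeled DOM leaf nodes that are grouped into products by segmentation boundaries; the field names and example values are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LeafNode:
    xpath: str
    text: str
    labels: set = field(default_factory=set)   # any subset of the attribute labels
    is_product_start: bool = False              # segmentation boundary marker

def group_into_products(nodes):
    """Group DOM leaf nodes into per-product segments using the boundary markers,
    collecting the labeled attribute values for each product."""
    products, current = [], None
    for node in nodes:
        if node.is_product_start or current is None:
            current = {"name": [], "review_text": [], "purchase_link": []}
            products.append(current)
        for label in node.labels:
            current[label].append(node.text)
    return products

page = [
    LeafNode("/html/body/div[1]/h2/a", "AirPods Pro",
             {"name", "purchase_link"}, is_product_start=True),
    LeafNode("/html/body/div[1]/p", "Great noise cancellation ...", {"review_text"}),
    LeafNode("/html/body/div[2]/h2", "Galaxy Buds", {"name"}, is_product_start=True),
]
print(group_into_products(page))
```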
In the segmentation task, the model should determine the boundaries of each product, i.e. where the information for each product begins and ends. In the attribute extraction task, each node in the HTML DOM tree can be assigned any number of three labels: product name, product review text, and product link. The dataset is comprised of 52, 898 products from 6, 694 pages split into: training, development, and held-out test sets. To evaluate the dataset, we adapt two stateof-the-art neural web extraction models; MarkupLM (Li et al., 2021) and DOM-LM (Deng et al., 2022), as well as explore a text-based Transformer model, RoBERTa (Liu et al., 2019). We achieve an F1-score of 0.787 and 0.735 for segmentation and attribute extraction tasks respectively. We evaluate the potential of multi-task learning to improve performance and find that multi-task learning improves recall but slightly decreases precision, for both tasks. To summarize, our contributions are: (1) creating the first large-scale *list page* web extraction dataset, (2) adapting state-of-the-art neural web extraction approaches to the segmentation and attribute extraction tasks, and (3) qualitatively and quantitatively comparing the performance of different models on the two presented tasks; segmentation and attribute extraction. ## 2 Related Work The vast majority of previous web extraction datasets (e.g., SWDE (Hao et al., 2011), the Klarna Product Page Dataset (Hotti et al., 2021), and WDC (Petrovski et al., 2016)) are composed of single item pages. Multiple item or list page datasets are much less common. Zhu et al. (2006) created a dataset of 771 list pages, while Dhillon et al. (2011) created a small dataset (BSCM) of about 30 pages from 4 verticals: Books, Seminars, CS Faculty and MLConfs. Furche et al. (2012) built a dataset of 431 pages from two verticals: UK used car dealer websites collected from a used car aggregator website and UK real-estate websites collected from the yellow pages. To the best of our knowledge, PLAtE is the largest multi-item web extraction dataset. Additionally, most previous web extraction datasets assume a single DOM node has at most a single label. However, this assumption does not hold true in many product pages. For example, for many products, the product name's text-span is also a link to the product. Our dataset does not have this assumption; we allow a node to have multiple labels. Finally, most existing list page datasets are not publicly available. Table 1 compares the different web extraction datasets. From the methods side, web extraction has first been tackled using wrapper induction methods that create a set of rules (wrappers) to transform unstructured input into structured output (Kushmerick, 1997; Furche et al., 2014; Zheng et al., 2007; Azir and Ahmad, 2017; Gulhane et al., 2011; Carlson and Schafer, 2008; Furche et al., 2012). Recently, a number of advances have been made by utilizing neural-based approaches to construct a representation for each HTML node for the extraction task (Lockard et al., 2020; Lin et al., 2020; Zhou et al., 2021; Li et al., 2021; Deng et al., 2022; Xie et al., 2021). 
Other tasks related to semi-structured information extraction include boilerplate removal (Leonhardt et al., 2020), extraction from HTML tables (Cafarella et al., 2018; Deng et al., 2020; Wang et al., 2021; Herzig et al., 2020), and segmenting pages, e.g., using clustering followed by string alignment (Álvarez et al., 2008), optimization based on divide and conquer (Bing et al., 2013), and Hierarchical Conditional Random Fields (CRFs) (Zhu et al., 2006). ## 3 Plate Benchmark In this work, we tackle two tasks: *(1) segmentation*, i.e., identifying the boundaries of the individual products in a given page and *(2) attribute extraction*, i.e., identifying the individual attributes for each identified product. For each given product, we extract the following three attributes: (1) *product name*: This refers to the name of the product, e.g. "iPhone 11", (2) *review text*: This is generally a high-level review or general description of the product, and (3) *purchase link*: a link (or button) to a merchant's website (e.g., Amazon, Ebay). We commonly see generic text such as "Buy Here", the name of the merchant such as "Amazon", or the name of the product. Similar to prior work, we perform classification on the leaf nodes; i.e., only nodes that contain text are passed to the classification models. ## 3.1 Plate Construction Process To construct PLAtE, we started with 270M documents from the large, publicly available web crawl Common Crawl.3 We then filtered down to 6, 694 3https://commoncrawl.org/ | Name | Pages/Vert. | Sites/Vert. | Records/Vert. | Tot. Pages | Year | Mult. Item? | |------------------------------|---------------|---------------|-----------------|--------------|--------|---------------| | SWDE (Hao et al., 2011) | 4,405-20,000 | 10 | 4,405-20,000 | 124,291 | 2011 | ✗ | | WDC (Petrovski et al., 2016) | 576 | ? | 576 | 576 | 2016 | ✗ | | Klarna (Hotti et al., 2021) | 51,701 | 8,175 | 51,701 | 51,701 | 2019 | ✗ | | LDST (Zhu et al., 2006) | 771 | ? | ∼ 8600 | 771 | 2006 | ✓ | | (Dhillon et al., 2011) | 5-15 | 5-8 | ∼ 400 | ∼ 30 | 2011 | ✓ | | AMBER (Furche et al., 2012) | 150-281 | 100-150 | 1608-2785 | 431 | 2012 | ✓ | | PLAtE | 6,694 | 43 | 52,898 | 6,694 | 2020 | ✓ | candidate pages from 43 internet websites by (1) removing duplicate URLs and non-English pages, (2) filtering out non-multi-product pages using a linear classifier with word embedding and TF-IDF features as well as keywords-based heuristics (e.g., "best", "compare", etc.), (3) selecting top/popular websites using the Tranco List (Pochat et al., 2019), (4) selecting sites with the highest number of list pages, and (5) filtering out pages with inappropriate (i.e. sensitive or offensive) content. After selecting the candidate pages, we performed the initial segmentation and attribute extraction by using CSS selector rules.4 Two expert annotators used a custom annotation tool to annotate a representative page from each site by selecting a CSS selector for each attribute. The annotated CSS selectors were then used to extract the attributes from the rest of the pages from the same site. Multiple rounds of quality checks were performed in order to ensure the quality of the final selector rules. The final step in creating PLAtE used Amazon Mechanical Turk annotations in order to remove any errors introduced by the rule-based extraction step. For the Mechanical Turk annotation task, we presented a web page to the annotators. 
We first asked the annotators a set of page-level questions to ensure that the web page is a valid multi-product review page. We then asked the annotators to verify that a piece of highlighted text within the web page should be extracted as an attribute and asked them to indicate if any text of the attribute of interest was not highlighted, i.e., was not captured by the rules.5 Overall, 20% of workers that attempted the 4CSS selector rules are patterns composed of HTML tag names and HTML class names used to select one or more HTML elements (e.g., [.product-slide p]) 5We only qualified annotators from English speaking countries that completed at least 100 prior annotation tasks with an acceptance rate of 95% or more, and who passed a custom Table 2: An example PLAtE annotation. | Site | theinventory.com | |----------------|---------------------------------------| | URL | theinventory.com/best-iphone ... | | Product Index | 4 | | Attr. Name | Product Name | | Num. Extracted | 1 | | XPath | /html/body/div[3]/ ... /span/a/strong | | Text | ['AirPods Pro'] | Table 3: Statistics of the train, development, and test sets. | Split | # Sites | # Pages | # Products | # Attrs | |---------|-----------|-----------|--------------|-----------| | Train | 28 | 4, 202 | 35, 383 | 103, 731 | | Dev | 5 | 655 | 6, 038 | 18, 019 | | Test | 10 | 1, 837 | 11, 477 | 34, 264 | | All | 43 | 6, 694 | 52, 898 | 156, 014 | qualification task were qualified, resulting in 77 annotators. To identify spammers, we blocked any annotator that spent less than 20 seconds on average per annotation task. Finally, to minimize annotation costs while ensuring high-quality evaluation and development data, we used one annotation per task for the training set and three annotations per task for the development and test sets. Majority vote was used to aggregate the annotations from the development and held-out test sets. To build the final dataset, we split the data such that the training, development, and held-out test sets have approximately the same distribution in terms of number of products and pages. Moreover, we ensured that sites from the same website family, e.g., *thespruce.com* and *thespuceeats.com*, appear in the same split. Table 2 shows a sample PLAtE annotation, while Table 3 shows statistics of the dataset. ![3_image_0.png](3_image_0.png) | Attribute | Tag | Text | Crowd Annotations | Gold | |--------------|-------|---------------------------------------------------------------|----------------------|--------| | Review Text | <p> | And those are our recommendations for the best mattresses!... | [True, False, False] | False | | Product Link | <a> | Tempur-Pedic | [True, True, False] | True | | Review Text | <p> | And those are our recommendations for best outdoor grills... | [True, True, False] | False | | Product Link | <a> | Original Sprout's miracle detangler | [True, False, False] | True | | Attribute | Missing(%) | Valid (%) | Fleiss-Kappa | |--------------|--------------|-------------|----------------| | Product Name | 2.7 | 97.0 | 0.596 | | Review Text | 2.3 | 85.1 | 0.728 | | Product Link | 6.0 | 96.7 | 0.560 | ## 3.2 Dataset Analysis We manually analyzed PLAtE annotations to assess their quality. In the annotation task, as seen in Figure 2, the first two annotation questions were designed to filter out cases of mislabeled or malformed snippets. 
Specifically, the first question asked whether the annotation snippet contained at least one product while the second question asked if the snippet contained more than one product. Annotators indicated that 99.6% of snippets contained at least one product and that only 0.8% contained more than one product, indicating that the vast majority of tasks sent to annotators were valid product snippets. The next set of annotation questions are meant to identify if any annotations were missing from the presented snippet. Between 3% and 6% of the extracted attributes were missing some text spans. Finally, the last set of questions asked the annotators to select the text-spans that matched an attribute from a set of check-boxes (all text-spans matching the CSS rule). Overall, more than 85% of the text-spans were correctly matched to the corresponding attribute. We also calculated the inter-annotator agreement using Fleiss-Kappa (Fleiss and Cohen, 1973) to measure the quality of the annotation guidelines and found that the inter-annotator agreement is moderate for "*Product Name*" and "*Product Link*" and substantial for "*Review Text*". For "*Product Link*", some product name spans were also product link spans, causing confusion among the raters. The annotation statistics are shown in Table 5. Percentage of missing attributes is relatively low indicating that our CSS selector rules have high recall. Additionally, percentage of valid extractions is quite high especially for *Product Name* and *Product Link* indicating that our CSS selector rules have high precision. Table 4 shows examples of annotator disagreement. The third example is challenging because the text is about outdoor grills in general, and not a particular outdoor grill product, so it should be annotated as False. In the fourth example, the annotators likely missed the correct label (*True*), because the link does not follow the standard format of "*Buy Now*" or a retailer's name such as "*Amazon*". ## 4 Models We evaluate the performance of three recent neural models: RoBERTa (Liu et al., 2019), DOM-LM (Deng et al., 2022) and MarkupLM (Li et al., 2021) on PLAtE. | Attribute Extraction | Segmentation | | | | | | | | |------------------------|----------------|--------|---------|--------|--------|---------|----------|----------| | Model | Test P | Test R | Test F1 | Test P | Test R | Test F1 | Test ARI | Test NMI | | RoBERTa | 0.843 | 0.652 | 0.735 | 0.692 | 0.665 | 0.678 | 0.693 | 0.744 | | DOMLM | 0.815 | 0.655 | 0.726 | 0.718 | 0.728 | 0.722 | 0.716 | 0.764 | | MarkupLM | 0.839 | 0.620 | 0.711 | 0.769 | 0.805 | 0.787 | 0.771 | 0.870 | RoBERTa is a Transformer-based (Vaswani et al., 2017) model pre-trained with natural language texts sourced from the BookCorpus (Zhu et al., 2015) and Wikipedia. The pre-training task is masked language modeling. In our experiments, the input to RoBERTa is a sequence of text tokens from the DOM tree. Consequently, it does not utilize other types of information from the DOM tree such as XPaths or HTML tags. DOM-LM is designed to generate contextualized representations for HTML documents. It uses RoBERTa as the base encoder and is pre-trained over the SWDE dataset (Hao et al., 2011) with masked language modeling over the text tokens as well as the DOM tree nodes. To encode a DOM tree, DOM-LM first slices the tree into a set of subtrees such that important tree-level context is kept in each subtree. 
Then each node is represented by its tag, text, class/id attributes, as well as a positional matrix based on its position in the DOM tree. MarkupLM is another RoBERTa-based model. The input to MarkupLM is a node represented by both an XPath embedding and its text. The XPath embedding is created by embedding each tag and subscript in the XPath separately and then concatenating and passing them through a FFN layer. The model was pre-trained on three tasks: (1) masked markup language modeling, (2) node relation prediction, and (3) title page matching using 24 million pages from Common Crawl. For the segmentation task, we label each of the nodes as *begin* (B) to denote the first text-span of a product, or *other* (O) for the rest of the nodes. We then apply a softmax layer to the logits and train with cross-entropy loss. For the attribute extraction task, the model predicts the attribute labels from the logits using a multi-label sigmoid layer that was trained with binary cross-entropy loss. As a multi-label classification task, the model can assign each node with any subset of the labels or no label. In both tasks, we begin with one of the three pretrained models and then fine-tune on the training set of PLAtE. ## 5 Experiments For both segmentation and attribute extraction tasks, we report the precision, recall, and F1score. Results are macro-averaged over the different classes. In addition, for segmentation, we report clustering metrics following (Bing et al., 2014), where the attributes in the same segment are considered to be in the same cluster. In our dataset, when looking at two adjacent products in a page, there is a single node which we label as a "segmentation boundary". In reality, there can be multiple nodes which appear between two adjacent products and split the products apart. If multiple valid segmentation boundaries for a product are possible, F1-score will penalize the model for picking any segmentation boundary which is not labelled as such. On the other hand, the clustering metrics provide a relaxation which checks that product attributes are assigned to the correct product. For clustering metrics, we report adjusted rand index (ARI) (Hubert and Arabie, 1985) and normalized mutual information (NMI) (Strehl and Ghosh, 2002) where a higher number indicates better performance. 6,7 Overall, we observe that MarkupLM performs well on the segmentation task yielding scores of 0.787, and 0.771 and 0.870 for F1, ARI and NMI respectively while DOM-LM performs worse on this task with scores of 0.722 and 0.716 and 0.764 for F1, ARI and NMI respectively. This can likely be attributed to the fact that MarkupLM was pretrained on the task of relation prediction between nodes. Identifying if two nodes have a parent-child, sibling, or ancestor-descendant relation could help the model distinguish nodes within the same product from nodes of different products, and conse- Model Attribute P R F1 Product Name 0.858 0.645 0.737 RoBERTa Review Text 0.922 0.721 0.808 Product Link 0.750 0.590 0.661 Product Name 0.843 0.672 0.747 DOM-LM Review Text 0.863 0.695 0.769 Product Link 0.741 0.599 0.662 Product Name 0.885 0.621 0.730 MarkupLM Review Text 0.822 0.671 0.737 Product Link 0.808 0.569 0.667 ![5_image_0.png](5_image_0.png) ## Quently Identify The Product Boundaries Better. The attribute extraction task shows a different trend from segmentation where RoBERTa performs the best, outperforming MarkupLM and DOM-LM. We look into the reason behind this in Section 6. 
Detailed results for both tasks are presented in Table 6, while precision, recall, and F1-score for the attribute extraction task broken down by attribute are shown in Table 7. We find that product link extraction performs worst while product review text performs best. For product link, this difficulty can be attributed to the diverse nature of product links, which can take the form of a *product name*, the price of the product (e.g., a hyperlink with the text $10), or a *button* (e.g., with the text "Buy Now"). For review text, the higher performance is not surprising given that it normally has a more consistent style than product links.

To better understand the different factors affecting the performance of the attribute extraction task, we (1) break down the scores by site in our test set to see whether some sites are more challenging than others, (2) analyze the relationship between the number of products in a page and attribute extraction performance, (3) explore whether the index of the product (i.e., where it appears in the page) affects the performance, (4) study the effect of adding more pages (versus sites) to the training data, and (5) explore whether a multi-task model that jointly tackles both segmentation and attribute extraction tasks can yield better performance through utilizing the complementary signals between both tasks.

As can be seen in Figure 3, extraction performance varies greatly by site. We suspect that this is due to the diverse page layouts and styles of different sites. When grouping pages by number of products, we find that for all three models, as the number of products increases, the F1-score improves. We believe this is because the model can more easily learn the recurring patterns within a page when more product examples are available on that page. When grouping the scores by product index, we find that product index does not have a significant impact on the performance of attribute extraction; i.e. performance is relatively uniform across different locations in the page.8

To study the value of annotating more sites compared to annotating more pages, we sampled two subsets of pages from our original training set. We collected all pages from 14 randomly sampled sites from the training set for a total of 1514 pages, then sampled 1514 random pages across all sites in the training set. The 14 sites achieved an F1=0.595 compared to sampling from all sites achieving an F1=0.681. From this, we conclude that there is indeed value in annotating a greater number of sites instead of simply annotating more pages from the same site. We suspect the model is better able to generalize to the test set when provided with a more diverse training set of different sites.

Finally, we look into whether multi-task learning improves model performance on PLAtE. To this end, we train a MarkupLM model with a shared encoder, but distinct loss functions and output layers for the segmentation and attribute extraction tasks.9 We find that multitask learning improves the recall of both tasks - as shown in Table 8 - but slightly harms precision. F1-score performance increases by 0.023 for the segmentation task but goes down for the attribute extraction task due to the decrease in precision.

8Figures showing the detailed results of these experiments are provided in the appendix.

9We use a weighted sum of the loss functions for both tasks and determine the weights empirically based on the performance on the development set.
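Footnote 9 describes the multi-task objective as a weighted sum of the two task losses. The sketch below shows one plausible way to combine them over a shared encoder; it is not the authors' training code, and the head shapes, label encodings, and default weight are assumptions (the weight is the quantity tuned on the development set).

```python
import torch
import torch.nn as nn

# Hypothetical two-headed setup on top of a shared encoder (e.g., MarkupLM).
# seg_logits: (num_nodes, 2) for B/O; attr_logits: (num_nodes, 3) for the three attributes.
seg_loss_fn = nn.CrossEntropyLoss()      # segmentation: single-label B/O classification
attr_loss_fn = nn.BCEWithLogitsLoss()    # attribute extraction: multi-label classification

def multitask_loss(seg_logits, seg_labels, attr_logits, attr_labels, seg_weight=0.25):
    """Weighted sum of the two task losses; seg_weight is tuned on the dev set."""
    seg_loss = seg_loss_fn(seg_logits, seg_labels)               # seg_labels: (num_nodes,) in {0, 1}
    attr_loss = attr_loss_fn(attr_logits, attr_labels.float())   # attr_labels: (num_nodes, 3) in {0, 1}
    return seg_weight * seg_loss + (1.0 - seg_weight) * attr_loss
```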
| Task | Setting | P | R | F1 | |-------------------|------------------|-------|-------|-------| | Attribute | No MTL | 0.839 | 0.620 | 0.711 | | Extraction | MTL (weight=0.5) | 0.803 | 0.628 | 0.703 | | Segmentation | No MTL | 0.769 | 0.805 | 0.787 | | MTL (weight=0.25) | 0.768 | 0.859 | 0.810 | | Table 8: Precision, recall, and F1 for Multitask Learning (MTL) for the MarkupLM model. ## 6 Discussion Contrary to our expectations, RoBERTa outperformed DOM-LM, a model specially designed to model webpages, on the attribute extraction task. As can be seen in Table 7, RoBERTa mainly outperforms DOM-LM in review text precision. We analyze 20 examples where DOM-LM makes a false positive review text error and RoBERTa does not. We find in 95% of the examples, the misclassified node is outside of the main review content, and in 80% of the examples the misclassified node is a <p> tag. This indicates that DOM-LM's structural encoder is likely over-fitting to the HTML tag and disregarding the text content, hence is less able to generalize to unseen HTML structures. We perform additional analysis of the outputs from the MarkupLM model, to identify areas for improvement for the attribute extraction task. Specifically, we sampled up to 5 false positive (FP) and 5 false negative error examples for each attribute from each site in the test set, and ended up with a total of 257 examples.10 For product review text, we find that many of the false negatives are very short textspans such as "*These*", "as well", and "*require*" representing a single text leaf node within a larger paragraph. We suspect that the model has not been trained with enough examples of varied text length. In the future, we could also consider training the model to classify the parent nodes representing a whole paragraph instead of classifying at the leaf node level. For false positives, we find 4 examples of user-written comments (as opposed to reviewer-written text). While these do not represent the official review text, they are semantically similar to the official review text hence are easily confused by the model. One such example is a user's comment mentioning that "*the design and functionality of these cookers* is top-notch" which has a similar style to text which could have been written by a reviewer. For 10Some sites had less than 5 errors, in which case we examined all errors. product link, 17 false positives are hyperlinks that link to a homepage rather than to a specific product which indicates that the model is over-fitting to the <a> tag. Training the model using contrastive learning with positive and negative product link <a> tag examples should help with this case. Next we explore the effect of threshold-tuning on the performance of attribute extraction. In the presence of an oracle that specifies whether or not a node should be tagged with an attribute, we can use the *argmax* of the logits of the different attribute classes in order to guarantee that the node is tagged with one of the three attributes. Based on the very high agreement between the *argmax* and the actual label for product name (89.4%) and product review text (91.8%), it is clear that some of the challenges in tagging nodes in PLAtE stem from needing to determine whether or not a node should be tagged with any attribute, on top of determining what the correct attribute tag is. Finally, we analyze the errors in the segmentation task, collecting all pages with an ARI and NMI < 0.5 within test set. 
We find that in 97% of these pages the number of gold segmentation boundaries is higher than the number of predicted segmentation boundaries. This means that when ARI and NMI are very low, the model is failing to split the page into enough segments. To improve performance, we could consider utilizing a structured prediction to model segmentation interdependencies e.g. the model could explicitly model that it is unlikely that two segmentation boundaries appear directly next to one another. ## 7 Conclusion In this work, we introduce PLAtE, the first largescale list page dataset for web extraction. We describe our methodology in creating the dataset, from pre-processing to crowd-sourced annotation collection. We evaluate the performance of three strong web extraction baselines and achieve an F1score of 0.787 and 0.735 on the segmentation and attribute extraction tasks respectively. While we focus on shopping domain due to its importance to several downstream applications, in the future we intend to extend our work to other verticals to facilitate further research studying model generalization and domain adaptation. ## Acknowledgements We would like to thank the anonymous reviewers for their detailed and insightful comments and suggestions. We would also like to thank Wendy Wei, Markus Dreyer, Polly Allen, Yusuke Watanabe, Matthias Petri, Tian Tang, Phi Nguyen, Ritesh Sarkhel, Hengxin Fun, Mengwen Liu, and Duke Vijitbenjaronk for their suggestions and assistance with this work. ## Limitations To ensure high quality extractions for PLAtE, we optimize our annotation process for precision. For example, for the Product Link attribute, we generally annotate only one product link per product. In an application scenario, the user would not need multiple links to a purchase page, but this could potentially harm the precision of the evaluated models. In addition, we assume that all attributes are text-based. This has the potential of missing additional product information which could be helpful to users, such as images of the product. In future work, we would like to extend PLAtE by incorporating other modalities. ## References Mohd Azir and Kamsuriah Ahmad. 2017. Wrapper approaches for web data extraction : A review. *2017 6th* International Conference on Electrical Engineering and Informatics (ICEEI), pages 1–6. Lidong Bing, Rui Guo, Wai Lam, Zheng-Yu Niu, and Haifeng Wang. 2014. Web page segmentation with structured prediction and its application in web page classification. In *Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval*, SIGIR '14, page 767–776, New York, NY, USA. Association for Computing Machinery. Lidong Bing, Wai Lam, and Tak-Lam Wong. 2013. Robust detection of semi-structured web records using a dom structure-knowledge-driven model. *ACM Trans.* Web, 7(4). Michael J. Cafarella, Alon Y. Halevy, Hongrae Lee, Jayant Madhavan, Cong Yu, Daisy Zhe Wang, and Eugene Wu. 2018. Ten years of webtables. Proc. VLDB Endow., 11:2140–2149. Andrew Carlson and Charles Schafer. 2008. Bootstrapping information extraction from semi-structured web pages. In *Proceedings of the 2008th European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I*, ECMLPKDD'08, page 195–210, Berlin, Heidelberg. Springer-Verlag. Xiang Deng, Prashant Shiralkar, Colin Lockard, Binxuan Huang, and Huan Sun. 2022. Dom-lm: Learning generalizable representations for html documents. arXiv preprint arXiv:2201.10608. 
Xiang Deng, Huan Sun, Alyssa Lees, You Wu, and Cong Yu. 2020. Turl: Table understanding through representation learning. Paramveer S. Dhillon, Sundararajan Sellamanickam, and Sathiya Keerthi Selvaraj. 2011. Semi-supervised multi-task learning of structured prediction models for web information extraction. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, CIKM '11, page 957–966, New York, NY, USA. Association for Computing Machinery. Joseph L. Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. *Educational* and Psychological Measurement, 33:613 - 619. Tim Furche, Georg Gottlob, Giovanni Grasso, Xiaonan Guo, Giorgio Orsi, Christian Schallhart, and Cheng Wang. 2014. Diadem: Thousands of websites to a single database. *Proc. VLDB Endow.*, 7(14):1845–1856. Tim Furche, Georg Gottlob, Giovanni Grasso, Giorgio Orsi, Christian Schallhart, and Cheng Wang. 2012. Amber: Automatic supervision for multi-attribute extraction. Pankaj Gulhane, Amit Madaan, Rupesh Mehta, Jeyashankher Ramamirtham, Rajeev Rastogi, Sandeep Satpal, Srinivasan H Sengamedu, Ashwin Tengli, and Charu Tiwari. 2011. Web-scale information extraction with vertex. In 2011 IEEE 27th International Conference on Data Engineering, pages 1209–1220. Mansi Gupta, Nitish Kulkarni, Raghuveer Chanda, Anirudha Rayasam, and Zachary C Lipton. 2019. Amazonqa: A review-based question answering task. arXiv preprint arXiv:1908.04364. Qiang Hao, Rui Cai, Yanwei Pang, and Lei Zhang. 2011. From one tree to a forest: A unified solution for structured web data extraction. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '11, page 775–784, New York, NY, USA. Association for Computing Machinery. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 4320–4333, Online. Association for Computational Linguistics. Alexandra Hotti, Riccardo Sven Risuleo, Stefan Magureanu, Aref Moradi, and Jens Lagergren. 2021. The klarna product page dataset: A realistic benchmark for web representation learning. Lawrence J. Hubert and Phipps Arabie. 1985. Comparing partitions. *Journal of Classification*, 2:193–218. Nicholas Kushmerick. 1997. Wrapper induction for information extraction. University of Washington. Jurek Leonhardt, Avishek Anand, and Megha Khosla. 2020. Boilerplate removal using a neural sequence labeling model. Companion Proceedings of the Web Conference 2020. Junlong Li, Yiheng Xu, Lei Cui, and Furu Wei. 2021. Markuplm: Pre-training of text and markup language for visually-rich document understanding. Bill Yuchen Lin, Ying Sheng, Nguyen Vo, and Sandeep Tata. 2020. Freedom: A transferable neural architecture for structured information extraction on web documents. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. G. Linden, B. Smith, and J. York. 2003. Amazon.com recommendations: item-to-item collaborative filtering. *IEEE Internet Computing*, 7(1):76–80. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. 
Colin Lockard, Prashant Shiralkar, Xin Luna Dong, and Hannaneh Hajishirzi. 2020. Zeroshotceres: Zeroshot relation extraction from semi-structured webpages. Petar Petrovski, Anna Primpeli, Robert Meusel, and Christian Bizer. 2016. The wdc gold standards for product feature extraction and product matching. In EC-Web, volume 278 of Lecture Notes in Business Information Processing, pages 73–86. Victor Le Pochat, Tom van Goethem, Samaneh Tajalizadehkhoob, Maciej Korczynski, and Wouter Joosen. ´ 2019. Tranco: A research-oriented top sites ranking hardened against manipulation. Proceedings 2019 Network and Distributed System Security Symposium. Alexander Strehl and Joydeep Ghosh. 2002. Cluster ensembles - a knowledge reuse framework for combining multiple partitions. *J. Mach. Learn. Res.*, 3:583– 617. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Daheng Wang, Prashant Shiralkar, Colin Lockard, Binxuan Huang, Xin Luna Dong, and Meng Jiang. 2021. Tcn: Table convolutional network for web table interpretation. *Proceedings of the Web Conference 2021*. Chenhao Xie, Wenhao Huang, Jiaqing Liang, Chengsong Huang, and Yanghua Xiao. 2021. Webke: Knowledge extraction from semi-structured web with pre-trained markup language model. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, CIKM '21, page 2211–2220, New York, NY, USA. Association for Computing Machinery. Yongfeng Zhang, Xu Chen, Qingyao Ai, Liu Yang, and W. Bruce Croft. 2018. Towards conversational search and recommendation: System ask, user respond. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, page 177–186, New York, NY, USA. Association for Computing Machinery. Shuyi Zheng, Ruihua Song, Ji-Rong Wen, and Di Wu. 2007. Joint optimization of wrapper generation and template detection. In *Proceedings of the 13th* ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '07, page 894–902, New York, NY, USA. Association for Computing Machinery. Yichao Zhou, Ying Sheng, Nguyen Vo, Nick Edmonds, and Sandeep Tata. 2021. Simplified dom trees for transferable attribute extraction from the web. Jun Zhu, Zaiqing Nie, Ji-Rong Wen, Bo Zhang, and Wei-Ying Ma. 2006. Simultaneous record detection and attribute labeling in web data extraction. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, page 494–503, New York, NY, USA. Association for Computing Machinery. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In *Proceedings of the IEEE international conference on computer vision*, pages 19–27. Manuel Álvarez, Alberto Pan, Juan Raposo, Fernando Bellas, and Fidel Cacheda. 2008. Extracting lists of data records from semi-structured web pages. Data & Knowledge Engineering, 64(2):491–509. ## A Training Details We train all models for up to 5 epochs using an AdamW optimizer with a linear warm-up rate of 10% and decide on the best learning rate based on development set performance. We search over learning rates of: {5e-6, 5e-5, 5e-4}. The best learning rate was 5e-5 for both tasks for RoBERTa and attribute extraction for DOM-LM. 
5e-6 was the best learning rate for both tasks for MarkupLM and segmentation for DOM-LM.

## B Additional Tables

| Attribute | Argmax Correct (%) |
|--------------|----------------------|
| Product Name | 89.4 |
| Review Text | 91.8 |
| Product Link | 16.2 |

Table 9: Percentage of false negative examples where the model would be correct if the *argmax* of the attributes was chosen instead of requiring the activation to be above the threshold 0.5.

## C Additional Figures

Figure 4: An example extraction "*Godinger Luminal Hobnail Glasses*" which is both a product name and product link.
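Relating to Table 9 above, the following small sketch contrasts the default 0.5-threshold decoding with the argmax-given-oracle decision used in that analysis; the scores and attribute ordering are made up for illustration.

```python
import torch

# Hypothetical sigmoid scores for one node over the three attributes
# (order assumed: product name, review text, product link).
scores = torch.tensor([0.42, 0.31, 0.08])

# Standard decoding: tag every attribute whose score clears the 0.5 threshold.
threshold_tags = (scores > 0.5).nonzero(as_tuple=True)[0]   # empty: the node gets no tag

# Oracle analysis (Table 9): if we know the node carries *some* attribute,
# taking the argmax recovers a label even when no score clears the threshold.
argmax_tag = int(scores.argmax())                           # 0 -> product name
print(threshold_tags.tolist(), argmax_tag)
```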
liu-etal-2023-rapid
Rapid Diffusion: Building Domain-Specific Text-to-Image Synthesizers with Fast Inference Speed
https://aclanthology.org/2023.acl-industry.28
Text-to-Image Synthesis (TIS) aims to generate images based on textual inputs. Recently, several large pre-trained diffusion models have been released to create high-quality images with pre-trained text encoders and diffusion-based image synthesizers. However, popular diffusion-based models from the open-source community cannot support industrial domain-specific applications due to the lack of entity knowledge and low inference speed. In this paper, we propose Rapid Diffusion, a novel framework for training and deploying super-resolution, text-to-image latent diffusion models with rich entity knowledge injected and optimized networks. Furthermore, we employ BladeDISC, an end-to-end Artificial Intelligence (AI) compiler, and FlashAttention techniques to optimize computational graphs of the generated models for online deployment. Experiments verify the effectiveness of our approach in terms of image quality and inference speed. In addition, we present industrial use cases and integrate Rapid Diffusion to an AI platform to show its practical values.
# Rapid Diffusion**: Building Domain-Specific Text-To-Image Synthesizers With** Fast Inference Speed Bingyan Liu1,2∗and **Weifeng Lin**1,2∗and **Zhongjie Duan**3,2and **Chengyu Wang**2† Ziheng Wu2and **Zipeng Zhang**2and **Kui Jia**1†and **Lianwen Jin**1 Cen Chen3and **Jun Huang**2 1South China University of Technology, Guangzhou, China 2Alibaba Group, Hangzhou, China 3East China Normal University, Shanghai, China {eeliubingyan, eelinweifeng}@mail.scut.edu.cn, zjduan@stu.ecnu.edu.cn {chengyu.wcy, zhoulou.wzh, zhangzipeng.zzp}@alibaba-inc.com kuijia@gmail.com, eelwjin@scut.edu.cn, cenchen@dase.ecnu.edu.cn huangjun.hj@alibaba-inc.com ## Abstract Text-to-Image Synthesis (TIS) aims to generate images based on textual inputs. Recently, several large pre-trained diffusion models have been released to create high-quality images with pre-trained text encoders and diffusionbased image synthesizers. However, popular diffusion-based models from the open-source community cannot support industrial domainspecific applications due to the lack of entity knowledge and low inference speed. In this paper, we propose *Rapid Diffusion*, a novel framework for training and deploying superresolution, text-to-image latent diffusion models with rich entity knowledge injected and optimized networks. Furthermore, we employ BladeDISC, an end-to-end Artificial Intelligence (AI) compiler, and FlashAttention techniques to optimize computational graphs of the generated models for online deployment. Experiments verify the effectiveness of our approach in terms of image quality and inference speed. In addition, we present industrial use cases and integrate *Rapid Diffusion* to an AI platform to show its practical values. 1 ## 1 Introduction Text-to-Image Synthesis (TIS) is a prevalent multimodal task that aims to generate realistic images based on textual inputs, which supports real-world applications such as product appearance design and art creation. Apart from Generative Adversarial Network (GAN)-based approaches (Agnese et al., 2020), recently, pre-trained diffusion models (Rombach et al., 2022; Ramesh et al., 2022) have been proposed to create artistic images with qualities comparable to or better than those from humans. Despite the exciting advancement, for industrial domain-specific applications, we suggest that popular latent diffusion models from the open-source community (such as the Stable Diffusion model series2) are incapable of supporting those applications. The reasons are are twofolds. i) For diffusion-based methods, a CLIP-based text encoder (or other similar models) is required to encode the input texts, providing conditional inputs for the U-Net model (Rombach et al., 2022). As entities (or objects) are usually the key elements for generated images, CLIP models pre-trained over text-image pairs collected from the Web may need more abilities of concept understanding and are challenging to capture the specific entity knowledge required for realistic image generation (Ma et al., 2022). ii) For industrial applications, the model inference speed and the computational cost are vital factors to be considered. The cumbersome computation of the iterative diffusion process is often the bottleneck of fast inference (Song et al., 2021). Therefore, obtaining knowledgeable diffusion models to generate high-resolution images with moderate parameter sizes and optimized implementations that support fast online inference is desirable. 
To address the above issues, we propose *Rapid* Diffusion, a novel framework for the training and deploying text-to-image diffusion models with rich entity knowledge injected and networks optimized. In *Rapid Diffusion*, a knowledge-enhanced CLIP model is effectively trained for learning entity knowledge from knowledge graphs (KGs). To 2https://stability.ai/blog/ stable-diffusion-public-release generate high-resolution images and avoid parameter explosion, we integrate an ESRGAN-based network (Wang et al., 2018) after the diffusion block for image super-resolution, instead of directly leveraging a large-scale hierarchical diffusion model. For online deployment, an efficient inference pipeline is designed with the neural architectures optimized based on FlashAttention (Dao et al., 2022). The Intermediate Representation (IR) of computational graphs built from the generated models are further processed by a recently released Artificial Intelligence (AI) compiler (Zhu et al., 2021). In the experiments, we evaluate the effectiveness of *Rapid Diffusion* in terms of the qualities of generated images from multiple application domains and the model inference speed for online deployment. We also provide industrial use cases to show how our framework benefits real-world applications. In addition, we have integrated the proposed training and deployment workflows into an industrial, cloud-native AI platform to facilitate zero-code model training and elastic inference on distributed GPU clusters. In summary, the major contributions of this work are as follows: - We propose the *Rapid Diffusion* framework for the training and deployment of domainspecific diffusion-based TIS models. Specifically, a new knowledge-enhanced model pipeline is designed for super-resolution TIS. An efficient inference pipeline is further designed to optimize the computational graphs of our model for faster model inference. - Experiments over multiple domains show the effectiveness of *Rapid Diffusion* in terms of both image quality and inference speed, achieving an average FID score of 21.90 and ×1.73 acceleration ratio compared to all the counterparties. - We demonstrate the industrial use case and the integration of *Rapid Diffusion* to a cloudnative AI platform to show its practical values for real-world applications. ## 2 Related Work 2.1 Text-To-Image Synthesis (Tis) TIS is a multi-modal task of converting texts to images with the same semantic meanings. In the early years, traditional methods (Zhu et al., 2007) mainly focused on analyzing the correlations between sentences and images but could not generate new images on the pixel level. Generative Adversarial Network (GAN) (Goodfellow et al., 2014) was proposed in 2014 and became the mainstream approach in the image synthesis field (Agnese et al., 2020). GANs and their variants (Reed et al., 2016; Liu et al., 2022b) have proved their effectiveness in TIS but still lack the ability to generate high-resolution images. Diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015) have attracted the attention of researchers in recent years. Leveraging large-scale text-image datasets, pre-trained diffusion models (Rombach et al., 2022; Ramesh et al., 2022) become competitive with human painters. However, these diffusion models need help with the efficiency problem and more knowledge for the generation process. 
## 2.2 Efficient Methods For Diffusion Models Diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015) typically add noise to images (or latent tensors generated from images) and then learn to denoise step by step. The number of steps while training may be very large, making the sampling time-consuming. To improve the sampling efficiency, a recent study (Salimans and Ho, 2022) introduces knowledge distillation to diffusion models. This acceleration method can reduce the steps but requires additional training. Another family of methods tries to construct new samplers without further training. For example, DDIM (Song et al., 2021) uses a deterministic generative process to produce images much faster. Some numerical solvers, including forward Euler and linear multistep method (Butcher, 2000), are leveraged to reduce the steps (Karras et al., 2022). By using a pseudo numerical algorithm to solve differential equations on manifolds, PNDM (Liu et al., 2022a) further improves the generation quality within a few given steps. The better implementation of attention algorithms can speedup the process, which requires fewer IO accesses (Dao et al., 2022). Colossal-AI (Bian et al., 2021) accelerates the training speed of diffusion models and reduces the GPU memory usage for deployment. In addition, when diffusion models are deployed online, the amount of computation can be reduced with better-complied computational graphs of these models. For example, TensorRT3 provides an inference optimizer and runtime that achieves lower 3https://github.com/NVIDIA/TensorRT latency of model inference. In our work, we integrate various techniques from both modeling and engineering aspects to deliver better training and inference experiences for diffusion-based, domainspecific TIS applications. ## 3 The Proposed Framework In this section, we formally present the techniques of the proposed *Rapid Diffusion* framework in detail. A brief overview of *Rapid Diffusion* is presented in Figure 1. ## 3.1 Model Architecture As seen in Figure 1, our model converts input texts to high-resolution images by modeling the transformation and interaction between three representation spaces: i) knowledge-enhanced text embedding space, ii) latent space, and iii) pixel space. ## 3.1.1 Knowledge-Enhanced Text Embedding Space In this stage, we aim to encode the semantics of input texts to text embeddings. A common practice is to leverage the text encoder of CLIP (Radford et al., 2021), which jointly learns textual and visual representations in a unified space. Yet, CLIP pre-trained over plain text-image pairs may have weak representation power of entities. In our work, we leverage the 100 million text-image pairs from Wukong (Gu et al., 2022) as our multimodal pre-training corpus, as our real-world applications mostly focus on the Chinese language. For entities, we leverage the largest Chinese KG available to us, i.e., OpenKG4(containing over 16 million entities and 140 million relation triples). During the CLIP pre-training process, the input representation of an entity token e appearing in a sentence of the Wukong corpus is augmented by: ⃗e = ⃗etxt + ⃗ekg where ⃗etxt is the vanilla token embedding of the entity e, and ⃗ekg is the KG embedding derived by the TransE algorithm (Bordes et al., 2013) due to its effectiveness and simplicity. 
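The augmentation e = e_txt + e_kg described above can be sketched as a small embedding layer on the input side of the text encoder. The snippet below is a minimal illustration, not the CKCLIP implementation: the projection layer, the entity-to-KG-id alignment, and the tensor shapes are assumptions (only the vocabulary size and hidden dimension follow the settings reported in the appendix).

```python
import torch
import torch.nn as nn

class KnowledgeAugmentedEmbedding(nn.Module):
    """Adds a (projected) TransE entity embedding to the token embedding of entity tokens."""
    def __init__(self, vocab_size, dim, kg_embeddings):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        # kg_embeddings: (num_entities, kg_dim) tensor pre-trained with TransE, frozen here.
        self.kg_emb = nn.Embedding.from_pretrained(kg_embeddings, freeze=True)
        self.proj = nn.Linear(kg_embeddings.size(1), dim)   # map KG space into the text space

    def forward(self, token_ids, entity_ids, entity_mask):
        # entity_ids: aligned KG id per token (0 for non-entities); entity_mask: 1.0 where a token is an entity.
        e_txt = self.tok_emb(token_ids)
        e_kg = self.proj(self.kg_emb(entity_ids))
        return e_txt + entity_mask.unsqueeze(-1) * e_kg     # e = e_txt + e_kg on entity tokens only

# Toy usage with dummy KG embeddings.
kg = torch.randn(1000, 128)
layer = KnowledgeAugmentedEmbedding(vocab_size=21128, dim=768, kg_embeddings=kg)
ids = torch.randint(0, 21128, (1, 8))
ent = torch.zeros(1, 8, dtype=torch.long)
mask = torch.zeros(1, 8)
ent[0, 3], mask[0, 3] = 42, 1.0        # pretend token 3 is a linked entity
out = layer(ids, ent, mask)            # (1, 8, 768), fed to the CLIP text encoder
```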
Note that although we focus on the pre-training of Chinese Knowledge-enhanced CLIP (CKCLIP) models here, our method is language-invariant and can be applied to other languages with minor modifications. During the fine-tuning process of our domain-specific TIS models, the parameters of the text encoder of our CKCLIP model are set to be trainable to capture more domain-related semantics. We further add some text prompts to the input text according to the application scenario (e.g., "the photo of [object]", "the picture of [object]") and obtain its CLIP representation as the conditional input to the next stage.

4http://openkg.cn/

## 3.1.2 Latent Space

According to the given knowledge-enhanced text embedding $\vec{e}$, we use a latent diffusion model to generate an image encoding with similar semantic meaning in a latent space. The model architecture is U-Net (Ronneberger et al., 2015) with a cross-attention mechanism capturing the textual conditioning information. In the training stage, the image $x$ is encoded into the latent space and then we add Gaussian noise $\epsilon \sim \mathcal{N}(0, 1)$ to obtain $x_t$, where $t = 1, \ldots, T$ is the step in the diffusion process. The loss function of image reconstruction is formulated as follows:

$$\mathcal{L}_{\text{LDM}}=\mathbb{E}_{x,t\sim\mathcal{U}(1,...,T),\epsilon\sim\mathcal{N}(0,1)}\left(||\epsilon-\epsilon_{\theta}(x_{t},t)||_{2}^{2}\right)\tag{1}$$

The generation process is the reverse of the diffusion process. Starting from the random Gaussian noise $x_T$, the latent diffusion model gradually denoises the latent tensor and successively calculates $x_{T-1}, x_{T-2}, \ldots, x_1$. To improve the correlation between the generated image and the input prompt, we use classifier-free guidance (Ho and Salimans, 2021) to generate the corresponding images. Additionally, to avoid the efficiency problem caused by a too-large number of steps $T$, we employ the PNDM (Liu et al., 2022a) scheduler to reduce the number of steps. In our work, we also pre-train the latent diffusion model using the Wukong dataset (Gu et al., 2022) and fine-tune the model on each downstream dataset separately.

## 3.1.3 Pixel Space

After generating the final latent $x_0$, a KL-regularized decoder $D$ reconstructs the image from $x_0$ in the pixel space. In our design, the generated images are not necessarily in high resolution. Instead, an ESRGAN-based network (Wang et al., 2018) is applied after the decoder such that, after a single forward pass, a corresponding high-resolution image can be generated. An alternative design choice is directly generating high-resolution images using the diffusion model. However, this setting can be sub-optimal with respect to the requirements of moderate model size and fast inference speed. Consider a base 256 × 256 diffusion-based U-Net employed for the TIS task, paired either with a 256 × 256 −→ 1024 × 1024 diffusion-based super-resolution U-Net or with an ESRGAN-based model. The former contains more parameters and requires more steps for inference, resulting in its inference time being several times slower than the latter. Therefore, we adopt an ESRGAN-based model to generate high-resolution images efficiently.

## 3.2 Inference Speedup Designs

The inference process of the proposed model in this paper consists of three main components. We profile the inference speed of the original PyTorch model in eager mode and observe that the bottleneck is primarily located in the loop of the U-Net model, where the cross-attention computation dominates the inference time.
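Component-level profiling of this kind can be reproduced with a few lines of timing code. The sketch below is illustrative only: the stage functions are dummy stand-ins for the real modules (CKCLIP text encoder, U-Net denoising loop, KL decoder, ESRGAN upscaler), and the helper is not part of the released system.

```python
import time
import torch

def profile_stage(name, fn, *args):
    """Time one pipeline stage; synchronize so pending CUDA kernels are included in the wall time."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    out = fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    print(f"{name}: {(time.perf_counter() - start) * 1000:.2f} ms")
    return out

# Dummy stand-ins with representative tensor shapes (not real APIs).
text_encoder = lambda ids: torch.randn(1, 32, 768)
unet_denoise_loop = lambda cond: torch.randn(1, 4, 32, 32)   # the 50-step scheduler loop lives here
decoder = lambda latent: torch.randn(1, 3, 256, 256)
upscaler = lambda img: torch.randn(1, 3, 1024, 1024)

with torch.no_grad():
    cond = profile_stage("CLIP text encoder", text_encoder, torch.zeros(1, 32, dtype=torch.long))
    latent = profile_stage("U-Net denoising loop", unet_denoise_loop, cond)
    image = profile_stage("KL decoder", decoder, latent)
    image = profile_stage("ESRGAN upscaler", upscaler, image)
```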
The profiling result can be seen in Figure 2. To resolve this issue, we incorporate automatic slicing and compilation optimization techniques to optimize the entire pipeline in an end-to-end manner and introduce an IO-aware attention implementation to enhance the inference performance further. ## 3.2.1 Compilation Optimization Our algorithm generates various low-level runtime flows for models with dynamic shapes on specific devices. It is achieved by enhancing a set of IR to create a complete dynamic shape representation (Zhu et al., 2021). For the operations ![3_image_1.png](3_image_1.png) with intensive memory access, we fully utilized shared memory to design larger-grained kernel fusion strategies, effectively reducing the CPU/GPU switches (Zheng et al., 2022). Optimal graph partitioning and kernel implementation selection are performed for optimal inference speed. The optimization has been applied throughout the computing module, resulting in a significant improvement in inference speed. ## 3.2.2 Effective Io-Aware Attention Based on the automatic compilation optimization, we further utilize the FlashAttention technique (Dao et al., 2022) for the cross-attention operator of U-Net, which is the core of the network's inference bottleneck. The technique is based on the attention IO characteristics and performs tiling operations on the attention calculation to reduce memory read-write computation. We introduce different FlashAttention kernel implementations for various combinations of computing devices and hardware architectures and dynamic inputs. The technique mentioned in the previous section effectively assists us in automatically finding the optimal implementation. As a result, the cross-attention calculation can be accelerated without deviation, yielding a 1.9× speed-up for the U-Net module. ## 4 Experiments 4.1 Experimental Settings We first pre-train the CKCLIP model in the following experiments using the text-image pairs from Wukong (Gu et al., 2022) and the OpenKG. After that, the text encoder of CKCLIP and our diffusion model are pre-trained using the same data source. We fine-tune and evaluate the model over three domain-specific datasets to show the values of *Rapid Diffusion* in real-world applications. Implementation details and parameter settings can be found in the appendix. ## 4.2 Results Of Three Application Scenarios We report the performance of *Rapid Diffusion* over three domain-specific scenarios (i.e., E-commerce5, Chinese Painting (Li et al., 2021) and Cuisine, which are closely related to our applications) in terms of Frechet Inception Distance (FID) (Heusel et al., 2017) score. Details of the three datasets, together with the training/validation/testing splits, are given in Table 4 in the appendix. We compare our model with three popular open-source diffusion models, namely Stable diffusion6, Stable diffusion 27, and Taiyi Diffusion8(which is the largest Chinese diffusion model available so far). Note that Stable diffusion and Stable diffusion 2 mainly support English text inputs. Hence, we leverage the Chinese-English translation model (Wei et al., 2022) to translate our texts to English. The results are shown in Table 1. It can be seen that *Rapid Diffusion* outperforms all counterparties over the three datasets, achieving the average FID score at 21.90. The results indicate that our knowledge-enhanced models over domain-specific scenarios understand domain knowledge better and can generate more realistic and varied images. 
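For reference, FID between generated images and held-out real images can be computed with an off-the-shelf implementation; the snippet below uses torchmetrics as one possible choice (the paper does not state which implementation was used), and the toy uint8 batches stand in for the full evaluation splits.

```python
# Requires: pip install torchmetrics[image]
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# FID compares Inception-V3 feature statistics of real vs. generated images; lower is better.
fid = FrechetInceptionDistance(feature=2048)

real_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)       # dummy batch
generated_images = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)  # dummy batch

fid.update(real_images, real=True)
fid.update(generated_images, real=False)
print("FID:", fid.compute().item())   # in practice, accumulate over the whole test split
```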
5https://tianchi.aliyun.com/muge
6https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
7https://huggingface.co/stabilityai/stable-diffusion-2
8https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1

## 4.3 Effectiveness Of Knowledge-Enhanced Chinese Clip

As CLIP models aim to learn cross-modal representations, we first intrinsically evaluate our model by text-image retrieval. We compare the vanilla Chinese CLIP model and our CKCLIP model using the same pre-training text-image corpus. Pre-training details can also be found in the appendix. For evaluation, we employ the standard split of Flickr30K-CN (Lan et al., 2017), and then fine-tune both models. Table 2 reports the text-to-image and image-to-text retrieval results over the testing set. Our CKCLIP model improves retrieval performance by a significant margin (especially for the R@1 metric), showing its ability to learn cross-modal representations. In addition, we provide some qualitative results from the Cuisine dataset to show how more entity knowledge can lead to better representation and generation of the key objects in images, as shown in Figure 3.

| Model | Text-to-image R@1 / R@5 / R@10 | Image-to-text R@1 / R@5 / R@10 |
|---------------|--------------------------------|--------------------------------|
| CLIP | 83.3 / 97.3 / 99.5 | 70.1 / 91.9 / 96.4 |
| CKCLIP (ours) | 90.0 / 98.7 / 99.7 | 75.0 / 93.6 / 96.5 |

Table 2: Performance of the knowledge-enhanced CLIP for text-image retrieval in terms of Recall@1/5/10.

## 4.4 Results Of Inference Speedup

For the implementation of compilation optimization, we employ BladeDISC (Zhu et al., 2021) as our underlying AI compiler, which is an end-to-end dynamic shape compiler for machine learning workloads. Results of the comparison between our implementation and Torch Native (eager mode) are displayed in Table 3. According to the results, U-Net inference takes the longest in the entire process (1900 ms), while the text encoder and the image decoder take only 0.012 ms and 0.04 ms, respectively. However, with the optimization by BladeDISC, we speed up the inference time of the text encoder, U-Net, and the image decoder by ×3, ×1.91, and ×1.9 compared to the eager mode. Additionally, FlashAttention assists us in further optimizing U-Net, which decreases the inference time from 994 ms to 759 ms. Finally, we are able to generate images more quickly. Note that the underlying GPU used in the experiments is NVIDIA A100 (80GB), and the scheduler runs for 50 steps.

| Inference Setting | CLIP (ms) | U-Net (ms) | Decoder (ms) | ESRGAN (ms) | Total (ms) |
|---------------------|-----------|------------|--------------|-------------|------------|
| Torch Native | 0.012 | 1900 | 0.04 | 54.5 | 3129 |
| Ours (w/o FA) | 0.004 | 994 | 0.02 | - | 2042 |
| Ours (w/ FA) | - | 759 | - | - | 1807 |
| Acceleration ratio | ×3 | ×1.91 | ×1.90 | - | ×1.73 |

Table 3: Inference speedup results of *Rapid Diffusion*. FA denotes "FlashAttention".

## 4.5 Results Of Super-Resolution

For image super-resolution, the ESRGAN-based network can be efficiently leveraged to achieve upscaling results. We can directly use the ESRGAN-based network following the latent diffusion model because it has been pre-trained on commonly used image datasets such as DIV2K (Agustsson and Timofte, 2017). However, considering the uniqueness and consistency of domain-specific images, we conjecture that fine-tuning enables the model to perform better.
Experiments show that after finetuning, the model beats the pre-trained model according to our qualitative and quantitative results, which achieves 23.1 in terms of Peak Signal to Noise Ratio9, while the pre-trained model achieves only 22.7. Figure 4 further compares images with and without our pre-trained/fine-tuned models. ## 4.6 Case Studies We provide more cases from each application domain to show how much our model outperforms previous ones. Refer to Figure 5 in the appendix. 9https://en.wikipedia.org/wiki/Peak_ signal-to-noise_ratio ![5_image_0.png](5_image_0.png) ## 5 Applications In this section, we demonstrate the practical values of *Rapid Diffusion* by industrial use case and the integration to Alibaba Cloud PAI (Machine Learning Platform for AI). ## 5.1 Industrial Use Case Here, we briefly discuss two real-world use cases. The first is a fashion design for e-commerce manufacturers. The inputs to our system consist of keywords for multiple elements, such as trend, fabric, color and style. An automatic prompt generation process is called to provide TIS models naturallanguage-like inputs. For a single request of fashion design, a handful of prompts can be generated, each associated with multiple generated images. The images are then regarded as materials for designers. The cuisine dataset described previously is from our online food delivery and local life service platform. Our diffusion model for cuisine generation provides the inspiration functionality to help service providers to create innovative menus where users can select or freely enter all kinds of foodrelated keywords to generate images. Note that the images will be marked as "AI-generated" before they are sent to our applications. ## 5.2 Integration To Ai Platform To allow users to create their models, we have integrated *Rapid Diffusion* into a cloud-native AI platform to facilitate zero-code model training and elastic inference. For model training, after uploading training/validation datasets and checking hyperparameters, a training job is automatically submitted to our deep learning container, where the training command and the docker image have already been prepared. After the job is completed, the resulting model is available for deployment. Based on Query Per Second (QPS) requirements, our prediction service can scale to an adjustable number of machines in the cloud. We can call the TIS service via a RESTful API by HTTP requests. ## 6 Conclusion And Future Work We present the *Rapid Diffusion* framework for the training and deploying knowledge-enhanced, domain-specific, high-resolution, diffusion-based TIS models. Experimental results show the effectiveness of *Rapid Diffusion* in both image quality and inference speed, achieving an average FID score of 21.90 and ×1.73 acceleration ratio compared to all the counterparties. We further show its practical values through industrial use cases and the integration into an AI platform. In the future, we will extend the functionality of *Rapid Diffusion* and further increase the inference speed by advanced compilation optimization techniques. ## Ethical Considerations The techniques for inference speedup presented in this work are fully methodological. Hence, there are no direct negative social impacts. However, as the models automatically generate the images, they may have some negative impacts, such as the generation of toxic content and the existence of social biases. 
We suggest that the produced models should not be used to generate offensive or inappropriate images for people intentionally. Users should carefully deal with the potential risks by filtering out these images when the models are deployed online. ## Acknowledgements This work was partially supported by Alibaba Innovative Research Foundation (No. D8200510), NSFC (Grant No. 61936003), the Program for Guangdong Introducing Innovative and Enterpreneurial Teams (No. 2017ZT07X183), Zhuhai Industry Core and Key Technology Research Project (No. 2220004002350), and by Alibaba Cloud Group through Research Talent Program with South China University of Technology. ## References Jorge Agnese, Jonathan Herrera, Haicheng Tao, and Xingquan Zhu. 2020. A survey and taxonomy of adversarial neural networks for text-to-image synthesis. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(4):e1345. Eirikur Agustsson and Radu Timofte. 2017. Ntire 2017 challenge on single image super-resolution: Dataset and study. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 126–135. Zhengda Bian, Hongxin Liu, Boxiang Wang, Haichen Huang, Yongbin Li, Chuanrui Wang, Fan Cui, and Yang You. 2021. Colossal-ai: A unified deep learning system for large-scale parallel training. *CoRR*, abs/2110.14883. Antoine Bordes, Nicolas Usunier, Alberto GarcíaDurán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, pages 2787–2795. John C Butcher. 2000. Numerical methods for ordinary differential equations in the 20th century. Journal of Computational and Applied Mathematics, 125(12):1–29. Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. Flashattention: Fast and memory-efficient exact attention with io-awareness. CoRR, abs/2205.14135. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In *Advances in Neural Information* Processing Systems, volume 27. Curran Associates, Inc. Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Hang Xu, Xiaodan Liang, Wei Zhang, Xin Jiang, and Chunjing Xu. 2022. Wukong: 100 million largescale chinese cross-modal pre-training dataset and A foundation framework. *CoRR*, abs/2202.06767. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information processing systems*, 30. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. *Advances* in Neural Information Processing Systems, 33:6840– 6851. Jonathan Ho and Tim Salimans. 2021. Classifier-free diffusion guidance. In *NeurIPS 2021 Workshop on* Deep Generative Models and Downstream Applications. Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. 2022. Elucidating the design space of diffusion-based generative models. In Advances in Neural Information Processing Systems. Weiyu Lan, Xirong Li, and Jianfeng Dong. 2017. Fluency-guided cross-lingual image captioning. In Proceedings of the 25th ACM International Conference on Multimedia, page 1549–1557. Dan Li, Shuai Wang, Jie Zou, Chang Tian, Elisha Nieuwburg, Fengyuan Sun, and Evangelos Kanoulas. 2021. 
Paint4poem: A dataset for artistic visualization of classical chinese poems. arXiv preprint arXiv:2109.11682. Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. 2022a. Pseudo numerical methods for diffusion models on manifolds. In International Conference on Learning Representations. Tingting Liu, Chengyu Wang, Xiangru Zhu, Lei Li, Minghui Qiu, Jun Huang, Ming Gao, and Yanghua Xiao. 2022b. ARTIST: A transformer-based chinese text-to-image synthesizer digesting linguistic and world knowledge. In *Findings of the Association* for Computational Linguistics: EMNLP 2022, pages 881–888. Haoyu Ma, Handong Zhao, Zhe Lin, Ajinkya Kale, Zhangyang Wang, Tong Yu, Jiuxiang Gu, Sunav Choudhary, and Xiaohui Xie. 2022. EI-CLIP: entity-aware interventional contrastive learning for e-commerce cross-modal retrieval. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18030–18040. IEEE. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, pages 8748–8763. PMLR. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents. *arXiv* preprint arXiv:2204.06125. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. In International conference on machine learning, pages 1060–1069. PMLR. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pages 10684–10695. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer. Tim Salimans and Jonathan Ho. 2022. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256–2265. PMLR. Jiaming Song, Chenlin Meng, and Stefano Ermon. 2021. Denoising diffusion implicit models. In *International* Conference on Learning Representations. Chengyu Wang, Minghui Qiu, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, and Wei Lin. 2022. Easynlp: A comprehensive and easy-to-use toolkit for natural language processing. In Proceedings of the The 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 - System Demonstrations, pages 22– 29. Association for Computational Linguistics. Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. 2018. ESRGAN: enhanced super-resolution generative adversarial networks. In *Computer Vision -* ECCV 2018 Workshops, pages 63–79. Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, and Rong Jin. 2022. Learning to generalize to more: Continuous semantic augmentation for neural machine translation. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7930–7944. Zhen Zheng, Xuanda Yang, Pengzhan Zhao, Guoping Long, Kai Zhu, Feiwen Zhu, Wenyi Zhao, Xiaoyong Liu, Jun Yang, Jidong Zhai, et al. 2022. Astitch: enabling a new multi-dimensional optimization space for memory-intensive ml training and inference on modern simt architectures. In Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, pages 359–373. Kai Zhu, WY Zhao, Zhen Zheng, TY Guo, PZ Zhao, JJ Bai, Jun Yang, XY Liu, LS Diao, and Wei Lin. 2021. Disc: A dynamic shape compiler for machine learning workloads. In *Proceedings of the 1st Workshop on Machine Learning and Systems*, pages 89– 95. Xiaojin Zhu, Andrew B Goldberg, Mohamed Eldawy, Charles R Dyer, and Bradley Strock. 2007. A text-topicture synthesis system for augmenting communication. In *AAAI*, volume 7, pages 1590–1595. ## A Generated Results Of Case Study We show more generated images from our model and baselines, presented in Figure 5. ## B Data Statistics Table 4 shows the data statistics of our experiments. We divide the E-commerce and Chinese Painting datasets into training, validation and testing sets according to the ratio of 80%, 10%, and 10%. For the Cuisine dataset, we divide 10% for validation, 10 thousand images for testing and the rest for training. Among these datasets, E-commerce and Chinese Painting are public datasets, while Cuisine is an inhouse dataset provided by our online food delivery and local life service platform. | Domain | #Train | #Valid | #Test | Sum | |------------|----------|----------|---------|--------| | E-commerce | 75973 | 9497 | 9497 | 94967 | | CP | 71362 | 8921 | 8921 | 89204 | | Cuisine | 804305 | 89367 | 10000 | 903672 | Table 4: The statistics of three datasets used in the experiments. CP denotes "Chinese Painting" ## C Hyper-Parameters Settings For knowledge-enhanced CLIP pre-training, we follow the hyper-parameter settings in (Gu et al., 2022). For training the latent diffusion model, we set the learning rate as 5 × 10−5, the batch size as 80, and the image size as 256 × 256. The latent dimension of the auto-encoder is 32 × 32. The hidden dimension of the text-encoder is 768. For fine-tuning the super-resolution model, we obtain low-resolution images by down-sampling high-resolution images using the bi-cubic kernel function. Different from the original two-stage training process, we directly employ the pre-trained ESRGAN model as an initialization for the generator and the discriminator. We use Adam with β1 = 0.9, β2 = 0.999. The batch size is set to 16 and the learning rate is set to 1 × 10−4and halved at [50k, 100k, 200k, 300k] iterations. During model training, all the experiments are conducted on a single server with 8 NVIDIA A100 GPUs (80G). ## C.1 Hyper-Parameters Of Model Architectures Table 5 shows the model sizes of all the experiment models, including Stable Diffusion, Stable Diffusion 2, Taiyi Diffusion and our *Rapid Diffusion* model. Compared with the other three baselines, Rapid diffusion is the most compact model with better performance in our scenarios. Table 5: The numbers of parameters of all the experiment models. 
| Model | #Params | |--------------------|-----------| | Stable Diffusion | 1.37B | | Stable Diffusion 2 | 1.29B | | Taiyi Diffusion | 1.35B | | Rapid Diffusion | 1.06B | We further provide the detailed settings of the entire *Rapid Diffusion* model pipelines in Table 6. Table 6: Detailed parameter settings of *Rapid Diffusion*. | #Params | Value | |----------------------------------|---------------------| | CLIP Text Encoder context length | 32 | | vocab size | 21128 | | embedding dimension | 768 | | layers | 12 | | width | 768 | | heads | 12 | | Autoencoder z-channel | 4 | | resolution | 256 | | in-channels | 3 | | out-channels | 3 | | channels | 128 | | channel multiplier | 1,2,4,4 | | U-Net image size | 32 | | in-channels | 4 | | out-channels | 4 | | model channels | 320 | | attention resolutions | 4,2,1 | | channel multiplier | 1,2,4,4 | | context dimension | 768 | | number heads | 8 | | transformer depth | 1 | | ESRGAN-Generator type | RRDBNet | | in-channels | 3 | | out-channels | 3 | | hidden features | 64 | | number blocks | 23 | | grow channels | 32 | | ESRGAN-Discriminator type | UNetDiscriminatorSN | | in-channels | 3 | | hidden features | 64 | | skip connection | True | ![9_image_0.png](9_image_0.png)
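The ESRGAN fine-tuning schedule described in Appendix C (Adam with β1 = 0.9, β2 = 0.999, learning rate 1 × 10−4 halved at 50k/100k/200k/300k iterations, batch size 16) maps directly onto a standard optimizer/scheduler pair. The sketch below is illustrative only: the generator is a placeholder module rather than the actual RRDBNet, and the loss is a dummy.

```python
import torch

# Stand-in for the RRDBNet generator; real training also involves a discriminator and GAN losses.
generator = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50_000, 100_000, 200_000, 300_000], gamma=0.5  # halve LR at each milestone
)

for step in range(4):                                  # real training runs for many more iterations
    optimizer.zero_grad()
    loss = generator(torch.randn(16, 3, 64, 64)).mean()  # dummy loss on a batch of 16 low-res crops
    loss.backward()
    optimizer.step()
    scheduler.step()
```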
khandelwal-etal-2023-large
Large Scale Generative Multimodal Attribute Extraction for {E}-commerce Attributes
https://aclanthology.org/2023.acl-industry.29
E-commerce websites (e.g. Amazon, Alibaba) have a plethora of structured and unstructured information (text and images) present on the product pages. Sellers often don{'}t label or mislabel values of the attributes (e.g. color, size etc.) for their products. Automatically identifying these attribute values from an eCommerce product page that contains both text and images is a challenging task, especially when the attribute value is not explicitly mentioned in the catalog. In this paper, we present a scalable solution for this problem where we pose attribute extraction problem as a question-answering task, which we solve using MXT, that consists of three key components: (i) MAG (Multimodal Adaptation Gate), (ii) Xception network, and (iii) T5 encoder-decoder. Our system consists of a generative model that generates attribute-values for a given product by using both textual and visual characteristics (e.g. images) of the product. We show that our system is capable of handling zero-shot attribute prediction (when attribute value is not seen in training data) and value-absent prediction (when attribute value is not mentioned in the text) which are missing in traditional classification-based and NER-based models respectively. We have trained our models using distant supervision, removing dependency on human labeling, thus making them practical for real-world applications. With this framework, we are able to train a single model for 1000s of (product-type, attribute) pairs, thus reducing the overhead of training and maintaining separate models. Extensive experiments on two real world datasets (total 57 attributes) show that our framework improves the absolute recall@90P by 10.16{\%} and 6.9 from the existing state of the art models. In a popular e-commerce store, we have productionized our models that cater to 12K (product-type, attribute) pairs, and have extracted 150MM attribute values.
# Large Scale Generative Multimodal Attribute Extraction For E-Commerce Attributes Anant Khandelwal ∗ Ads Trust Amazon anantkha@amazon.com Shreyas Sunil Kulkarni International Machine Learning Amazon kulkshre@amazon.com ## Abstract E-commerce websites (e.g. Amazon) have a plethora of structured and unstructured information (text and images) present on the product pages. Sellers often either don't label or mislabel values of the attributes (e.g. color, size etc.) for their products. Automatically identifying these attribute values from an eCommerce product page that contains both text and images is a challenging task, especially when the attribute value is not explicitly mentioned in the catalog. In this paper, we present a scalable solution for this problem where we pose attribute extraction problem as a question-answering task, which we solve using MXT, consisting of three key components: (i) MAG (Multimodal Adaptation Gate), (ii) Xception network, and (iii) T5 encoder-decoder. Our system consists of a generative model that *generates* attribute-values for a given product by using both textual and visual characteristics (e.g. images) of the product. We show that our system is capable of handling zero-shot attribute prediction (when attribute value is not seen in training data) and value-absent prediction (when attribute value is not mentioned in the text) which are missing in traditional classification-based and NERbased models respectively. We have trained our models using distant supervision, removing dependency on human labeling, thus making them practical for real-world applications. With this framework, we are able to train a single model for 1000s of (product-type, attribute) pairs, thus reducing the overhead of training and maintaining separate models. Extensive experiments on two real world datasets show that our framework improves the absolute recall@90P by 10.16% and 6.9% from the existing state of the art models. In a popular e- ∗This work was done while author was in International Machine Learning team. Happy Mittal International Machine Learning Amazon mithappy@amazon.com Deepak Gupta International Machine Learning Amazon dgupt@amazon.com commerce store, we have deployed our models for 1000s of (product-type, attribute) pairs. 1 Introduction E-commerce websites (e.g. Amazon, Alibaba) have a very wide catalog of products. Seller provided catalog of these products contain both textual information and product images. Apart from this unstructured information, they also provide structured information about the products such as color, material, size, etc. This information can be represented in terms of attribute-value pairs (see figure 1). In this paper, we will use the terms attribute and attribute-name interchangeably. The value of attribute will be referred as *attribute-value*. However, while listing the products, sellers rarely specify all attribute values or mistakenly fill incorrect values. These attribute values may or may not be present in the unstructured textual product information. Extracting/inferring the missing attribute values from the unstructured textual product information (and images) can improve the catalog quality, thereby improving the customer experience (again, refer figure 1 for an example of attribute extraction). Figure 1: Illustration of attribute extraction problem ![0_image_0.png](0_image_0.png) PT-attribute: A PT-attribute is defined as a pair of(product-type, attribute), where product-type (or PT) is a broad category of products (e.g. 
"shoes", 305 "dress", "laptops" etc.) and attribute is an attributename (e.g. "color", "size" etc.). Typically, attributeextraction is done at the granularity of PT-attribute (e.g. "extract the value of *color* attribute of *shoe*"). A good attribute extraction system has following desirable properties: (1) **Scalability:** A single model should handle multiple PT-attributes so that there is no need to train a separate model for every PT-attribute combination, (2) **Multi-modality:** Model should be able to extract attributes from multiple modalities like text, image, video etc., (3) Zero-shot inference: Model should be able to extract attribute values that were not seen in the training data, and (4) **Value-absent inference:** Model should extract attribute values that are not explicitly mentioned in the text on the product page (but can be inferred from image or some other reasoning). Related Work: Extensive research has been done to build attribute extraction models, which can be categorized as extractive, *predictive*, or *generative*. Extractive models pose this problem as a Named Entity Recognition (NER) problem (Zheng et al., 2018). Some of the recent work in this space include LATEX-numeric (Mehta et al., 2021), and MQMRC (Shrimal et al., 2022b) . However, these models don't do value-absent inference. Moreover, these are text based models and do not use product images. Predictive models are the classifier models that take text (and image) as input and predict the attribute values. CMA-CLIP (Liu et al., 2021) is a recent multi-modal predictive framework for predicting attribute values. However, these models can't do zero-shot inference as the prediction comes from the predefined classes only. Generative models pose this problem as an answer generation task given a question and context. Here, the question is the attribute name, and context is the product data (text and image), and the answer is the attribute value. For example, Roy et. al. (Roy et al., 2021) presented a generative framework to generate attribute values using product's text data. PAM (Lin et al., 2021) introduced a multi-modal generative framework, however their model requires (i) Training encoder and decoder from scratch, (ii) Manually modifying the vocabulary of outputs (attributevalues) for different product-types. In this paper, we present MXT, a multimodal generative framework to solve the attribute extraction problem, that consists of three key components: (i) MAG (Multimodal Adaptation Gate) (Rahman et al., 2020b): a fusion framework to combine textual and visual embeddings, that enables generating image-aware textual embeddings, (ii) Xception network (Chollet, 2017): an image encoder that generates attribute-aware visual embeddings, and (iii) T5 encoder-decoder (Raffel et al., 2020). The models trained by our generative framework are scalable as a single model is trained on multiple PT-attributes, thus reducing the overhead of training and maintaining separate models. We remove the disadvantages of PAM model by (i) finetuning a strong pre-trained language model (T5 (Raffel et al., 2020)) and thus leveraging its text generation ability, (ii) providing product-type in the input itself so that output distribution is automatically conditioned on the PT. Moreover, our trained model satisfies all of the 4 desirable properties that were mentioned previously. Our system formulates the attribute extraction problem as a question-answering problem, where (a) question is the attribute name (e.g. 
"color"), (b) textual context comprises of a concatenation of product-type (e.g. "shirt"), and textual description of the product, (c) visual context comprises product image, and (d) answer is the attribute value for the attribute specified in the question. Our model architecture consists of (i) a T5 encoder to encode the question and textual context, (ii) encoding visual context into product specific embeddings through a pre-trained ResNet-152 model (He et al., 2016) and fusing them with T5's textual embeddings using a multimodal adaptation gate (MAG) (Rahman et al., 2020a), (iii) encoding visual context into attribute (e.g. "sleeves", "collar" etc.) specific embeddings through Xception model (Chollet, 2017) and fusing them with previously fused embeddings through a dot product attention layer (Yu et al., 2021), and finally (iv) generating the attribute values through T5 decoder. The detailed architecture of our system is shown in figure 2. In section 2, we explain our proposed model MXT. In section 3, we compare our model's performance with NER-Based MQMRC (Shrimal et al., 2022a) along with a popular multi-modal model CMA-CLIP (Liu et al., 2021) and show that on same precision, we outperform them (on recall) for a majority of the attributes. We also show an ablation study justifying the proposal of different components in MXT. Finally, we also show that our model is able to perform zero-shot and valueabsent inference. Our trained models using MXT framework are being used to extract attributes for over 12000 PT-attributes in a popular e-commerce store, and have extracted more than 150MM attribute values. ## 2 Mxt Framework Given a set of product-types (PTs) P = {p1, p2*, . . . , p*m} and attribute-names A = {a1, a2*, . . . , a*n}, we define MXTP,A as a multiPT, multi-attribute, and multi-modal generative model that is trained on PT-attributes from (P, A), and can be used to generate attribute value for any product in the trained PT-attribute set. The overall architecture of our model is described in figure 2. ## 2.1 Problem Formulation We formulate the problem of attribute extraction as the problem of answer generation given a question and a context. Here question is the attribute-name a ∈ A, and context consists of textual description, product type p ∈ P and image of the product. All of these are used to extract attribute values. The answer generated from the model is the attribute value for a. As shown in figure 2, our model architecture mainly consists of 3 components: (a) Image-aware Text encoder, (b) Attribute-aware Text-Image Fusion, and (c) Text decoder. Below, we describe each component in detail. ## 2.2 Image-Aware Text Encoder We use T5 (Raffel et al., 2020), which is a transformer (Vaswani et al., 2017) based text only Seq2Seq pretrained language model. It includes a bidirectional encoder and a unidirectional (forward only) decoder. In this section, we give an overview of T5's encoder and details of its usage for our task. Our text input consists of (i) attribute-name (e.g. "color"), (ii) product-type (e.g. "dress"), and (iii) textual description of product. In our QnA format, the question consists of attribute-name, and context consists of concatenation of product-type and textual description of the product. We tokenize both question and context and create a single input sequence of tokens. This input sequence x is then fed to an embedding and positional encoding layer to create input features Temb ∈ R N×d, where N is the sequence length and d is the feature dimension. 
These input text embeddings are then fused with the Multimodal Adaptation Gate (MAG) described in Rahman et al. (Rahman et al., 2020b) to generate image-aware text embeddings. Due to MAG, the internal representation of words (at any transformer layer) is shifted conditioned on the visual modality. This attachment essentially puts words into a different semantic space, which is conditioned on the visual inputs. For example, the meaning of the word "ripple" changes according to whether the visual input is a soap image or a paper image. With soap, the meaning is "free and clear", while with paper, the meaning is "wavy pattern", as shown in Figure 3. This module shifts the meaning of "ripple" according to the visual modality. Since T5 is a pretrained model and can understand only text embeddings, it is required to fuse the visual embeddings ($V_R \in \mathbb{R}^{d}$) with the text before feeding it to the T5 encoder, rather than feeding the visual embeddings along with the text. Specifically, in MAG, for each input token $i$ of the sequence, we first learn a gating vector $g_i$ using the concatenated embeddings of $T^i_{emb}$ and $V_R$: $g_i = \mathrm{ReLU}(W_g[T^i_{emb}; V_R] + b_g)$. This gating vector highlights the relevant information in the visual modality conditioned on the input textual vector. We then create an image displacement vector $H_i$ by multiplying the projected $V_R$ with each token's gating vector $g_i$: $H_i = g_i \cdot (W_H V_R) + b_H$. Finally, we shift the embedding $T^i_{emb}$ by the weighted displacement vector $H_i$ to get the multimodal vector $\hat{T}^i_{emb} = T^i_{emb} + \alpha \ast H_i$. In this equation, $\alpha = \min\left(\frac{\|T^i_{emb}\|_2}{\|H_i\|_2}\beta,\ 1\right)$, where $\beta$ is a hyper-parameter whose value is taken as-is from the paper (Rahman et al., 2020b). This is then passed through a layer normalization followed by a dropout layer to get the final fused embedding $F_{MAG}$ from the MAG module, where $F^i_{MAG} = \mathrm{dropout}(\mathrm{LN}(\hat{T}^i_{emb}))$. This fused output is then fed to the T5 encoder. The encoder consists of $L$ encoder layers. It takes $F_{MAG}$ as input and gives $T_{enc}$ as output. Equation 1 shows the encoding done by the $k$-th layer, where SA is the multi-head self-attention layer, Res is the residual connection, LN is the layer normalization, and FC is a fully connected layer.

$$T^k_{enc} = \mathrm{LN}(\mathrm{Res}(\mathrm{FC}(\mathrm{LN}(\mathrm{Res}(\mathrm{SA}(T^{k-1}_{enc}))))))\qquad(1)$$

## 2.3 Attribute-Aware Text-Image Fusion

The Xception (Chollet, 2017) model performs depthwise (or channel-wise) separable convolutions, i.e., it applies separate filters for different color channels. We propose another fusion layer based on the Xception network. The advantage of using this is that it can readily learn the visual features conditioned on the attribute type. For example, for the attribute "sleeve type" of a dress, it can identify the channel/color difference between the sleeves of the dress and the skin of the person, thus identifying whether the sleeve is half or full. We then fuse the text and image embeddings using multi-head cross attention. As shown in Figure 2 (b), a product image has several regions of interest for different attributes like "neck style" and "sleeve type". These region-specific embeddings are learnt by the separable convolutions in Xception and are then attended with the text embeddings to arrive at attribute-aware text embeddings. Now, given the text embedding $T_{enc} \in \mathbb{R}^{N \times d}$ and the image embedding $V_X \in \mathbb{R}^{1 \times d_x}$ (from the Xception network), we create an attribute-aware fused embedding $F_A \in \mathbb{R}^{N \times d}$ (having the same dimension as the text embedding).
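Before turning to how $F_A$ is computed via cross-attention, the MAG fusion of Section 2.2 can be made concrete with a short sketch. This is only an illustrative re-implementation of the published MAG equations under our reading of them; the layer names, dimensions, and the `beta` default are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class MAG(nn.Module):
    """Shift each token embedding T_emb^i by a gated, visually-conditioned displacement H_i."""
    def __init__(self, text_dim, visual_dim, beta=0.5, dropout=0.1):  # beta value is an assumption
        super().__init__()
        self.W_g = nn.Linear(text_dim + visual_dim, text_dim)        # g_i = ReLU(W_g [T_emb^i ; V_R] + b_g)
        self.W_h = nn.Linear(visual_dim, text_dim, bias=False)       # projection W_H of the image vector
        self.b_h = nn.Parameter(torch.zeros(text_dim))               # bias b_H added after the gating
        self.layer_norm = nn.LayerNorm(text_dim)
        self.dropout = nn.Dropout(dropout)
        self.beta = beta

    def forward(self, t_emb, v_r):
        # t_emb: (N, d) token embeddings; v_r: (d_v,) single image embedding (e.g. from ResNet-152)
        v = v_r.unsqueeze(0).expand(t_emb.size(0), -1)               # broadcast the image vector to every token
        g = torch.relu(self.W_g(torch.cat([t_emb, v], dim=-1)))      # gating vector g_i
        h = g * self.W_h(v) + self.b_h                               # displacement H_i = g_i * (W_H V_R) + b_H
        # alpha = min(||T_emb^i||_2 / ||H_i||_2 * beta, 1), computed per token
        alpha = torch.clamp(
            t_emb.norm(dim=-1, keepdim=True) / (h.norm(dim=-1, keepdim=True) + 1e-6) * self.beta,
            max=1.0,
        )
        shifted = t_emb + alpha * h                                  # T_hat^i = T_emb^i + alpha * H_i
        return self.dropout(self.layer_norm(shifted))                # F_MAG^i = dropout(LN(T_hat^i))
```

In MXT, this shifted sequence $F_{MAG}$ is what gets passed to the T5 encoder in place of the raw token embeddings.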
This fused embedding is created through a multi-head cross attention module that applies cross attention between the textual and visual embeddings, as shown in Figure 2. This fusion has the advantage that, for an attribute, different attention scores can be learned for each object of an image, allowing the model to attend to specific portions of the product image conditioned on the attribute name in the question. For example, for the product type "shirt" and the attribute "sleeve-type", we may want to concentrate only on the portions of the image where sleeves are visible.

## 2.4 Text Decoder

We use T5's unidirectional decoder to output the attribute values. The input to the decoder is the fused embedding vector $F_A = \langle F_A^1, F_A^2, \ldots, F_A^N \rangle$. The decoder iteratively attends to the previously generated tokens $y_{<j}$ (via self-attention) and to $F_A$ (via cross-attention), then predicts the probability of future text tokens: $P_\theta(y_j \mid y_{<j}, x, I) = \mathrm{Dec}(y_{<j}, F_A)$. For attribute generation, we finetune our model parameters $\theta$ by minimizing the negative log-likelihood of the label text tokens $y$ given the input text $x$ and image $I$: $\mathcal{L}_{GEN} = -\sum_{j=1}^{L} \log P_\theta(y_j \mid y_{<j}, x, I)$.

## 3 Experimental Setup & Results

30PT Dataset: We picked 30 product types (PTs) consisting of a total of 38 unique attributes from a popular e-commerce store. For each product in the dataset, we have textual information and an image. The dataset has 569k and 84k products in the train and validation data across 30 PTs. Our test data consists of products from two product types with a total of 73k products.

Figure 1, left table (recall@90P% comparison on the two test PTs of the 30PT dataset):

| PT | #top attributes | CMA-CLIP | MXT (Multi-PT) | MXT (Single PT) | MXT (w/o Xception) | MXT (w/o MAG) |
|----|-----------------|----------|----------------|-----------------|--------------------|---------------|
| A  | K=5  | +6.16%  | +22.33% | +19.58% | +21.82% | +20.93% |
| A  | K=10 | +6.70%  | +16.89% | +15.50% | +15.60% | +15.19% |
| A  | K=15 | +1.81%  | +13.23% | +11.64% | +10.67% | +10.45% |
| B  | K=5  | +8.34%  | +16.63% | +12.86% | +13.94% | +13.58% |
| B  | K=10 | +18.46% | +24.98% | +22.46% | +22.81% | +22.55% |
| B  | K=15 | +11.72% | +18.51% | +15.50% | +16.28% | +15.68% |

Figure 1, right table (MXT relative to NER-MQMRC per product type on E-commerce5PT):

| PT | MXT     |
|----|---------|
| A  | +15.56% |
| B  | -1.47%  |
| C  | -7.89%  |
| D  | +9.98%  |
| E  | +13.23% |

We evaluated MXT against two state-of-the-art methods on attribute extraction: (1) **CMA-CLIP:** A multi-task classifier that uses CLIP (Radford et al., 2021) for learning multi-modal embeddings of products, followed by two types of cross-modality attentions: (a) sequence-wise attention to capture the relation between individual text tokens and image features, and (b) modality-wise attention to capture the weightage of text and image features relative to each downstream task; (2) **NER-MQMRC:** This framework (Shrimal et al., 2022b) poses the Named Entity Recognition (NER) problem as a Multi Question Machine Reading Comprehension (MQMRC) task. This is the state-of-the-art model for the text-based attribute extraction task. In this model, given the text description of a product (*context*), they give attribute names as *multiple questions* to their BERT-based MRC architecture, which finds the span of each attribute value (*answer*) in the context. The left table in Figure 1 compares the recall@90P% of the three models. We show the performance on the top-5, top-10 and top-15 attributes (by the number of products in which they are present). We can see that MXT outperforms MQMRC and CMA-CLIP on both product types. E-commerce5PT: This is a benchmark dataset from the NER-MQMRC paper (Shrimal et al., 2022b).
We take a subset of this dataset (removing numerical attributes) consisting of 22 product-attributes across 5 product types. This is a benchmark dataset for NER based models since all attribute values are present in the text in this dataset. The dataset has 273,345 and 4,259 products in train and test data respectively. We compare average F1 scores (averaged across attributes for each product type) of MXT model with NER-MQMRC on this dataset where our model outperforms NER-MQMRC on 16/22 attributes. Right table in the figure 1 shows the average F1-scores (across attributes in each product type) of MXT and NER-MQMRC models. ## 3.1 Ablation Study We show three ablation studies on 30PT dataset that justify our choices in the MXT architecture. Left table in the figure 1 shows the results of these studies. **(a) Scalability:** We show that our proposed framework is highly scalable. For that, we compute Recall@90P% of the MXT model trained on individual PTs. The results show that (i) our model leverages cross-PT information during training, (ii) we don't need to train separate model for each PT, which makes model monitoring and refreshing easier in the production, **(b) Xception network:** We show that Xception network helps concentrating on certain attribute features. For this, we removed the Xception network from our architecture and trained and evaluated the model, **(c) MAG:** We replaced MAG with simple concatenation of text and image embeddings in MXT. We can see in the table that each of our ablation model under-performs the MXT model trained on 30PTs, thus justifying our design choices. ## 3.2 Zero-Shot Inference And Value-Absent Inference Most existing methods for attribute extraction face two challenges: **(i) Zero-shot inference:** All the predictive models (classification-based models) can predict attribute values only from a predefined set of values that are seen in the training data. They are unable to do zero-shot inference i.e. they can't predict an attribute value if it is not seen in the training data, **(ii) Value-absent inference:** All NER- based models can extract values only which are mentioned in the text data i.e. if an attribute value is absent in the input text, they can't extract that value. Our generative model solves both of these challenges. For example, in the E-commerce5PT dataset, there are a total of 8289 product-attribute pairs in the test data, out of which 970 productattribute pairs were not seen in the training data, from which our model correctly generated 124 product-attribute pairs. For example, given a product of product-type *"dress"* with title "Tahari ASL Women's Sleeveless Ruched Neck Dress with Hi Lo Skirt", our model generated the value "Ruched Neck" for the attribute *"neck style"*. Here the value "Ruched Neck" was absent from the training data. Similarly, for the *"dress"* product shown in figure 1 , our model generated the value *"mini"* for the attribute *"item length"* (by inferring it from the image) even when this value is not mentioned in the product text(thus solving the second challenge). ## 3.3 Training & Inference Details We conducted training for each model over a span of 20 epochs, employing a batch size of 4. The training process was performed using distributed multi-GPU training across 8 V-100 Nvidia GPUs, each equipped with 16GB of memory. For text encoder and decoder, we finetune the pretrained t5-base 1checkpoint. 
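To give a sense of what fine-tuning from the `t5-base` checkpoint looks like, here is a simplified, text-only sketch with Hugging Face Transformers; the multimodal fusion layers of Section 2 are omitted, and the prompt template and example label are invented for illustration, so this should be read as an assumption-laden sketch rather than the production training code.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # lr matches Section 3.3 (warmup omitted)

# One distantly-supervised example: question/context in, attribute value out (label is made up).
inputs = tokenizer("question: color context: dress Tahari ASL Women's Sleeveless Dress",
                   return_tensors="pt")
labels = tokenizer("black", return_tensors="pt").input_ids

# Passing labels makes the model return the token-level NLL, i.e. the L_GEN of Section 2.4.
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```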
We obtained ResNet-based image embeddings using a pretrained ResNet-152, specifically with one embedding assigned to each image. 2. During training, we employed the Adam optimizer with learning rate of 5e−5and warmup ratio of 0.1. We chose the checkpoint having best validation loss. For inference, we used greedy search to generate attribute values. ## 4 Deployment In a popular e-commerce store, we have deployed MXT for 6 English speaking markets covering >10K PT-attributes and have extracted >150MM attribute values. Design Choices: In popular e-commerce stores, usually there are more than 100K PT-attributes across various markets. Earlier models like NERMQMRC or CMA-CLIP could be trained only for few 100s of PT-attributes. NER-MQMRC (Shrimal et al., 2022a) architecture only allowed one product type in one model training, while CMA-CLIP couldn't scale beyond few 100s of PT-attribute pairs due to network explosion (as they had to create an output layer for each of the different attribute value). This had serious issues of monitoring, refreshing and maintaining the quality of models. Our prompt-based approach in MXT allows us to train a single model checkpoint for any number of PT-attribute pairs. Practical Challenges: We faced several challenges during building and deploying the model. One of the biggest challenge was lack of normalized attribute values. Since we were relying on the distantly supervised training data from the catalog, there were multiple junk values. Normalizing these values is challenging without the support of annotations. To overcome this problem, we used some heuristic matches to merge similar values. We also trimmed the tail attribute values to remove the junk values further. The second major challenge was to evaluate the model and find the threshold for every PAC to achieve the desired precision. Since we had >10K PT-attributes, even if we annotate 300 samples per PT-attribute, it leads to 3MM annotations, which is not feasible. For that, we evaluated the model automatically using the catalog data. Since the catalog data can be noisy, we checked other things like whether the predicted value is present in text, whether the attribute should allow zero-shot prediction etc. Based on these checks, we decided the required precision accordingly. ## 5 Conclusion & Future Work In this paper, we presented MXT, a large scale multi-modal product attribute generation system to extract product attributes from the products listed in eCommerce stores. Our model infers the attribute values using both textual and visual information present on the product pages. We introduced a novel architecture comprising a T5 based encoder and decoder along with two fusion layers to fuse text and image embeddings. We showed our model can beat the existing state of the art extractive as well as predictive models on the benchmark datasets. Our model is scalable to multiple product types and countries by just specifying them in the input text prompt. We further showed that our model is able to perform zero-shot inference, as well as it can generate attribute values not present in the text. There are several future directions to explore which can further improve the performance of our model. First, we would like to create an ensemble of NER-based and generative models so that we can leverage the power of extraction based models which work very well for numerical attributes (e.g. size, length etc.). Second, our current approach does not use relational information among the products. 
Since similar products can have common attribute values, we can use graph based approaches to capture that relational information. Specifically, we can approach the attribute extraction problem through either link prediction or node classification. In the former method, we aim to predict missing links between products and their attributes. Alternatively, the latter approach involves using similarity between product features, including text, images, and co-viewing information, to determine graph edges for classification of product nodes. ## 6 Limitations In this section, we discuss some of the limitations of our current model architecture: (1) **NonEnglish locales:** Currently in our experiments, we have trained and evaluated models only on English datasets. Building models on non-English locales is the direction for future work, (2) **Use of pretrained tokenizer:** The T5's tokenizer in our models has been pre-trained on open-domain datasets, and its vocabulary misses out on e-commerce specific terms. For example, the current tokenizer of T5 tokenizes the phrase "skater midi dress" as ["sk", "a", "ter", "mid", "I", "dress"]. Here, the meaning of words "skater" and "midi" is not captured in the tokenized text. We believe that we can overcome this limitation by pre-training T5 on e-commerce data which would help tokenizer understanding and tokenizing the e-commerce specific terms more correctly. ## 7 Acknowledgements We thank all the anonymous reviewers for providing their valuable comments that helped us improve the quality of our paper. We also thank our colleagues in the science, product, and engineering teams at Amazon for their valuable inputs. ## References François Chollet. 2017. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251–1258. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition supplementary materials. In *IEEE conference on* computer vision and pattern recognition, pages 770– 778. Rongmei Lin, Xiang He, Jie Feng, Nasser Zalmout, Yan Liang, Li Xiong, and Xin Dong. 2021. Pam: Understanding product images in cross product category attribute extraction. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. Huidong Liu, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Ning Xie, Chien Wang, Bryan Wang, and Yi Sun. 2021. Cma-clip: Cross-modality attention clip for imagetext classification. *ArXiv*, abs/2112.03562. Kartik Mehta, Ioana Oprea, and Nikhil Rasiwasia. 2021. Latex-numeric: Language agnostic text attribute extraction for numeric attributes. In *NAACL*. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *ICML*. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *ArXiv*, abs/1910.10683. Wasifur Rahman, M. Hasan, Sangwu Lee, Amir Zadeh, Chengfeng Mao, Louis-Philippe Morency, and Ehsan Hoque. 2020a. Integrating multimodal information in large pretrained transformers. Proceedings of the conference. Association for Computational Linguistics. Meeting, 2020:2359–2369. 
Wasifur Rahman, Md Kamrul Hasan, Sangwu Lee, AmirAli Bagher Zadeh, Chengfeng Mao, Louis-Philippe Morency, and Ehsan Hoque. 2020b. Integrating multimodal information in large pretrained transformers. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 2359–2369, Online. Association for Computational Linguistics. Kalyani Roy, Pawan Goyal, and Manish Pandey. 2021. Attribute value generation from product title using language models. In *Proceedings of the 4th Workshop on e-Commerce and NLP*, pages 13–17, Online. Association for Computational Linguistics. Anubhav Shrimal, Avi Jain, Kartik Mehta, and Promod Yenigalla. 2022a. NER-MQMRC: Formulating named entity recognition as multi question machine reading comprehension. In *Proceedings of the 2022* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, pages 230– 238, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics. Anubhav Shrimal, Avi Rajesh Jain, Kartik Mehta, and Promod Yenigalla. 2022b. Ner-mqmrc: Formulating named entity recognition as multi question machine reading compmrehension. *ArXiv*, abs/2205.05904. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762. Tiezheng Yu, Wenliang Dai, Zihan Liu, and Pascale Fung. 2021. Vision guided generative pre-trained language models for multimodal abstractive summarization. *ArXiv*, abs/2109.02401. Guineng Zheng, Subhabrata Mukherjee, Xin Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. *Proceedings of the 24th* ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
avigdor-etal-2023-consistent
Consistent Text Categorization using Data Augmentation in e-Commerce
https://aclanthology.org/2023.acl-industry.30
The categorization of massive e-Commerce data is a crucial, well-studied task, which is prevalent in industrial settings. In this work, we aim to improve an existing product categorization model that is already in use by a major web company, serving multiple applications. At its core, the product categorization model is a text classification model that takes a product title as an input and outputs the most suitable category out of thousands of available candidates. Upon a closer inspection, we found inconsistencies in the labeling of similar items. For example, minor modifications of the product title pertaining to colors or measurements majorly impacted the model's output. This phenomenon can negatively affect downstream recommendation or search applications, leading to a sub-optimal user experience. To address this issue, we propose a new framework for consistent text categorization. Our goal is to improve the model's consistency while maintaining its production-level performance. We use a semi-supervised approach for data augmentation and present two different methods for utilizing unlabeled samples. One method relies directly on existing catalogs, while the other uses a generative model. We compare the pros and cons of each approach and present our experimental results.
# Consistent Text Categorization Using Data Augmentation In E-Commerce Guy Horowitz1,∗ , Stav Yanovsky Daye2, Noa Avigdor-Elgrabli2, and Ariel Raviv2 1Technion - Israel Institute of Technology 2Yahoo Research guy.h@campus.technion.ac.il {stav.yanovsky,noaa,arielr}@yahooinc.com ## Abstract The categorization of massive e-Commerce data is a crucial, well-studied task, which is prevalent in industrial settings. In this work, we aim to improve an existing product categorization model that is already in use by a major web company, serving multiple applications. At its core, the product categorization model is a text classification model that takes a product title as an input and outputs the most suitable category out of thousands of available candidates. Upon a closer inspection, we found inconsistencies in the labeling of similar items. For example, minor modifications of the product title pertaining to colors or measurements majorly impacted the model's output. This phenomenon can negatively affect downstream recommendation or search applications, leading to a sub-optimal user experience. To address this issue, we propose a new framework for consistent text categorization. Our goal is to improve the model's consistency while maintaining its production-level performance. We use a semi-supervised approach for data augmentation and presents two different methods for utilizing unlabeled samples. One method relies directly on existing catalogs, while the other uses a generative model. We compare the pros and cons of each approach and present our experimental results. ## 1 Introduction In the last two decades, widespread use of ecommerce platforms such as Amazon and eBay has contributed to a substantial growth in online retail. Such platforms rely on both explicit and implicit product features in order to deliver a satisfying user experience. There, the inferred product category is typically a crucial signal for many application such as browsing, search and recommendation. We focus on improving an existing product categorization model, we refer to as 'the categorizer', ∗The work was carried out during an internship at Yahoo Research. that is employed by our company for fast categorization of billions of items on a daily basis. It classifies e-commerce items, such as products or deals, based on a predefined hierarchy of categories, namely GPT (Google Product Taxonomy). Given a product title, the categorizer assigns the most appropriate label in the taxonomy. The model itself is highly scalable and effective, so it is well-suited for settings with large and rapidly growing item catalogs. In our company, the categorizer is used as a standalone component in various e-commerce related services, such as recommendation, search, and ad ranking. A recent examination of the categorizer's output revealed inconsistencies in the labeling of similar items. It was evident that in some cases small variations in product titles, such as those relating to colors or measurements, significantly affect the categorizer's output. This inconsistency negatively impacts search and recommendation algorithms that rely on the inferred category, leading to a poor user experience. The concept of consistency in NLP tasks has been studied in various research works, including robustness to paraphrasing (Elazar et al., 2021) and robustness to adversarial attacks (Jin et al., 2020; Wang et al., 2020). 
Other works relate consistency issues with the misuse of spurious features during the learning phase (Arjovsky et al., 2019; Veitch et al., 2021; Wang et al., 2021). When examining the performance of the categorizer in terms of accuracy alone, the inconsistency issue may be overlooked. But, since many recommendation pipelines depend on the output of the product categorizer, an inconsistent model can have severe implications on the user experience. In most cases, the differences include returning the parent category or a sibling category, rather than a completely different category path. To tackle this inconsistency problem, we use different *data augmentation* techniques and enrich 313 the training data with item versioning, leading to a more consistent model. Data augmentation for improving various NLP tasks has been widely studied and surveyed (Shorten et al., 2021), and particularly in the context of consistency (Xie et al., 2020). Generating such data, both manually (Kaushik et al., 2019) and automatically (Rizos et al., 2019; Bari et al., 2020; Kumar et al., 2020), has shown to contribute to the robustness of learnt models in different settings. We chose to use data augmentation, without changing the current architecture of the already-in-use product categorizer for two main reasons. First, for scalability reasons, any change in the architecture might degrade the model's ability to infer the categories of billions of items per day. Second, maintaining the current model architecture expedites the productization process and requires only minimal engineering effort. This work defines a new framework, *Consistent* Semi-Supervised Learning (Consistent-SSL), for consistent text categorization in the context of e-commerce (Section 2). We use an unlabeled clustered dataset as a source of legit item versioning. The dataset is derived from product catalogs, and includes clusters of different versions of items. We present two different methods to utilize this unlabeled clustered data: a self-training method and a generative approach (Section 3). We describe the datasets and the experimental framework we use for the evaluation of the proposed methods (Section 4). Finally, we detail results, showing an improvement in the consistency rate of 4-10% above the baseline model, and discuss the advantages and weaknesses of each method (Section 5). ## 2 Consistent Semi-Supervised Learning We now formalize our notion of consistent classification and introduce the settings for consistent Semi-Supervised Learning (consistent-SSL). ## 2.1 Consistent Classification In order to formalize consistent classification, let X be our set of items, and Y = [c] for c ∈ N, be a final set of labels. Each item x ∈ X corresponds to a label y ∈ Y. Additionally, let V : *X → X* , be a nondeterministic perturbation function which transforms an item from one version x to another xˆ. For example, if x = "blue T-shirt small size", xˆ ∼ V(x) could be xˆ = "black T-shirt small size" or xˆ = "blue T-shirt large size". We assume that the perturbation function is label-preserving, i.e. x, xˆ ∼ V(x) share the same label y. Let p(*x, y*) be a joint distribution over items and labels and p(x) the marginal distribution over items. The goal of consistent classification is to learn a classifier f : *X → Y* from a class F with a dual objective: a high expected **accuracy**, i.e. high expected value of the indicator that an item x ∈ X is labeled by f to its correct label y: $$\operatorname*{E}_{(x,y)\sim p(x,y)}[1\;\{f(x)=y\}]$$ .) 
$\frac{1}{2}$ . and a high expected **consistency**, which we define as: $$\operatorname*{E}_{\stackrel{x\sim p(x),}{\hat{x}}\sim\mathcal{V}(x)}[1\left\{f(x)=f({\hat{x}})\right\}]\qquad\qquad(2)$$ i.e. the expected value of the indicator of two items x, xˆ ∼ V(x) to be transformed by f to the same label. Therefore, the dual objective of f can be formalized as: $$\operatorname*{min}_{f}\operatorname*{E}_{(x,y)\sim p(x,y),}[1\left\{f(x)\neq y\right\}+\lambda1\left\{f(x)\neq f({\hat{x}})\right\}]$$ $${\hat{x}}{\sim}\mathcal{V}(x)\eqno(3)$$ where λ ∈ R controlling the balance between the accuracy loss and the consistency loss. Note that there could be a trad-off between the accuracy objective and the consistency one. A model that is trained to disregard specific features like color or size would be more consistent but as those features might be informative in order to partition between some categories this could harm the overall accuracy. For example, if the color purple is more likely to appear in sport shoes than in evening shoes, a model that is trained to give less weight to colors may have a harder time distinguishing between sports and evening shoes while being more robust to changes in colors and thus more consistent. ## 2.2 Consistent-Ssl Settings In SSL settings, we are given labeled data DL = {(xi, yi)} l i=1, which is assumed to be sampled i.i.d. from p, and unlabeled data DU = {xi} l+u i=l+1 possibly sampled from another distribution q. We tune a classifier f using both DL and DU . This work extends the standard SSL settings to consistent-SSL. The unlabeled data DU is clustered with respect to the perturbation function V, i.e. it consists of u sets of items Xi, each set contains ki versions xˆ (i) j ∼ V(xi) of the same item xi. 314 More formally, $\mathcal{D}_U=\{X_i\}_{i=l+1}^{l+u}$, where, $X_i=\left\{\hat{x}_j^{(i)}\right\}_{j=1}^{k_i}$, and $\hat{x}_j^{(i)}\sim\mathcal{V}(x_i)$ for $j=1\ldots k_i$. The goal in consistent-SSL is to learn a classifier f that optimizes the objective in Eq. (3) given DL and DU . Note that V is unknown, and only appears indirectly in the DU samples. ## 3 Methods We present two methods for consistent-SSL, Consistent Self Training (CST) and Consistent Generative Augmentation (CGA). Both methods utilize the unlabeled samples from DU for data augmentation. In each method we create an augmented set Daug using DU and train a classifier f on DL∪Daug. This approach optimizes indirectly the objective of Eq. (3), as we add additional training samples Daug that consists of different versions of the same items. The goal is to expose f to a more diverse set of item versions in training time, making it more robust to minor changes. Let us review our approach using an illustrative example. Consider a dataset that contains clothing items. Assuming that DL, which was sampled from the distribution p, exhibits a spurious correlation between color of an item to its category (e.g. most of the black items are coats and most of the red items are dresses), then a classifier that was trained solely on DL will tend to rely on the color of the item when it predicts its category. When applying the model, V could change the items' colors and therefore the classifier will not be consistent (e.g. if V transforms a black coat to a red one, the classifier might predict different categories). But, assuming the training data includes an item in multiple colors (e.g. black coat, red coat, blue coat, etc.), with the same label (e.g. 
Coats & *Jackets*), then a model that is trained on such data will not relate a specific color to a specific label. Such a model will be encouraged to ignore the color of an item when it predicts the label, and therefore will be more robust to changes in color. Note that colors here are only an example of one kind of versioning of items. Spurious features in the data could be related to colors, measurements, models, materials etc. ## 3.1 Consistent Self Training (Cst) In our first method, named Consistent Self Training (CST), we add samples from DU to the labeled training data DL and a new classifier f is trained on the unified dataset. Since the data of DU is unlabeled, we perform a variant of self training ![2_image_0.png](2_image_0.png) (Lee et al., 2013; Arazo et al., 2020; Triguero et al., 2015). To make sure that Daug is consistent, it's important that each item set Xiis assigned with the same pseudo-label y˜i. To calculate y˜i, we first train a base model f base on the labeled data DL and then use it to choose a single pseudo-label for each example set Xi, i.e. y˜i ← h(Xi; f base), where h is a function that given a set of examples and a classifier f base returns a single label. For example, h could return the prediction of f base that got the highest confidence score, or the most frequent prediction across Xi. The function h is an hyperparameter of the method. Finally, a classifier f is trained over DL ∪ Daug. Figure 1 shows an illustration of the CST pipeline, and a full description of the algorithm is presented in Appendix A. ## 3.2 Consistent Generative Augmentation (Cga) We now detail our second method, we refer to as Consistent Generative Augmentation (CGA). Here, we train a generative model M on DU in order to learn the perturbation function V, and we use it to generate new samples based on the instances of DL. For this end, an item-pair dataset of different versions of items, Dpairs is constructed from DU ; Dpairs = nxˆ (i) j, xˆ (i) j′ l + 1 ≤ i ≤ l + uV*j, j*′ ∈ [ki] o. We train M on Dpairs to generate the second item given the first of each pair, while maintaining its label. Note that xˆ (i) j′ ∼ V(ˆx (i) j). Next, we generate an augmentation set Daug using DL by applying M on each (x, y) ∈ DL to get a new labeled sample (ˆ*x, y*). Note that we can use M to generate multiple new samples from a single sample x. After creating Daug, we filter it using a score function s : *X × X →* [0, 1] that aims to measure the quality of the generated xˆ with respect to its origin x. Additionally, we remove low quality samples from Daug according to some predefined filter threshold T. Finally, we train a classifier f over DL ∪ Daug. Both s and T are hyper-parameter 315 of the CGA method. Figure 2 shows an illustration ![3_image_0.png](3_image_0.png) of the CGA pipeline, and a full description of the algorithm is presented in Appendix B. ## 3.3 Methods Comparison We compare the two proposed methods by three main aspects: the quality of the augmented product titles, the quality of the labels and the overall distribution. Considering the quality of the product titles, the CST method utilizes the unlabeled clustered data itself and thus provides product titles that are sampled from the real world and captures information about the true perturbation function V. In contrast, the CGA method uses generated product titles, which may not represent V accurately. 
Regarding the label quality, the CGA method utilizes labels that are taken directly from the ground truth labels of the original items and thus of a better quality than the ones of the CST method, which uses calculated "pseudo-labels". With respect to the distribution of the data, the generated samples in the CGA method are taken directly from the distribution p of the labeled training set. In contrast, in CST the unlabeled data comes from a distribution q that is different than p, thus biasing the overall distribution of the training set. The quality of the product titles in the augmentation set impacts the consistency and corollary the overall optimization of the model f. On the other hand, both the quality of the labels and the distribution of the augmentation set influence the accuracy which again affects the overall optimization of f. ## 4 Empirical Evaluation We now present our experimental results. We note that in all of our experiments, we use a model that is based on FastText (Joulin et al., 2016) architecture, and has an hierarchical structure. This specific model is found to perform well on our task, as it takes into account the hierarchical structure nature of the labels. For more details, see Appendix C. ## 4.1 Train And Test Data We conduct experiments using an e-commerce text classification dataset in order to empirically evaluate our methods. The items in this dataset are titles of commercial products, represented as free text, and the labels are the items' categories. The labels are taken from a hierarchic products taxonomy with 4 levels of granularity {Li} 4 i=1. For example, consider a product title such as "Greenies Breath Buster Bites Fresh Flavor Grain-Free Dental Dog Treats, 1.2-oz bag", and its corresponding category Animals & Pet Supplies > Pet Supplies > Dog Supplies > *Dog Treats*. Our dataset contains 184k labeled samples with 3k different labels, and additional 1.3M unlabeled samples. The labeled samples correspond to real-world commerce related items, and are labeled by human annotators. The unlabeled samples are retrieved from a product catalog of multiple retailers that includes grouping information. Each group contains multiple versions of the same item, e.g. "L.A. Girl, Matte Lipstick, **Snuggle***, 0.10 oz"* and "L.A. Girl, Matte Lipstick, **Bite Me***, 0.10 oz"*. There are 363k different groups in the unlabeled catalog data, each group contains 2 to 192 items, and the average group size is 3.6. We note that the labeled and unlabeled data sets originate from different sources. This results in different category representation between the labeled and unlabeled data, e.g. several categories in the unlabeled data have low coverage compared to the labeled one. Our experiments measure both accuracy and consistency of the tested models. To this end, we create two different test sets: Accuracy test. The accuracy test is a standard test set that consists of labeled samples, on which we compute the weighted average F1 score of a given model. The accuracy test contains 23k labeled examples sampled uniformly at random from the labeled data. We use the remaining 161k labeled samples as the DL. Consistency test. The consistency test consist of pairs of item titles (ˆx 1, xˆ 2), each pair includes two different versions of the same item. We define the *consistency rate* of a given model f to be the percentage of the (ˆx 1, xˆ 2) pairs from the consistency test that receive the same label prediction by f, i.e. f(ˆx 1) = f(ˆx 2). 
We create this test set by sampling 9k groups from the unlabeled data, and then by sampling one pair of different titles (x̂1, x̂2) from each group. Since the consistency rate of a model on this test should be an empirical evaluation of its consistency as defined in Eq. (2), the distribution of the data in this test should be similar to the distribution of the data in the accuracy test. To mitigate some of the discrepancy between the unlabeled and labeled datasets, we sub-sample the unlabeled dataset according to the L1 distribution of the labeled set. We use the unlabeled samples that are not selected for the consistency test as DU for training.

Figure 3: Distributions of the different versions of the data for CST: (a) distribution of DL, (b) distribution of the **complete** DU, (c) distribution of the **sub-sampled** DU. The labels are presented in L1 granularity.

## 4.2 Experimental Framework

This subsection describes in detail the configuration of the proposed methods, and the baselines that were used for comparison.

## 4.2.1 Baselines

For the first **Baseline** model, we use the existing product categorization model, trained using only DL. The second baseline is a **ColorsSizes-Blind** (CS-Blind) model. We train it using DL alone, while omitting colors and measurements from the data. We use predefined dictionaries of colors (e.g. "red", "white") and measurements (e.g. "small", "XL") to identify appearances in item titles and replace them with constant tokens, one for colors and another for sizes. This baseline simulates an attempt to tackle the consistency issue by manually identifying a few spurious features in the data and hiding them from the model to make it consistent.

## 4.2.2 CST

We evaluate CST with two configurations, each utilizing a different version of DU: 1) the **complete** data (354k groups with 1.3M samples), and 2) **sub-sampled (SS)** data, sampled to be as similar as possible to DL's histogram (yielding 70k groups with 250k samples). Fig. 3 provides an illustration of those histograms. In order to assign each group of items a single label, as described earlier, we choose the category with the highest confidence score within the group provided by f base (preliminary experiments showed that this method outperformed majority voting).

## 4.2.3 CGA

In order to empirically evaluate CGA, we construct Dpairs from DU as described earlier and use a T5 model (Raffel et al., 2020) (a large Transformer-based seq-2-seq model) as M, which we fine-tune on Dpairs for three epochs.

The impact of the filtering score function. We examine two alternatives for the score function s: 1) the BLEU score (Papineni et al., 2002), and 2) a cosine-similarity score computed on the output vectors of an all-MinmLM-L6-V2 model (All-MinmLM-L6-V2). This model maps sentences to a 384-dimensional dense vector space and can be used for tasks such as clustering or semantic search.
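To make the BLEU-based variant of this filtering score concrete, here is a minimal sketch using NLTK's sentence-level BLEU; the tokenization, the smoothing method, and the default threshold are illustrative assumptions rather than the exact setup used in the experiments.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_score(original_title: str, generated_title: str) -> float:
    # Whitespace tokenization and smoothing method are assumptions for illustration.
    reference = [original_title.lower().split()]
    candidate = generated_title.lower().split()
    return sentence_bleu(reference, candidate,
                         smoothing_function=SmoothingFunction().method1)

def keep_generated(original_title: str, generated_title: str, threshold: float = 0.7) -> bool:
    """Keep a generated title for D_aug only if its BLEU score w.r.t. the original passes T."""
    return bleu_score(original_title, generated_title) >= threshold

print(keep_generated("Polo Ralph Lauren Big Boys Fleece Hoodie",
                     "Polo Ralph Lauren Little Boys Fleece Hoodie"))
```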
We compute both scores for each pair of an original product title and a corresponding generated title. Preliminary experiments show that filtering by the BLEU score results in a more consistent model. For the rest of the experiments we use the BLEU score as s. Table 1 contains some examples of generated titles and their corresponding BLEU scores.

| Original Product Title | Generated Product Title | BLEU score |
|------------------------|-------------------------|------------|
| Polo Ralph Lauren Big Boys Fleece Hoodie | Polo Ralph Lauren Little Boys Fleece Hoodie | 0.795 |
| Puff Sleeve T Shirt Ivory Frost | T Shirt | 0.135 |
| Blackberries Prepacked 6 Oz | Cranberry Prepacked 6 Oz | 0.724 |
| Sunnies Face Airblush in Peached | Sunnies Face Airblush in Peached Wall Poster With Pushpins | 0.482 |
| Artistry Signature Color Long-wearing Eye Pencil Brown | Artistry Signature Color Long-wearing Eye Pencil Black | 0.850 |

Table 1: Examples of pairs of original product titles and their corresponding generated ones, together with the computed BLEU score of the pairs.

Using the T5 model, we generate 8 samples based on each sample from DL, and compute the s score of each of those samples. We then perform three experiments to evaluate the impact of the filtering threshold T and the augmentation size N. Results are presented in Figure 4.

The impact of the filtering threshold. For each threshold value T ∈ {0.5, 0.6, 0.7, 0.8}, we filter the generated samples. Then, we sub-sample a fixed amount of N = 200k samples into Daug and train a model on DL ∪ Daug. As T gets higher, the consistency rate of the trained model increases as well, which indicates the need for a filtering phase.

The impact of the augmentation size. We filter the generated samples using a fixed T = 0.7. Out of the remaining generated samples, we sub-sample N ∈ {50k, 100k, 200k, 400k} samples into Daug, and train a model on DL ∪ Daug. As N gets higher, the consistency rate of the trained model increases as well, which indicates that adding more generated samples leads to a more consistent model.

The trade-off between filtering threshold and augmentation size. We filter the generated samples using different thresholds, and add the filtered samples to Daug without sub-sampling them. We train a model on DL ∪ Daug. Evidently, the consistency rate of the trained model increases when T gets higher but decreases for T = 0.8. As T gets higher, the filtered samples are of better quality but there are fewer of them, reaching an optimal trade-off at T = 0.7. Thus, for the rest of the paper, we use T = 0.7.

## 5 Results And Discussion

We train each examined model 5 times and present the mean score of the achieved results. For each model, we compare the weighted average F1 score on the accuracy test and the consistency rate on the consistency test. Table 2 presents our results. The ColorsSizes-Blind model performs similarly to the baseline for both measurements; the slight changes are within the std range, thus making the differences insignificant compared to the baseline model. This is evidence that the item versioning is more complex than just changing the size or color and includes title rephrasing concepts that are hard to tackle in a trivial way.
Table 2: Weighted-average F1 (accuracy test) and consistency rate (consistency test) of the evaluated models.

| Method   | F1    | F1 lift | Cnst. rate | Cnst. lift |
|----------|-------|---------|------------|------------|
| Baseline | 0.665 | -       | 0.738      | -          |
| CS-Blind | 0.664 | -0.13%  | 0.740      | 0.26%      |
| CST-Full | 0.654 | -1.65%  | 0.813      | 10.12%     |
| CST-SS   | 0.661 | -0.6%   | 0.790      | 6.99%      |
| CGA      | 0.667 | 0.28%   | 0.771      | 4.46%      |

In addition, the results show that both of the CST versions, complete and sub-sampled, achieve significantly higher consistency rates than the baseline, gaining lifts of 10% and 7% respectively. On the other hand, both of the methods yield lower F1 scores, reducing lift by 1.65% and 0.6% respectively. A possible cause of the degradation in the F1 score is the difference between the data distribution of DL, from which we sample the accuracy test, and the data distribution of DU, which we use to augment our training data. The fact that using the sub-sampled version of DU mitigates most of this degradation supports this claim. An additional cause could be the usage of the noisy pseudo-labels in the augmented set instead of the unavailable ground-truth labels. Note that the amount of data added using DU to tackle consistency is bigger than the original DL, which aims to tackle accuracy. The focus in terms of the training shifts from an accuracy problem to a consistency problem, thus hurting the F1 of the new model. The higher consistency rate of CST-Full compared to CST-SubSampled can be explained by a difference of more than 1M samples in the size of Daug.

Similarly, the CGA method also improves the consistency rate, gaining a lift of 4.5%, and doesn't significantly affect the accuracy score. As mentioned, we use a threshold T = 0.7, thus including 440k samples in Daug. These additional samples correspond to a similar distribution as DL. The improvement in both the consistency and the accuracy indicates that the generative model is able to correctly learn the real-world item versioning and produce a significant amount of data with high-accuracy labels and the same distribution as in the accuracy test.

Summarizing the above, our experiments highlight three key factors in the consistent-SSL framework: 1) **Scale** - enriching the learning set with more examples of item versioning increases the consistency. 2) **Quality** - augmenting the data with real-world samples is better than using generated ones in terms of performance. 3) **Distribution** - preserving the original distribution in the augmented set is important for maintaining good accuracy.

## 6 Conclusions

This work presents a new framework for consistent text categorization in the context of e-Commerce. The aim of this work is to improve a product categorization model that serves various services of a major web company. We address the labeling inconsistency issues found in the categorization of similar items, which lead to a poor user experience in related recommendation and search applications. Our framework utilizes an unlabeled clustered dataset in two ways: a self-training approach and a generative-augmentation method. We performed a thorough investigation of the two approaches and investigated several factors that majorly influence their performance. Our experimental results suggest that both proposed methods improve the consistency rate by 4% to 10%, while maintaining the accuracy of the current production model. Finally, our study illustrates the trade-off between the quality and the scale of the augmented dataset, and its impact on the performance of both methods.
## Limitations Our work has several limitations. First, our consistency study focuses on our used categorization model and was conducted on only one specific dataset. It might not perfectly generalize to other problems. Second, the proposed solutions are based solely on data augmentation without changing the current production settings and model. Other approaches such as changing the model's objective function to take consistency into account might also benefit the solution. Lastly, in terms of user perspective, while our solution show significant improvement over the baseline, inconsistencies are still visible. ## Ethics Statement This NLP research study was designed and carried out with strict adherence to ethical principles and guidelines. The study was reviewed and approved by our company's research lead prior to the submission. The study followed the ACL conference's guidelines on the use of language data. The researchers take full responsibility for ensuring the ethical conduct of this study and are committed to upholding the highest standards of ethical research practices in NLP. ## References All-MinmLM-L6-V2. 2022. All-minmlml6-v2. https://huggingface.co/ sentence-transformers/all-MiniLM-L6-v2. [Online; accessed 10-October-2022]. Eric Arazo, Diego Ortego, Paul Albert, Noel E O'Connor, and Kevin McGuinness. 2020. Pseudolabeling and confirmation bias in deep semisupervised learning. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. 2019. Invariant risk minimization. arXiv preprint arXiv:1907.02893. M Saiful Bari, Tasnim Mohiuddin, and Shafiq Joty. 2020. Uxla: A robust unsupervised data augmentation framework for zero-resource cross-lingual nlp. arXiv preprint arXiv:2004.13240. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. *Transactions of the Association for Computational Linguistics*, 9:1012–1031. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. Proceedings of the AAAI conference on artificial intelligence, 34(05):8018–8025. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. *arXiv preprint arXiv:1607.01759*. Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2019. Learning the difference that makes a difference with counterfactually-augmented data. *arXiv* preprint arXiv:1909.12434. Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained transformer models. *arXiv preprint arXiv:2003.02245*. Dong-Hyun Lee et al. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. *Workshop on challenges in* representation learning, ICML, 3(2):896. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In *Proceedings of the* 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67. 
Georgios Rizos, Konstantin Hemker, and Björn Schuller. 2019. Augment to prevent: short-text data augmentation in deep learning for hate-speech classification. In *Proceedings of the 28th ACM international conference on information and knowledge management*, pages 991–1000. Connor Shorten, Taghi M Khoshgoftaar, and Borko Furht. 2021. Text data augmentation for deep learning. *Journal of big Data*, 8(1):1–34. Isaac Triguero, Salvador Garcia, and Francisco Herrera. 2015. Self-labeled techniques for semi-supervised learning: taxonomy, software and empirical study. *Knowledge and Information systems*, 42(2):245–284. Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. 2021. Counterfactual invariance to spurious correlations: Why and how to pass stress tests. *arXiv preprint arXiv:2106.00545*. Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, and Ed Chi. 2020. Catgen: Improving robustness in nlp models via controlled adversarial text generation. *arXiv preprint arXiv:2010.02338*. Tianlu Wang, Diyi Yang, and Xuezhi Wang. 2021. Identifying and mitigating spurious correlations for improving robustness in nlp models. *arXiv preprint arXiv:2110.07736*. Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. *Advances in Neural Information Processing Systems*, 33:6256–6268.

## A CST Algorithm

**Algorithm 1** CST

**Input:** labeled training data $D_L = \{(x_i, y_i)\}_{i=1}^{l}$, unlabeled data $D_U = \{X_i\}_{i=l+1}^{l+u}$ with $X_i = \{\hat{x}_j^{(i)}\}_{j=1}^{k_i}$, set function $h$

1: train a base classifier $f^{base}$ on $D_L$
2: $D_{aug} \leftarrow \emptyset$
3: **for** $i = l+1, l+2, \dots, l+u$ **do**
4: $\tilde{y}_i \leftarrow h(X_i; f^{base})$
5: $D_{aug} \leftarrow D_{aug} \cup \{(\hat{x}_j^{(i)}, \tilde{y}_i)\}_{j=1}^{k_i}$
6: train $f$ on $D_L \cup D_{aug}$
7: **return** $f$

## B CGA Algorithm

**Algorithm 2** CGA

**Input:** labeled training data $D_L = \{(x_i, y_i)\}_{i=1}^{l}$, unlabeled data $D_U = \{X_i\}_{i=l+1}^{l+u}$ with $X_i = \{\hat{x}_j^{(i)}\}_{j=1}^{k_i}$, number of samples to generate from each original sample $n$, score function $s$, threshold $T$

1: $D_{pairs} = \{(\hat{x}_j^{(i)}, \hat{x}_{j'}^{(i)}) \mid l+1 \le i \le l+u,\; j, j' \in [k_i]\}$
2: train a generative model $M$ on $D_{pairs}$
3: $D_{aug} \leftarrow \emptyset$
4: **for** $i = 1, 2, \dots, l$ **do**
5: generate $n$ new samples $\hat{x}_1^{(i)}, \dots, \hat{x}_n^{(i)}$ with $M$ and $x_i$
6: **for** $j = 1, 2, \dots, n$ **do**
7: **if** $s(\hat{x}_j^{(i)}, x_i) \ge T$
8: **then** $D_{aug} \leftarrow D_{aug} \cup \{(\hat{x}_j^{(i)}, y_i)\}$
9: train $f$ on $D_L \cup D_{aug}$
10: **return** $f$

## C Hierarchical-FastText

Hierarchical-FastText (HFT) consists of 4 FastText models $\{f_i\}_{i=1}^{4}$. At training time, each $f_i$ is trained over the same data samples, but with a different granularity of the labels: $f_1$ is trained using only the first level of the labels $L_1$, $f_2$ is trained using the first and second levels of the labels $L_1$ and $L_2$, and so on. At inference time, we use an iterative method, where at each iteration $i$, for $i = 1, \dots, 4$, we predict the label using $f_i$. If $f_i$ agrees with $f_{i-1}$ on the label up to the level $L_{i-1}$, the process continues; otherwise it returns the prediction of $f_{i-1}$. If the process gets to the end, i.e., $f_4$ agrees with $f_3$ on the label up to $L_3$, it returns the prediction of $f_4$ as the final prediction.
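A minimal Python sketch of Algorithm 1 (CST). The classifier interface and the set function h — assumed here to be a majority vote over the base classifier's per-item predictions within a cluster — are illustrative, not the authors' implementation.

```python
from collections import Counter

def cst(train, clusters, make_classifier):
    """Consistency self-training (Algorithm 1, sketch).

    train:           list of (x, y) labeled examples (D_L)
    clusters:        list of lists of unlabeled items, one list per cluster (D_U)
    make_classifier: factory returning an object with fit(X, y) / predict(X)
    """
    # 1: train a base classifier on the labeled data
    f_base = make_classifier()
    f_base.fit([x for x, _ in train], [y for _, y in train])

    # 2-5: assign a single pseudo-label to every cluster (set function h,
    # assumed here to be a majority vote over the per-item predictions)
    d_aug = []
    for items in clusters:
        preds = f_base.predict(items)
        y_tilde = Counter(preds).most_common(1)[0][0]
        d_aug.extend((x, y_tilde) for x in items)

    # 6: retrain on D_L ∪ D_aug
    f = make_classifier()
    xs, ys = zip(*(train + d_aug))
    f.fit(list(xs), list(ys))
    return f
```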
bhathena-etal-2023-efficient
An efficient method for Natural Language Querying on Structured Data
https://aclanthology.org/2023.acl-industry.31
We present an efficient and reliable approach to Natural Language Querying (NLQ) on databases (DB) which is not based on text-to-SQL type semantic parsing. Our approach simplifies the NLQ on structured data problem to the following "bread and butter" NLP tasks: (a) Domain classification, for choosing which DB table to query or whether the question is out-of-scope, (b) Multi-head slot/entity extraction (SE) to extract the field criteria and other attributes such as its role (filter, sort etc) from the raw text, and (c) Slot value disambiguation (SVD) to resolve/normalize raw spans from SE to a format suitable for querying a DB. This is a general purpose, DB language agnostic approach and the output can be used to query any DB and return results to the user. Also, each of these tasks is extremely well studied, mature, easier to collect data for, and enables better error analysis by tracing problems to specific components when something goes wrong.
# An Efficient Method For Natural Language Querying On Structured Data Hanoz Bhathena¶, Aviral Joshi1¶**, Prateek Singh**1¶ ¶Machine Learning Center of Excellence, JPMorgan Chase & Co. {hanoz.bhathena,aviral.joshi,prateek.x.singh}@jpmchase.com ## Abstract We present an efficient and reliable approach to Natural Language Querying (NLQ) on databases (DB) which is not based on text-toSQL type semantic parsing. Our approach simplifies the NLQ on structured data problem to the following "bread and butter" NLP tasks: (a) Domain classification, for choosing which DB table to query, whether the question is out-ofscope (b) Multi-head slot/entity extraction (SE) to extract the field criteria and other attributes such as its role (filter, sort etc) from the raw text and (c) Slot value disambiguation (SVD) to resolve/normalize raw spans from SE to format suitable to query a DB. This is a general purpose, DB language agnostic approach and the output can be used to query any DB and return results to the user. Also each of these tasks is extremely well studied, mature, easier to collect data for and enables better error analysis by tracing problems to specific components when something goes wrong. ## 1 Introduction With the recent revolution in information retrieval and question answering, powered by deep learning models, asking queries in a more natural question format e.g. who scored the most points in the NBA back in 2018? have become commonplace instead of keyword based searches. More recently models like ChatGPT 1 directly generate responses instead of just highlighting text in webpages. A large majority of work in large scale QA systems has been on documentQA (Chen et al., 2017; Karpukhin et al., 2020), wherein both the query and the retrieval unit is of text modality. Here, both query and documents are generally embedded into a vector representation (generally in the same D-dimensional space) and fast maximum inner product search is used to retrieve the top documents. Similar approaches have been used for image search (Dubey, 2021). However, when the information to be retrieved is in structured form i.e. table or group of tables in a database; dual embedding approaches are less common. Here, semantic parsing i.e. translating the natural language query into a formal meaning representation e.g. SQL are more common. This has inspired several text-to-SQL approaches Zhong et al. (2017); Yu et al. (2018); Finegan-Dollak et al. (2018); Iyer et al. (2017). Yu et al. (2018) introduced *Spider*, a large-scale complex and crossdomain semantic parsing and text-to-SQL dataset. Models are evaluated on (clause level) exact match between the gold SQL query and the generated one, and the execution accuracy. However, building such models have many practical constraints, perhaps the most important one being collection of domain specific annotated data. Annotators not only need to be well versed in the query language but also have detailed knowledge of the comprehensive database schema, table structure etc. It is one thing to know a unique list of all the tables and/or columns in a database, but it is even more demanding to remember which schema or table they belong to and have to refer to this everytime to manually write an output query (even with templates provided). Additionally, if the schema/table structure changes with columns added/dropped/moved from one table, then some queries might become invalid. 
Furthermore, the models themselves can suffer from some of the common issues associated with auto-regressive generative models such as repetition, hallucination etc. Although these can to some extent be mitigated via decoding constraints, the process is still cumbersome and the gains in most practical use cases might not be worth it. Finally, if the DB type is changed and uses some other query language then annotating newer data might require re-training the annotation team and even existing queries would need to be translated into the second language, which might be difficult for certain language pairs. An alternative end-to-end approach first introduced in Herzig et al. (2020) has also been explored in the literature where the question and flattened table are jointly embedded into a transformer model and trained using masked language modeling (MLM) to table cells. While this approach can work well on smaller tables and documents with text and embedded tables, it would be difficult to scale to large tables and have even bigger controllable generation issues than semantic parsing approaches. In this paper, our primary contribution is proposing a simple yet powerful and configurable framework for a reliable question answering system on structured data by converting the multi-domain NLQ on DB problem to (1) domain prediction (2) multihead slot tagging (3) raw slot value resolution or disambiguation and (4) deterministic algorithm to convert the outputs of the above three into a query for given database. For (1) and (2) we use elemental NLP models for text classification and token classification (like NER), respectively. For slot value resolution we propose a suite of methods depending on the data type of the slot extracted and intrinsic nature of the problem (lexical vs semantic similarity; contextual vs non-contextual resolutions). This approach allows us to annotate data much faster as annotators only need to know the names of the tables and unique set of columns across all tables. Furthermore, we can also individually test, improve and troubleshoot potential defects to particular components of the pipeline as required; something critical in real world production systems. We posit that our approach can be applicable to a large majority of business use cases where the natural language queries being asked do not need overly complex sub-querying and joins. Our method is capable of scaling to when the number of total *unique* columns (which is equivalent to slot types) is of the order of tens to low hundreds and same for number of tables/domains. In terms of queries we are able to theoretically support selection, filtering, sorting and aggregation and joins (in a limited capacity as described below). ## 2 Methodology 2.1 Problem Setting Our goal is to retrieve data from a structured database using natural language questions. Given a user question X and a database with tables T and set of all unique columns C, our framework must be capable of converting it to a database query Q. There must be no dependence on the type of DB (SQL or NoSQL) and we must support select, filter, sort and aggregate clauses and do simple joins. Furthermore, our framework must be reliable enough to apply in a real world commercial setting; scalable to be trained on large quantities of annotated data; easy to gather annotated data without annotators needing to know the query language or full schema details; and engineers and scientists who build the E2E system must be able to troubleshoot issues quickly. 
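Component (4), the deterministic conversion of the predicted domain and resolved slots into a database query (detailed later in Section 2.4), can be sketched as follows; the operator/value encoding is an illustrative assumption, not the production implementation, and the week placeholders come from the paper's own running example.

```python
from collections import defaultdict

def build_where_clause(filters):
    """filters: list of (column, operator, resolved_value) triples produced by
    slot extraction + slot value disambiguation, e.g.
    [("company_name", "=", "Apple Inc"), ("company_name", "=", "Amazon.com Inc"),
     ("date", ">=", "<week_start>"), ("date", "<=", "<week_end>")]."""
    by_column = defaultdict(list)
    for column, op, value in filters:
        by_column[column].append((op, value))
    groups = []
    for column, conds in by_column.items():
        parts = [f"{column} {op} '{value}'" for op, value in conds]
        # equality filters on the same column are OR-ed ("Apple and Amazon"),
        # while range endpoints on the same column are AND-ed (a date range)
        joiner = " AND " if any(op in (">=", "<=", ">", "<") for op, _ in conds) else " OR "
        groups.append("(" + joiner.join(parts) + ")")
    return " AND ".join(groups)

print(build_where_clause([
    ("company_name", "=", "Apple Inc"),
    ("company_name", "=", "Amazon.com Inc"),
    ("date", ">=", "<week_start>"),
    ("date", "<=", "<week_end>"),
]))
# (company_name = 'Apple Inc' OR company_name = 'Amazon.com Inc')
#   AND (date >= '<week_start>' AND date <= '<week_end>')
```

The grouping mirrors the observation made later in Section 2.4: AND conditions are generally inter-column, OR conditions intra-column, with range endpoints on the same column treated separately from equality filters.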
Finally, while adding unique new tables and columns can necessitate need for re-training, editing the schema via dropping, renaming or moving tables or columns should require no re-training of the NLU model components (this can be handled in the query formulation function). In the rest of this paper we demonstrate our approach via an example use case, which is currently deployed in production, of querying two customer transactions tables with eight unique columns. For our particular use case only filtering of data is needed and joins are not needed. However, we will also explain how the framework can be generalized and adapted to other settings. ## 2.2 Slot Domain Model The first two steps of our four step framework consists of a domain classification and slot extraction model, which corresponds to (multi-class or multi-label) text classification and multi-head token classification task, respectively. Depending on the use case these can be separate models or they can be done together using multi-task learning (MTL). There is typically positive transfer between these tasks and so unless there is good reason otherwise, we propose using a model trained with multi-task learning (MTL) for these tasks. Functionally, the domain model would be used to (a) select the appropriate table/collection relevant to the user query and (b) detect when a given query is unanswerable from the available structured data. Having this module is advisable as even in the trivial case of just one table, deciding whether the query is answerable or not is practically crucial in a real world setting where users are free to type in anything. For joining multiple tables, the simpler multi-class text classification problem would become a multi-label classification problem with two heads: one for the table name and another for the type of join (inner, outer, full, or none), with the ![2_image_0.png](2_image_0.png) join done on common column names. The next component is multi-head slot/entity extraction which is framed as a multi head slot tagging task which must predict the *type, role and* function of a given span. The first head which we call the *type* head, predicts a label that corresponds to the column/field name. The second head predicts the *role* of that slot span e.g. filter, sort_asc, sort_desc, aggregation, selection. The *function* head predicts the aggregation function i.e. count, sum, avg etc. The slot type and aggregation function (which can be None for no aggregation) is always multi-class for a given span, but the role is multi-label as the same column can appear in multiple roles e.g. filter and sort or in select and group by. For example, in the query "what were my purchases at Amazon in the last month"; *purchases, Amazon* and *last month* are all slots with role **filter** only. However, for "what were my *largest* purchases at Amazon in the last month", *Amazon* and *last month* are still only role of **filter**, while *purchases* has both **filter** and **sort_desc**. ## 2.2.1 Model Architecture For our purposes we utilize the DIET model proposed by Bunk et al. (2020) whose implementation is available in the Rasa open source library 2. Each head for the domain and slot prediction has a loss value associated with it. Softmax cross entropy loss is used for multi-class heads while sigmoid loss is used for multi-label heads. The total training loss is the sum of all of the individual head losses; with differentiated weighting possible if one or more heads requires it. 
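As an illustration (not the system's actual schema), the example query above could be represented by the three tagging heads roughly as follows; mapping "purchases" to the `txn type` slot and the `retail` domain are assumptions consistent with the dataset described later in Section 3.1.

```python
# "what were my largest purchases at Amazon in the last month"
domain = "retail"   # predicted by the domain head (table selection / out-of-scope)

tagged_slots = [
    {"span": "purchases",  "type": "txn type",
     "roles": ["filter", "sort_desc"], "function": None},
    {"span": "Amazon",     "type": "merchant",
     "roles": ["filter"],              "function": None},
    {"span": "last month", "type": "dates",
     "roles": ["filter"],              "function": None},
]
```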
We use balanced mini-batching to handle class imbalance and utilize data augmentation to mitigate most underrepresented classes. In our use case we use both sparse and dense feature inputs to DIET. Sparse features include word and character n-gram counts while the dense features are sentence level ([CLS]) and token level features from a BERT (Devlin et al., 2018) pretrained language model, with in-domain masked language modeling (MLM) pre-training. Since we use sub-word tokenization, we employ a general purpose custom token to span level label aggregation and conflict resolution also. 2https://github.com/RasaHQ/rasa Furthermore, since we utilize MTL for our use case, it is possible that the slot type i.e. column and domain i.e. table might not match as our model does not have direct knowledge of the schema. One way to solve for this would be adding heuristic rules in the query formulation step to handle conflicts, say based on confidence, which is what we use. Another method is not to use a joint model, rather first predict the domain (which can map to a table) and then train separate slot models for each. ## 2.3 Slot Value Disambiguation Slot values extracted from the tagging model are not fit to be used directly for querying a database. A user might enter "last week" which needs to be resolved to an actual date range. They can enter free form text with spelling mistakes or abbreviations such as "amzn" instead of "Amazon.com, Inc"; or in more complex cases use totally different values than enumerated names or codes in the DB e.g. say "coffee" whereas DB only has a category called "Beverages" or say "hotel" while the DB has value of "Hospitality". Therefore, the free form slot value from a user input must be resolved to a compatible value (generally from some knowledge base) before querying the DB. This task is commonly called named entity disambiguation (NED) or entity linking when we are in the context of Named Entity Recognition (NER). However, since in our case this is not only needed for named entities but broader types and values e.g. dates, numbers, amounts, expense categories etc we call this more generic step Slot Value Disambiguation (SVD). Our solution for SVD utilizes the prediction of the type head of the upstream slot tagger to determine the type of disambiguation treatment applied to the slot value. At a high level, we categorize the type of slot values into four distinct categories. ## 2.3.1 Numeric Slot Values Queried With Strict Equality Only Numeric slots like account numbers, phone numbers, SSNs etc. which generally have distinct values which require an exact (or partial e.g. last 4 digits of credit card) lookup into the database, but are never queried as ranges. For example, phone numbers are not generally queried as a range and hence such slot types will be used with minimal post processing for lookup against the database, such as removing dashes, commas or other nonnumeric characters as needed. ## 2.3.2 Numeric Slot Values Queried With Equality Or Ranges Other numeric values e.g. date, amounts, percentages, area etc are potentially queryable using equality (e.g. "6/1") or ranges (e.g. "6/1-9/2") and they also need to be normalized e.g. a dollar amount could be represented as "$2,000" or "2000 dollars" or "two thousand" etc. To normalize above into a standard format we utilize Duckling 3, 4. Duckling is an open-source probabilistic parser to detect slots like dates, times, amounts and durations. It then resolves these to standard values using rule based methods. 
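A minimal PyTorch sketch of the multi-head objective described in Section 2.2.1: softmax cross-entropy for the multi-class heads (domain, slot type, aggregation function) and a sigmoid/BCE loss for the multi-label role head, summed with optional per-head weights. Head names and shapes are illustrative; this is not the DIET implementation.

```python
import torch.nn.functional as F

def multitask_loss(outputs, targets, weights=None):
    """outputs/targets: dicts of tensors keyed by head name.
    Multi-class heads use softmax cross-entropy; the multi-label `role`
    head uses a sigmoid (binary cross-entropy with logits) loss."""
    weights = weights or {}
    losses = {
        "domain":    F.cross_entropy(outputs["domain"], targets["domain"]),
        "slot_type": F.cross_entropy(outputs["slot_type"], targets["slot_type"]),
        "function":  F.cross_entropy(outputs["function"], targets["function"]),
        "role": F.binary_cross_entropy_with_logits(outputs["role"],
                                                   targets["role"].float()),
    }
    # total loss is the (optionally weighted) sum of the individual head losses
    return sum(weights.get(name, 1.0) * loss for name, loss in losses.items())
```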
We found the entity extraction quality of Duckling quite inferior to our in-domain trained DIET model. Therefore, unlike typical usage of duckling for both slot/entity extraction and disambiguation, we use it only for disambiguation and provide the type of slot derived from the upstream tagger (DIET). To handle potentially conflicting DIET and Duckling types we supplement our SVD with a set of curated rules, see Table 4 for full details. Additionally, our solution supplements Duckling's rules by accounting for many more variations which are seen in natural language utterances, see appendix A.1 for more details. ## 2.3.3 Non-Contextual Textual Svd For remaining types of slot values, SVD involves mapping the raw value in the slot to one from a provided knowledge base (KB). The mapping can be non-contextual (takes only slot value span as input) or contextual (needs the entire utterance context for the mapping). For non-contextual SVD, our approach relies on the comparing the similarity between the raw span extracted from tagger against a finite set 5 of potential resolution candidates to determine the final normalized value which is used for querying the database. We create a Resolution to Candidates (R2C) mapping from the knowledge base containing final enumerated resolutions and a corresponding list of candidates. The raw text span from the tagger is compared against the candidate list and the top-N candidates are selected which are then mapped back to the final resolution using the inverse R2C mapping. The candidates are chosen via a semi-automated approach. If available a subject matter expert can provide a seed list of candidates, 3https://github.com/treble-ai/pyduckling 4https://github.com/facebook/duckling 5finite but need not be static i.e. the list of final resolutions can change without necessarily needing re-training ![4_image_0.png](4_image_0.png) however this is completely optional and might not be conducive in larger scale settings which is why we have a multi-stage automated approach to generate additional candidates as follows: - Most frequent span values annotated to a particular resolution from training utterances are added as candidates to the R2C mapping. - Sometimes, the same raw span might be mapped to more than one resolution. If we are in a multi-label setting, this is OK. However, for the multi-class case we tie break by choosing the resolution for which a particular span was most often assigned to by human annotators. - We also generate synonyms for selected candidates by augmenting them with character perturbations and also derive candidate phrase synonyms by choosing top N most similar phrases from their phrase vector representations. Which phrase vector representation is a design choice; but we use Trask et al. (2015) in our experiments and its associated library 6 We use lexical and semantic similarity between slot values and candidates from the R2C mapping. More technical details on these techniques are provided in appendix A.2. ## 2.3.4 Contextual Textual Svd For certain types of slot disambiguation, the normalized value might be different according to the context e.g. "apple" in "dining table purchased at the apple store" should resolve to "Big Apple Antiques", a furniture store and not "Apple Inc". Here, 6https://github.com/explosion/sense2vec the context is essential for the slot value disambiguation as only the word "apple" is not enough. 
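A self-contained sketch of the non-contextual SVD lookup from Section 2.3.3: a length-normalized Levenshtein similarity between the raw span and each candidate in the R2C mapping, with the best candidate mapped back to its resolution. The example R2C entries are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    # standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    # length-normalized Levenshtein similarity in [0, 1]
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

def resolve(span: str, r2c: dict) -> str:
    """r2c maps each final resolution to its list of candidate surface forms.
    Return the resolution whose best candidate is most similar to the span."""
    best = max(((res, max(similarity(span.lower(), c.lower()) for c in cands))
                for res, cands in r2c.items()), key=lambda t: t[1])
    return best[0]

r2c = {"Starbucks Card": ["starbucks card", "starbks crd"],
       "Amazon.com, Inc": ["amazon", "amzn", "amazon.com"]}
print(resolve("starbks crd", r2c))   # -> "Starbucks Card"
```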
Finally, contextual and non-contextual SVD methods can be ensembled using either unconditional (use all models) or conditional (only trigger other models if first one is less confident) as described in appendix A.3. ## 2.4 Query Formulation Given the table name and resolved slot values, types and roles; we can write a deterministic function to generate the query. The filtering clause can get slightly more complicated because filters can be connected by AND, OR conditions and can have ranges or equality conditions. We posit that AND conditions are generally be among inter-column filters and OR for intra-column filtering e.g. "how many stocks of Apple and Amazon did I purchase last week". Even though the user says "Apple and Amazon", the intent is to filter on both companies, the (SQL) query actually would be something like *where (company_name="Apple Inc" OR company_name=Amazon.com Inc) AND (date between* <week_start> AND <week_end>). Additionally, notice how we separated filter conditions on equality from ranges and this (also needed for SVD) must be done by defining the type of the given column. ## 3 Experimental Details 3.1 Dataset Statistics for the domain and slot/entity types on our internal dataset are provided in Table 1. We aim to support searches on two large database tables in the retail and investment space. We add a third domain *other* which serves as a background class that bypasses the remaining pipeline for unsupported queries. There are eight slot types in our dataset. ## 3.2 Data Augmentation For Domain Slot Model In order to mitigate high imbalance and improve robustness we used various data augmentations. - Backtranslation: Generated semantically similar utterance reformulations by translating an utterance to another language and then translating it back to English. We then annotated these new utterances for missing slot tags using combination of exact match and certain user defined transformation functions for | Domains | W/o Augmentation | Augmented | |----------------|--------------------|-------------| | retail | 9871 | 67548 | | investments | 541 | 1807 | | other | 3150 | 15763 | | Slots | W/o Augmentation | Augmented | | acc num | 313 | 1909 | | amounts | 5800 | 56931 | | dates | 4096 | 22914 | | merchant | 2557 | 17147 | | prod. name | 649 | 4448 | | prod. type | 1482 | 7356 | | spend category | 1463 | 6229 | | txn type | 6564 | 35439 | | Slots | P | R | F1 | |-------------------|-------|--------|-------| | account number | 86.09 | 100.00 | 92.52 | | amounts | 98.13 | 99.25 | 98.68 | | dates | 95.36 | 95.08 | 95.19 | | merchant | 79.93 | 90.61 | 84.93 | | product name | 90.29 | 95.64 | 92.89 | | product type | 93.03 | 92.86 | 92.94 | | spending category | 84.64 | 82.75 | 83.68 | | txn type | 94.53 | 95.09 | 94.81 | | micro avg | 92.53 | 95.17 | 93.83 | | macro avg | 92.41 | 94.78 | 93.51 | | weighted avg | 92.77 | 95.17 | 93.91 | perturbed slot/entity spans e.g. "$50" would match against "50 dollars". - Paraphrase generation: Same idea as backtranslation, we used the Pegasus model (Zhang et al., 2019), fine-tuned for paraphrasing, 7to generate semantically similar utterances to a source utterance. We used similar post-processing described in back-translation. - Keyboard perturbations: Introduce character errors in words based on the proximity of character keys on the keyboard, based on a probability. - Swap perturbations: Characters within a word swapped, based on a probability. 
- Deletion perturbations: Deleting randomly chosen characters from word(s) in an utterance, based on a probability. - Short utterances: Generate short utterances, given longer ones using keyword models, slot span only utterances and extracting slot spans with minimally required context words around them as standalone utterances. - Math operators for amount and date ranges: Utterances with just operators along with amount and date slots e.g. <$30. ## 3.3 Setup We trained our model for 400 epochs, using a balanced mini-batching strategy with batch sizes increasing linearly from 32 to 64. We used an initial learning rater of 0.001 for our optimizer, and used 7https://huggingface.co/tuner007/pegasus_ paraphrase cross-entropy loss during training. Our domain and slot tagging DIET model had 4 transformer layers and was also trained using MLM along with domain and slot prediction. Along with this we used a dropout rate of 0.2 for the encoder and applied separate dropout to the sparse input layers but none to the attention. ## 3.4 Results In this section we present the results from the different parts of our pipeline. Table 3 reports the performance results of our approach for the components of domain classification, slot value tagging and disambiguation. The domain classification model achieves an F1 score of 89.64, with the maximum confusion occuring with the "other" class. At inference this is mitigated by ignoring "other" predictions if some slot span is predicted. Table 2 shows the performance of our approach on slot tagging. Because, dates and amount type slots are subdivided into equality, from (start of range) and to (end of range), their slot tagging results are an average of the three subtypes. Overall our pipeline is able to recognize relevant slots with avg F1 score of 93.91. Finally, our SVD results can be found in Table 3, as evident from the table, our model performs the best on product type SVD (99.27 F1 score) followed by amounts and transaction type97% accuracy and 92.58 F1 score. For dates and amounts SVD, since we do not map them to classes i.e. the ground truths are point-in-time standardized dates and amounts values, we can only calculate accuracy to measure their performance for them. ## 4 Conclusion We presented a simple, yet effective and highly configurable framework for natural language querying on structured database tables which circumvents | Task | P | R | F1 | Acc. | |----------------------|------|-------|------|--------| | Domain class. | 89.6 | 90.0 | 89.6 | | | Slot tagging | 92.8 | 95.2 | 93.9 | | | SVD (txn type) | 93.0 | 94.8 | 92.6 | | | SVD (prod. type) | 98.6 | 100.0 | 99.3 | | | SVD (prod. name) | 90.8 | 90.1 | 88.9 | | | SVD (spend category) | 93.8 | 87.5 | 89.2 | | | SVD (dates) | 88 | | | | | SVD (amounts) | 97 | | | | some of the practical constraints of generative textto-SQL approaches. While our approach might not be all-encompassing especially w.r.t. complex sub-query generation, we empirically see that these queries are often required only in limited type of applications. Furthermore, the performance of SOTA text-to-SQL approaches today is anyway quite far away from the performance expected for commercial applications and are therefore also effectively limited to simpler queries anyway. In this setting, we posit that our approach could provide a way to quickly collect labelled data and scale to multiple domains and/or database tables while also providing much more interpretability and controllability. 
## Limitations As mentioned previously, the main limitation of our approach is that, very complex joins e.g. sequences of joins of different types and joining on columns which have different names in different tables is not straightforward in our approach. One extension to possibly handle this would be using a decoder to generate the complex sequence of joins and column relations. Note, however that this does not complete revert to the constrained sequence-tosequence decoding as in semantic parsing, as its not for the entire query but only the table joins or the *from* section of a SQL statement. The select, where, order by and group by can still be done via our approach and we could also continue to use MTL. The second limitation of our approach is subquerying capability which currently we do not have a strategy to handle queries which would require them. However, this is notoriously hard even for existing SOTA semantic parsing algorithms e.g. the current leader on the Spider dataset Graphix-T5-3B Li et al. (2023) achieves only 50 Exact Match (EM) accuracy on the extra hard Spider data subset and 61.5 on the Hard subset. Overall this model has a 75.6 EM. Finally, the last limitation is related to comparative evaluation. We did not benchmark our method directly against SOTA semantic parsing text-to-SQL methods on open-source datasets such as Spider. This was because to do this we would have needed to re-annotate Spider or any other dataset with domain, slot extraction and resolution labels and given the size of open source datasets this was infeasible given available annotation resources. However, we can say that on private datasets and use cases, this approach was tested against some existing text-to-SQL approaches and was very competitive especially as we could collect a lot more data for these simpler tasks and also were able to train, evaluate, troubleshoot and improve different components individually. ## Ethics Statement All the work done and discussed in this paper meets and upholds the ACL Code of Ethics. ## References Tanja Bunk, Daksh Varshneya, Vladimir Vlasov, and Alan Nichol. 2020. Diet: Lightweight language understanding for dialogue systems. arXiv preprint arXiv:2004.09936. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. *arXiv preprint arXiv:1704.00051*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Shiv Ram Dubey. 2021. A decade survey of content based image retrieval using deep learning. *IEEE* Transactions on Circuits and Systems for Video Technology, 32(5):2687–2704. Catherine Finegan-Dollak, Jonathan K Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-sql evaluation methodology. *arXiv preprint* arXiv:1806.09029. Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating large-scale inference with anisotropic vector quantization. In International Conference on Machine Learning. Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. Tapas: Weakly supervised table parsing via pre-training. *arXiv preprint arXiv:2004.02349*. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. 
arXiv preprint arXiv:1704.08760. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3):535–547. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick ˘ Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. *arXiv preprint* arXiv:2004.04906. Jinyang Li, Binyuan Hui, Reynold Cheng, Bowen Qin, Chenhao Ma, Nan Huo, Fei Huang, Wenyu Du, Luo Si, and Yongbin Li. 2023. Graphix-t5: Mixing pretrained transformers with graph-aware layers for textto-sql parsing. *arXiv preprint arXiv:2301.07507*. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. *arXiv preprint* arXiv:1301.3781. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. *arXiv preprint* arXiv:1909.00161. - For dates add "st", "nd", "rd", "th" etc. as applicable e.g. "1" -> "1st" to help duckling resolution. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. arXiv preprint arXiv:1809.08887. - Prepend extracted amount span values without $ sign with a dollar sign, Ex. "$30 to 200" or "$30-200" can only be interpreted correctly when $ is prepended to 200. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103. ## A Appendix A.2 Non-Contextual Svd Methods A.1 Supplemental Rules For Numerical Range Svd Using Duckling Since we use duckling just for disambiguation of already extracted slots and not how its typically used i.e. extraction and disambiguation with the entire utterance provided, we need to (1) resolve for certain potential conflicts between slot type from our trained DIET slot tagger and duckling's internal slot type, see Table 4 which contains one such example for date ranges; (2) help improve duckling's resolution performance using certain pre and post-processing rules as below: - Merging separate slot spans split by the slot tagger. Example: "August 2nd to 10th" are tagged by the slot tagging model as [August 2nd](date_from) to [10th](date_to). In this case duckling would fail to resolve "10th" as "10th of August". Hence we merge two otherwise independent spans to [August 2nd to 10th] from which duckling is capable of resolving the range correctly. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26. - Use regex patterns to map diverse date formats e.g. "mm/ddyy", "mmdd/yy", "mmddyy", "mm/ddyyyy", "mmdd/yyyy", "mmddyyyy" to standardized "mm/dd/yyyy" to make SVD process output more consistent and reliable. - Combining multiple duckling outputs during post-processing e.g. 
for "May 2021", duckling does not understand it is May in the year 2021 but splits "May" and "2021" and detects it as May of the current year (say 2022) and the entire year of 2021, which is incorrect. Our custom post-processing corrects the resolution. Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec-a fast and accurate method for word sense disambiguation in neural word embeddings. arXiv preprint arXiv:1511.06388. ## A.2.1 Lexical Ranking This ranking approach relies on the lexical similarity between the raw slot value in the tagged span | Duckling | Slot | from_date | to_date | |------------|-----------|-------------|-----------| | value | date_eq | from_date | to_date | | value | date_from | from_date | None | | value | date_to | None | to_date | | from | date_eq | from_date | None | | from | date_from | from_date | None | | from | date_to | from_date | None | | to | date_eq | None | to_date | | to | date_from | None | to_date | | to | date_to | None | to_date | | from+to | date_eq | from_date | to_date | | from+to | date_from | from_date | to_date | | from+to | date_to | from_date | to_date | and the candidates of the slot type R2C mapping to identify the most relevant candidate. The top candidate(s) are used to lookup the inverse R2C mapping to obtain the final resolutions. For example, consider the phrase "starbks crd" in a user utterance of the form, "my purchases using my starbks crd", needs to be resolved to the name "Starbucks Card". We might not be able to come up with all possible variations, mis-spellings, abbreviations, or synonyms of *Starbucks* and so we collect these from our training data (SVD ground truth labels tagged by human annotators) to improve our R2C mapping for recall. Then we use fuzzy string matching, specifically a length normalized Levenshtein distance, but other string similarity metrics could also work. The advantages of this Fuzzy string match approach are: - Relatively quick to execute, especially when the candidate list is small. - Needed when slot values and resolutions are more lexically than semantically similar e.g. people names, company names etc. The disadvantages are as follows: - If words have similar meaning but widely differ in characters (e.g. beverages and coffee) then simple string similarity is insufficient. While this can be somewhat mitigated by R2C augmentations from labelled data, one needs large enough dataset for this. - Does not use context surrounding the word hence cannot be utilized for contextual methods. ## A.2.2 Semantic Similarity With certain spans it might be preferable to use semantic information for disambiguation. For the coffee and beverage example above, using lexical similarity would lead to poor results if "coffee" was not a candidate in the R2C mapping for "beverage". The most intuitive method to do this is by training a phrase embedding model using classical techniques like word2vec (Mikolov et al., 2013b,a) or GloVe (Pennington et al., 2014) and calculating a phrase similarity of the raw slot value embedding against the candidates in the inverse R2C mapping. For very large number of candidates where pairwise similarity is impractical, approximate nearest neighbor (ANN) algorithms like FAISS (Johnson et al., 2019) or ScaNN (Guo et al., 2020) could be used. Finally, we also experimented with an alternative zero-shot approach which can be used in certain cases. 
Based on the method put forth in (Yin et al., 2019) we formulate our task into one of textual entailment, where spans and the candidates are converted into (premise, hypothesis) pairs using a predefined template, with high entailment score signifying semantic similarity. The advantages of using semantic similarity: - Works even when the candidate is not lexically similar to the span value mentioned in the customer utterance. - Can be used in unsupervised way but can also be fine-tuned to specific domain if training data is available. - These methods can be used for contextual SVD as well, if in-domain data is available. The disadvantages are as follows: - Might need in domain training data especially for specialized domains where unsupervised or self-supervised learning is insufficient. - On average are slower than a string based algorithms, although this is less of a problem in recent times due to availability of fast ANN algorithms as mentioned earlier. ## A.3 Ensemble Svd Re-Ranking We can chain multiple SVD components for the same slot value resolution, choosing the best resolution using pre-defined criteria such as majority voting. The decision of whether to execute all SVD ![9_image_0.png](9_image_0.png) components in the sequential chain can be based on confidence (conditional chaining) or not (unconditional chaining). Conditional chaining works as follows and is highlighted in figure 3: - The first SVD module returns the top candidates using a threshold. - If the confidence score exceeds a set value and/or the first N values are within a given ambiguity threshold we directly return the SVD outputs. - Else, we proceed to the next SVD module to help improve the final disambiguation, repeating this until the last available SVD module or until a high confidence prediction can be made.
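A sketch of the conditional chaining described above, assuming each SVD module exposes a `(resolution, confidence)` interface; the threshold value and the fallback behaviour are illustrative simplifications of Figure 3.

```python
def chain_svd(span: str, context: str, modules, confidence_threshold=0.8):
    """modules: ordered list of callables (span, context) -> (resolution, confidence).
    Return the first sufficiently confident resolution; otherwise fall back to
    the highest-confidence resolution seen across the chain."""
    best = (None, 0.0)
    for module in modules:
        resolution, confidence = module(span, context)
        if confidence >= confidence_threshold:
            return resolution              # early exit: confident enough
        if confidence > best[1]:
            best = (resolution, confidence)
    return best[0]
```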
chen-etal-2023-boosting
Boosting Transformers and Language Models for Clinical Prediction in Immunotherapy
https://aclanthology.org/2023.acl-industry.32
Clinical prediction is an essential task in the healthcare industry. However, the recent success of transformers, on which large language models are built, has not been extended to this domain. In this research, we explore the use of transformers and language models in prognostic prediction for immunotherapy using real-world patients{'} clinical data and molecular profiles. This paper investigates the potential of transformers to improve clinical prediction compared to conventional machine learning approaches and addresses the challenge of few-shot learning in predicting rare disease areas. The study benchmarks the efficacy of baselines and language models on prognostic prediction across multiple cancer types and investigates the impact of different pretrained language models under few-shot regimes. The results demonstrate significant improvements in accuracy and highlight the potential of NLP in clinical research to improve early detection and intervention for different diseases.
# Boosting Transformers And Language Models For Clinical Prediction In Immunotherapy Zekai Chen and **Mariann Micsinai Balan** and **Kevin Brown** Bristol-Myers Squibb, NJ, USA {zekai.chen}@bms.com ## Abstract Clinical prediction is an essential task in the healthcare industry. However, the recent success of transformers, on which large language models are built, has not been extended to this domain. In this research, we explore the use of transformers and language models in prognostic prediction for immunotherapy using real-world patients' clinical data and molecular profiles. This paper investigates the potential of transformers to improve clinical prediction compared to conventional machine learning approaches and addresses the challenge of few-shot learning in predicting rare disease areas. The study benchmarks the efficacy of baselines and language models on prognostic prediction across multiple cancer types and investigates the impact of different pretrained language models under few-shot regimes. The results demonstrate significant improvements in accuracy and highlight the potential of NLP in clinical research to improve early detection and intervention for different diseases. ## 1 Introduction Predicting and measuring treatment response is among the most fundamental tasks in clinical medicine. Particularly, in cancer immunotherapy (Pardoll, 2012), antibodies against programmed death-1/programmed death ligand 1 (PD1/PD-L1) have led to US FDA approval of several PD-1/PD-L1 treatment strategies for patients with metastatic cancer. However, not all patients derive clinical benefits (Topalian et al., 2016), emphasizing the need to identify who will respond to immunotherapy (Chowell et al., 2021). Thus, accurate treatment response and disease progress forecast based on the patient's clinical features and molecular profile will effectively improve the treatment efficiency and spur the development of precise medication. In order to facilitate medical decisionmaking and health outcomes, clinical prediction models (Steyerberg, 2008; Smeden et al., 2021) ![0_image_0.png](0_image_0.png) play an increasingly crucial role in contemporary clinical care by informing professionals, patients, and their relatives about outcome risks. Given the fact that most clinical data is stored in tabular form, current mainstream machine learning approaches (Topol, 2019; Rajkomar et al., 2019) to cancer prognosis (Chowell et al., 2021) are still tree-based ensemble models such as boosting (Chen and Guestrin, 2016; Ke et al., 2017) and bagging (Breiman, 2004; Ishwaran et al., 2019). In contrast, transformers (Vaswani et al., 2017) have revolutionized enormous fields including natural language processing (NLP) (Devlin et al., 2019; Brown et al., 2020) and computer vision (Dosovitskiy et al., 2021). Many attempts to apply transformers on tabular modeling (*e.g.*, TabTransformer Huang et al., 2020) have also achieved success. Considering that the disparity between clinical data and other natural tabular data is not large, it is appealing that we can also translate this success from other domains to clinical prediction. As such, we seek to answer the first question in this 332 Serialization Finetuning ![1_image_0.png](1_image_0.png) paper: *To what extent can transformers promote* the performance of clinical prediction compared to conventional machine learning approaches? 
Although transformers have advantages in modeling high-dimensional tabular data thanks to the capacity of long-distance dependency modeling, their efficacy can still be hampered when labeled data is scarce given the nature of data-hungry and low inductive bias (d'Ascoli et al., 2021). This could be vital to predicting many rare disease areas where historical patient records are extremely limited (Haendel et al., 2019). Our pilot investigations (see Figure 1) also confirmed this. Meanwhile, we seek to provide a systematic solution to the clinical prediction that functions both in the presence and absence of much labeled data. Recently, large language models (LLMs) built as a stack of transformers such as BERT (Devlin et al., 2019), GPT3 (Brown et al., 2020) provide a viable direction. The simple and scalable self-supervised learning (*e.g.*, masked signal prediction (Devlin et al., 2019; Chen et al., 2022)) on a nearly unlimited corpus of text (*e.g.*, PubMed1, PMC2) has led LLMs to not only continuous performance improvements but also a surprising emergence of in-context learning capability, which is especially powerful under settings with only a small number of learning samples also known as few-shot learning (Snell et al., 2017; Sanh et al., 2022). Though recent work has demonstrated that LLMs are good few-shot clinical information extractors (Agrawal et al., 2022), this success has yet not been extended to tasks with a higher precision requirement, such as cancer prognostic prediction. In this work, we therefore seek to address this second question: How can language models boost clinical prediction in few-shot settings? In addressing these questions, we conduct a benchmarking study on a real-world clinical dataset MSK-IMPACT (Chowell et al., 2021) to assess the efficacy of a set of baselines and LLMs on prognostic prediction across multiple cancer types (melanoma, NSCLC, bladder, etc.). More importantly, we explore how different pretrained LLMs using different knowledge resources (domainspecific or domain-agnostic) may affect the downstream performance of clinical prediction, especially under few-shot settings. Our results show significant improvements in accuracy through overall survival, progression-free survival and best overall response prediction across multiple disease types. ## 2 Llms For Few-Shot Clinical Prediction Figure 2 is an overview of applying LLMs for clinical prediction. As discussed in Section 1, purely supervised learning via transformer encoders is often hampered when training samples are limited. LLMs provide a viable direction with astonishing in-context learning capability that exploits knowledge from other resources to downstream tasks with minimal tuning. Serialization. To leverage LLMs on clinical tabular data, the feature columns must be serialized into sequences of natural language tokens that LLMs can comprehend and encode. Recently, there have been a few trials (Yin et al., 2020; Bertsimas et al., 2022) investigating various serialization techniques and exploring the corresponding performance across different tasks, which turns out that LLMs for tabular modeling rely more on the correct values than the structure of the features (Hegselmann et al., 2022). 
To avoid repetitive work, in this work, we focus more on how different pretrained LLMs using different knowledge sources may affect the prediction performance by simply following a manual serialization template, The {attribute} is {value}., which has been proven to generate competitive results compared to other LLMs prompting-based regeneration methods by Hegselmann et al. (2022). Knowledge Sources. The pretraining corpus is also known as the knowledge source for LLMs. Clinical language is notably different from the standard NLP text in terms of vocabulary and syntax (Wu et al., 2019). As a result, following advancements in language modeling from the larger NLP community, the clinical NLP sub-community frequently trains domain-specific models on clinical corpora. Following BERT (Devlin et al., 2019), various clinical and biomedical versions appeared quickly, including BioBERT (Lee et al., 2019), ClinicalBERT (Alsentzer et al., 2019), SciBERT (Beltagy et al., 2019), PubMedBERT (Gu et al., 2020), etc. However, domain-agnostic LLMs like GPT-3 have so far been unable to achieve competitive results on biomedical NLP tasks (Moradi et al., 2021; Gutierrez et al., 2022), revealing the fact that the relevance and the knowledge reservation of pretraining sources have a significant impact to the knowledge migration in downstream tasks (*e.g.*, finetuning or prompting). Thus, we aim to evaluate the downstream performance in few-shot settings with a few different LLMs pretrained on different resources and benchmark the gaps. Omnivorous Loss Objective. Compared to conventional machine learning approaches, deep learning allows efficient end-to-end learning of image/text encoders in the presence of multi-modality along with tabular data benefiting from the modularized design. More importantly, the customized loss objectives corresponding to different tasks can often be combined for joint training, also known as multi-task learning (Ruder, 2017). The inductive transfer across related tasks can help improve a model by introducing an inductive bias, which causes a model to prefer some hypotheses over others, that generally leads to solutions that generalize better. In cancer prognostic prediction, we usually have multiple endpoints to predict. For example, *overall survival* (OS), progression-free survival (PFS), and *best overall response* (BOR), etc. As such, in this work, we consistently adopt a join learning paradigm that merges multiple endpoints into one unified loss objective Lf for all studies using the following term: $$L_{f}=\sum_{i}^{I}\alpha_{i}\ell_{i}$$ αi`i (1) ![2_image_0.png](2_image_0.png) where I is the total number of tasks and αi represents the soft weight for any task i. More specifically, in our experiments, we adopt CrossEntropy loss for BOR and CoxPH loss for OS and PFS prediction following DeepSurv (Katzman et al., 2018). ## 3 Experiments And Results Data. This dataset is acquired by Memorial Sloan Kettering Cancer Center (MSKCC) from a comprehensively curated cohort (MSK-IMPACT) with 1,479 patients treated with immune checkpoint blockade (ICB) across 16 different cancer types (Chowell et al., 2021), where patients are either responder (R) or non-responders (NR) to the treatment (PD-1/PD-L1 inhibitors, CTLA-4 blockade or a combination) based on Response Evaluation Criteria in Solid Tumors (RECIST) v1.1 (Eisenhauer et al., 2009) or best overall response on imaging. 
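A minimal sketch of the serialization step using the `The {attribute} is {value}.` template mentioned above; the feature names and values shown are illustrative placeholders, not actual MSK-IMPACT fields.

```python
def serialize(row: dict) -> str:
    # one natural-language sentence per tabular feature
    return " ".join(f"The {attribute} is {value}." for attribute, value in row.items())

patient = {"cancer type": "NSCLC", "age": 64, "tumor mutational burden": 7.9}
print(serialize(patient))
# The cancer type is NSCLC. The age is 64. The tumor mutational burden is 7.9.
```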
Each patient was collected up to 16 biological features, including genomic, molecular, clinical, and demographic variables. The train set contains 1,184 patients, and the test set contains 295 patients. The evaluation target is to predict *clinical response* to immunotherapy (binary classification) and both *overall survival* and progression-free survival (regression) in the test data across different cancer. Transformers for Tabular Modeling. As we need to compare with transformer baselines, we also introduce ClinTaT (see Figure 3 right) with some improvements based on the original TabTransformer (Huang et al., 2020), including 1) adding a continuous embedding layer which is consisted of several independent linear layers corresponding to the number of continuous features; 2) directly concatenating the embedded categorical $$(1)$$ ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) and continuous variables together, and feed them into the transformer instead of only categorical variables. Training settings. For fair comparison, we adopt a hidden dimensionality of 768 for both ClinTaT and BERTs (base versions). Specifically, ClinTaT is a stack of 6 transformer encoder layers with 8 heads. To prevent overfitting, we set the attention dropout rate to 0.3 and feedforward dropout rate to 0.1. For BERTs, all layers are frozen while we add one independent encoder on top of it to finetune. In the main figures and tables, we utilize a single linear layer to demonstrate the feasibility of LLMs for few-shot regimes. In ablation studies, we also investigate other encoder types such as another small transformer encoder. The optimizer of AdamW is adopted consistently for all trainings, and the basic learning rates for ClinTaT and BERTs are 1.25e−4and 1.25e−5 with a weight decay of 0.01, correspondingly. A linear warmup (up to 5 epochs with a total training of 200 epochs) with cosine annealing strategy (warmup learning rate is set to 2.5e−7) is also applied. For other machine learning baselines, we utilize the grid search to find the optimal hyper-parameters and report the best results. More details can be found in the appedix. How do transformers promote clinical prediction performance? We first calculated the area under the receiver operating characteristic (ROC) curves using the response probabilities computed by transformers and other baselines. Our proposed ClinTaT achieved superior performance on the test set, as indicated by the area under the curve (AUC), in predicting responders and non-responders across cancer types compared to conventional machine learning models such as logistic regression, random forest, and XgBoost, suggesting that the selfattention mechanism for long-range dependency modeling contributed to the overall prediction performance. (Figure 4, Table 1 using all samples). Furthermore, the differences in OS between responders and non-responders predicted by transformers were significantly higher than differences between responder and non-responder groups predicted by other baselines across various cancer types (Figure 5). 
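A PyTorch sketch of the two ClinTaT modifications described above: independent linear embeddings for the continuous features, concatenated with the embedded categorical features before a transformer encoder (hidden size 768, 6 layers, 8 heads, as stated in the training settings). The pooling, prediction heads, and simplified dropout are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ClinTaTSketch(nn.Module):
    def __init__(self, categorical_cardinalities, n_continuous,
                 d_model=768, n_layers=6, n_heads=8):
        super().__init__()
        # one embedding table per categorical feature
        self.cat_embeds = nn.ModuleList(
            nn.Embedding(card, d_model) for card in categorical_cardinalities)
        # independent linear layer per continuous feature (continuous embedding layer)
        self.cont_embeds = nn.ModuleList(
            nn.Linear(1, d_model) for _ in range(n_continuous))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dropout=0.1, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.response_head = nn.Linear(d_model, 2)   # responder vs. non-responder (BOR)
        self.risk_head = nn.Linear(d_model, 2)       # OS / PFS risk scores (CoxPH)

    def forward(self, x_cat, x_cont):
        # x_cat: (batch, n_categorical) long, x_cont: (batch, n_continuous) float
        tokens = [emb(x_cat[:, i]) for i, emb in enumerate(self.cat_embeds)]
        tokens += [emb(x_cont[:, i:i + 1]) for i, emb in enumerate(self.cont_embeds)]
        h = self.encoder(torch.stack(tokens, dim=1))  # (batch, n_features, d_model)
        pooled = h.mean(dim=1)                        # simple mean pooling (assumption)
        return self.response_head(pooled), self.risk_head(pooled)
```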
Especially for the predicted non-responders, the predicted survival curves almost fit the groundtruth ones perfectly, while it is interesting to observe that transformers tend to underestimate the response probability in an attempt to balance out the prediction performance across different cancer types compared to other baselines (0.809 for ClinTaT versus 0.828 for XgBoost in Fig. 4). It is additionally beneficial to rare disease prediction when the training sample pool is not large. To test whether our approach could also predict overall survival (OS) before the administration of immunotherapy, we further calculated the concordance index (C-index) for OS and PFS, which ranges between 0 and 1 (0.5 being random performance). We found that the C-indices of the ClinTaT predictions were significantly higher than those generated by other baselines (Table 2, pan-cancer C-index 0.724 for ClinTaT versus 0.688 for XgBoost versus 0.682 for Random Forest, p < 0.05; Table 3, pan-cancer C-index 0.684 for ClinTaT versus 0.671 for XgBoost versus 0.666 for Random Forest, p < 0.05). These results demonstrate that the transformers can accurately forecast response, OS, and PFS before administering immunotherapy.

| Model | 6 | 12 | 18 | 24 | 30 | 36 | 42 | 48 | all |
|---|---|---|---|---|---|---|---|---|---|
| LogRes | 0.534 | 0.535 | 0.573 | 0.511 | 0.527 | 0.570 | 0.601 | 0.678 | 0.758 |
| RandomForest | 0.643 | 0.527 | 0.672 | 0.539 | 0.594 | 0.667 | 0.651 | 0.701 | 0.795 |
| XgBoost | 0.500 | 0.602 | 0.670 | 0.586 | 0.613 | 0.664 | 0.651 | 0.681 | 0.796 |
| ClinTaT (ours) | 0.641 | 0.619 | 0.653 | 0.607 | 0.584 | 0.659 | 0.664 | 0.676 | 0.815 |

Table 1: Test AUC performance on treatment response prediction of ClinTaT and other baselines on MSK-IMPACT. Each column reports the k-shot performance for different values of k. ClinTaT outperforms other traditional approaches with all training samples, however *not significant* in most few-shot regimes.

| Model | 6 | 12 | 18 | 24 | 30 | 36 | 42 | 48 | all |
|---|---|---|---|---|---|---|---|---|---|
| LogRes | 0.500 | 0.503 | 0.551 | 0.511 | 0.545 | 0.557 | 0.549 | 0.564 | 0.649 |
| RandomForest | **0.637** | 0.502 | **0.614** | 0.536 | 0.591 | 0.610 | 0.626 | 0.631 | 0.682 |
| XgBoost | 0.500 | 0.555 | 0.601 | 0.539 | **0.618** | 0.628 | 0.614 | 0.609 | 0.688 |
| ClinTaT (ours) | 0.583 | **0.615** | **0.614** | **0.639** | 0.610 | **0.647** | **0.643** | **0.645** | **0.724** |

Table 2: Test **C-index** performance on *Overall Survival* prediction of ClinTaT and other baselines on MSK-IMPACT. ClinTaT generally outperforms other traditional approaches under many settings, however still *not significant* in the very-few-shot regime (*e.g.*, ≤ 6 samples).

| Model | 6 | 12 | 18 | 24 | 30 | 36 | 42 | 48 | all |
|---|---|---|---|---|---|---|---|---|---|
| LogRes | 0.515 | 0.513 | 0.538 | 0.514 | 0.537 | 0.549 | 0.565 | 0.596 | 0.648 |
| RandomForest | **0.611** | **0.529** | **0.612** | 0.532 | 0.580 | 0.619 | **0.615** | **0.627** | 0.666 |
| XgBoost | 0.500 | 0.514 | 0.594 | **0.569** | **0.600** | **0.619** | 0.612 | 0.620 | 0.671 |
| ClinTaT (ours) | 0.585 | 0.505 | 0.547 | 0.520 | 0.538 | 0.553 | 0.555 | 0.617 | **0.684** |

Table 3: Test **C-index** performance on *Progression-free Survival* prediction of ClinTaT and other baselines on MSK-IMPACT. ClinTaT performs better than other approaches only with all training samples.
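As a side note on the metric, the C-index values reported above can be computed from predicted risk scores with an off-the-shelf implementation; the snippet below is an illustrative sketch using the lifelines package (not part of the original evaluation code), where higher predicted risk is assumed to mean shorter survival, hence the negation.

```python
import numpy as np
from lifelines.utils import concordance_index

# Hypothetical test-set outputs: a higher risk score means shorter expected survival.
predicted_risk = np.array([1.8, 0.2, 0.9, 1.1])    # e.g., log-risk scores from a CoxPH head
survival_time = np.array([5.0, 30.2, 12.5, 8.1])   # observed OS or PFS in months
event_observed = np.array([1, 0, 1, 1])            # 1 = event observed, 0 = censored

# concordance_index expects scores where larger means longer survival, so negate the risk.
c_index = concordance_index(survival_time, -predicted_risk, event_observed)
print(f"C-index: {c_index:.3f}")                   # 0.5 corresponds to random ranking
```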
However, Tables 1, 2, and 3 also show that under settings with only a small number of samples, the prediction capability of transformers does not generalize well (*e.g.*, 0.583 for ClinTaT versus 0.637 for Random Forest with only 6 samples on OS prediction; 0.585 for ClinTaT versus 0.611 for Random Forest with only 6 samples on PFS prediction), due to their data-hungry nature and low inductive bias (discussed in Section 1). How do LLMs boost few-shot learning? Table 4 shows the performance of different BERTs pretrained on different source corpora, followed by a *single linear layer* for finetuning using only the [CLS] token, on the MSK-IMPACT test data (averaged over three seeds). PubMedBERT (Gu et al., 2020) outperforms all other variants and the baseline transformer across all k-shot settings, with an average improvement of over 5%. In the very-few-shot setting (4 samples), the language model finetuning shows significant improvements over the baseline (Table 4, 9.4%), indicating the benefit of the knowledge transfer to downstream tasks brought by LLMs when samples are insufficient.

| Model | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 |
|---|---|---|---|---|---|---|---|---|
| ClinTaT (baseline) | 0.593 | 0.641 | 0.638 | 0.628 | 0.619 | 0.643 | 0.639 | 0.653 |
| BERT (Devlin et al., 2019) | 0.590 | 0.618 | 0.652 | 0.636 | 0.633 | 0.637 | 0.632 | 0.631 |
| BioBERT (Lee et al., 2019) | 0.570 | 0.512 | 0.527 | 0.532 | 0.536 | 0.532 | 0.524 | 0.530 |
| SciBERT (Beltagy et al., 2019) | 0.506 | 0.506 | 0.578 | 0.577 | 0.560 | 0.549 | 0.513 | 0.557 |
| ClinBERT (Alsentzer et al., 2019) | 0.604 | 0.550 | 0.545 | 0.560 | 0.567 | 0.576 | 0.574 | 0.558 |
| PubMedBERT (Gu et al., 2020) | 0.649 (↑9.4%) | 0.643 (↑0.3%) | 0.641 (↑0.5%) | 0.657 (↑4.6%) | 0.663 (↑7.1%) | 0.677 (↑5.3%) | 0.695 (↑8.8%) | 0.685 (↑4.9%) |

| Backbone | Encoder | AUC | C-index (OS) | C-index (PFS) |
|---|---|---|---|---|
| BERT | linear | 0.725 | 0.593 | 0.622 |
| BERT | transformer | 0.773 | 0.699 | 0.657 |
| BioBERT | linear | 0.678 | 0.590 | 0.625 |
| BioBERT | transformer | 0.766 | 0.707 | **0.672** |
| SciBERT | linear | 0.689 | 0.588 | 0.620 |
| SciBERT | transformer | **0.786** | 0.711 | 0.656 |
| ClinBERT | linear | 0.669 | 0.591 | 0.616 |
| ClinBERT | transformer | 0.751 | **0.719** | 0.665 |
| PubMedBERT | linear | 0.745 | 0.599 | 0.634 |
| PubMedBERT | transformer | 0.771 | 0.700 | 0.662 |

Also, our results indicate that the sample efficiency of using LLMs' embeddings is highly dependent on domain knowledge. The performance of SciBERT is worse than that of BioBERT and ClinicalBERT, as SciBERT was pretrained on 1.14M Semantic Scholar articles toward more general scientific knowledge. In contrast, BioBERT and ClinicalBERT were pretrained on more domain-specific corpora, such as PubMed, PMC, and clinical MIMIC-III notes. However, we cannot claim that domain-specific pretraining is necessary for all clinical prediction tasks, as Table 4 also reveals that vanilla BERT is the second best and performs even better than SciBERT, which was pretrained on medical and computer science articles. As we know, vanilla BERT learns more general knowledge from domain-agnostic corpora such as Wikipedia and BookCorpus. One of our preliminary conjectures is that domain-specific knowledge transfer is superior when the pretraining corpus is sufficiently profound. However, the generalization capability learned by domain-agnostic models also works under scenarios where the source knowledge is neither domain-agnostic nor strictly domain-specific.
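To make the setup behind Table 4 concrete, here is a minimal sketch (not the authors' code) of serializing a tabular patient record with the *The {attribute} is {value}.* template, embedding it with a frozen PubMedBERT encoder, and training only a linear head on the [CLS] representation; the Hugging Face checkpoint name, the toy features, and the tiny training loop are illustrative assumptions.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

def serialize(record: dict) -> str:
    # "The {attribute} is {value}." template applied to every tabular feature.
    return " ".join(f"The {attr} is {val}." for attr, val in record.items())

model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
encoder.requires_grad_(False)                      # all LLM layers stay frozen
head = nn.Linear(encoder.config.hidden_size, 2)    # binary responder / non-responder head

def cls_embedding(records):
    texts = [serialize(r) for r in records]
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():                           # encoder is frozen, so no gradients are needed here
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]              # [CLS] token embedding

# Toy k-shot training loop over a handful of labeled patients (hypothetical values).
records = [{"Age": 64, "TMB": 8.2, "Cancer_Type": "Melanoma"},
           {"Age": 71, "TMB": 1.4, "Cancer_Type": "NSCLC"}]
labels = torch.tensor([1, 0])                       # 1 = responder, 0 = non-responder
optimizer = torch.optim.AdamW(head.parameters(), lr=1.25e-5, weight_decay=0.01)
for _ in range(200):
    logits = head(cls_embedding(records))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```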
Additionally, the performance degradation of BioBERT and ClinicalBERT compared to PubMedBERT reveals further interesting findings, as PubMedBERT was pretrained from scratch while the other two models were pretrained by inheriting weights from vanilla BERT and BioBERT v1.0, respectively. Gu et al. (2020) have also pointed out that pretraining does not always benefit from more text, including out-of-domain text. The prior biomedical BERT models had not been pretrained using purely biomedical text. Our Table 4 also shows that domain-specific pretraining from scratch can be superior to mixed-domain pretraining for downstream applications. Though all the results in Table 4 are generated by adding a single linear layer on top of the LLMs for finetuning, we conduct more ablation studies in Table 5 to evaluate the performance change using different encoders (see Figure 2). The transformer in Table 5 consists of only a transformer encoder with a depth of six layers and a dimensionality of 768. The results indicate that adding computational capacity on top of the LLMs can still lift the semantic representation learning of clinical features, as the transformer architecture performs better than a single shallow linear layer. It also provides an alternative way to re-examine the right *size* of LLMs and inspires our next step, which is to adopt larger-scale LLMs such as PubMedGPT, GPT-3, or T5 (Raffel et al., 2019) for clinical prediction.

## 4 Limitations

This study is based on a single clinical cohort consisting of 1,479 patients, which may limit the generalizability of the results to other clinical cohorts. This specific cohort of patients may not be representative enough of the general population, which may introduce a certain level of bias due to dissimilar distributions of gender, age, race, etc. While we envision that the generalization capability of the language models is applicable to other clinical prediction tasks, the focus of this work is mainly the prognostic prediction of cancer immunotherapy, and we have not provided solid evidence that the success also extends to other relevant trials. Additionally, we have only compared a limited set of transformers and language models, and it is possible that other models may perform better on the tasks evaluated in this study. Finally, it is important to note that while the models in this study achieve high accuracy in clinical prediction, the ultimate value of these models in improving patient outcomes will depend on how well they are integrated into clinical decision-making processes and the impact they have on patient care.
Additionally, the use of predictive models to guide clinical decision-making might raise concerns about fair access to healthcare. We hereby ensure that the use of predictive models does not result in the inequitable distribution of healthcare resources and that patients from all socioeconomic backgrounds have equal access to the best possible care. This study uses natural language processing and machine learning algorithms to predict disease prognosis, which may raise broader ethical considerations related to the responsible use of technology in healthcare. We ensure that the use of all approaches discussed in this work is guided by general ethical principles, such as transparency, accountability, and patient-centered care. Even though we focus on relatively large scale language models in this work, our finetuning strategy only requires a considerably small amount of computation as only the encoder part needs to be finetuned. In practice, the single linear layer finetuning can be obtained in about 2 hours on a machine with single Nvidia A10 GPU; training completes within 5 hours on a machine with one Nvidia A10 GPU for another transformer encoder with a depth of 6 and dimensionality of 768. All the pretrained language model weights are publicly available (*e.g.*, huggingface). ## References Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David A. Sontag. 2022. Large language models are zero-shot clinical information extractors. *ArXiv*, abs/2205.12689. Emily Alsentzer, John R. Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew B. A. McDermott. 2019. Publicly available clinical bert embeddings. *ArXiv*, abs/1904.03323. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. EMNLP. Dimitris Bertsimas, Kimberly Villalobos Carballo, Yu Ma, Liangyuan Na, Léonard Boussioux, Cynthia Zeng, Luis R. Soenksen, and Ignacio Fuentes. 2022. Tabtext: a systematic approach to aggregate knowledge across tabular data structures. *ArXiv*, abs/2206.10381. L. Breiman. 2004. Random forests. *Machine Learning*, 45:5–32. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. *NeurIPS*, abs/2005.14165. Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. *SIGKDD*. Zekai Chen, Devansh Agarwal, Kshitij Aggarwal, Wiem Safta, Mariann Micsinai Balan, Venkat S. Sethuraman, and Kevin Brown. 2022. Masked image modeling advances 3d medical image analysis. *WACV*, abs/2204.11716. Diego Chowell, Seong-Keun Yoo, Cristina Valero, Alessandro Pastore, Chirag Krishna, Mark Lee, Douglas R. Hoen, Hongyu Shi, Daniel W. Kelly, Neal Patel, Vladimir Makarov, Xiaoxiao Ma, Lynda Vuong, Erich Sabio, Kate Weiss, Fengshen Kuo, Tobias L. Lenz, Robert M. Samstein, Nadeem Riaz, Prasad S. Adusumilli, Vinod P. Balachandran, George Plitas, A. Ari Hakimi, Omar Abdel-Wahab, Alexander N. Shoushtari, Michael A. Postow, R. Motzer, Marc Ladanyi, Ahmet Zehir, Michael F. Berger, Mithat Gönen, Luc G. T. Morris, Nils Weinhold, and Timothy A. Chan. 2021. 
Improved prediction of immune checkpoint blockade efficacy across multiple cancer types. *Nature biotechnology*. Stéphane d'Ascoli, Hugo Touvron, Matthew L. Leavitt, Ari S. Morcos, Giulio Biroli, and Levent Sagun. 2021. Convit: improving vision transformers with soft convolutional inductive biases. *ICLR*, 2022. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. *NAACL*, abs/1810.04805. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. *ICLR*, abs/2010.11929. E. A. Eisenhauer, Patrick Therasse, Jan Bogaerts, Lawrence H. Schwartz, Daniel J. Sargent, Robert Ford, Janet E. Dancey, Susan G. Arbuck, S. Gwyther, Margaret Mooney, Larry V. Rubinstein, Lalitha K Shankar, Lori E. Dodd, Richard S. Kaplan, Denis Lacombe, and Jaap Verweij. 2009. New response evaluation criteria in solid tumours: revised recist guideline (version 1.1). *European journal of cancer*, 45 2:228–47. Yuxian Gu, Robert Tinn, Hao Cheng, Michael R. Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing. *ACM Transactions on* Computing for Healthcare (HEALTH), 3:1 - 23. Bernal Jimenez Gutierrez, Nikolas McNeal, Clay Washington, You Chen, Lang Li, Huan Sun, and Yu Su. 2022. Thinking about gpt-3 in-context learning for biomedical ie? think again. *ArXiv*, abs/2203.08410. Melissa A. Haendel, N Vasilevsky, Deepak R. Unni, Cristian G Bologa, Nomi L. Harris, Heidi L. Rehm, Ada Hamosh, Gareth S. Baynam, Tudor Groza, Julie A. McMurry, Hugh J. S. Dawkins, Ana Rath, Courtney Thaxon, Giovanni Bocci, marcin p. joachimiak, Sebastian Köhler, Peter N. Robinson, Chris J. Mungall, and Tudor I. Oprea. 2019. How many rare diseases are there? *Nature Reviews Drug* Discovery, 19:77–78. Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David A. Sontag. 2022. Tabllm: Few-shot classification of tabular data with large language models. *ArXiv*, abs/2210.10723. Xin Huang, Ashish Khetan, Milan W. Cvitkovic, and Zohar S. Karnin. 2020. Tabtransformer: Tabular data modeling using contextual embeddings. *ArXiv*, abs/2012.06678. Hemant Ishwaran, Udaya B. Kogalur, Eugene H. Blackstone, and Michael S. Lauer. 2019. Random survival forests. *Wiley StatsRef: Statistics Reference Online*. Jared Katzman, Uri Shaham, Alexander Cloninger, Jonathan Bates, Tingting Jiang, and Yuval Kluger. 2018. Deepsurv: personalized treatment recommender system using a cox proportional hazards deep neural network. *BMC Medical Research Methodology*, 18. Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. Lightgbm: A highly efficient gradient boosting decision tree. In *NeurIPS*. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36:1234 - 1240. Milad Moradi, Kathrin Blagec, Florian Haberl, and Matthias Samwald. 2021. Gpt-3 models are poor few-shot learners in the biomedical domain. *ArXiv*, abs/2109.02555. Drew M. Pardoll. 2012. The blockade of immune checkpoints in cancer immunotherapy. 
*Nature Reviews* Cancer, 12:252–264. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *ArXiv*, abs/1910.10683. Alvin Rajkomar, Jeffrey Dean, and Isaac S. Kohane. 2019. Machine learning in medicine. *The New England Journal of Medicine*, 380:1347–1358. Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. *ArXiv*, abs/1706.05098. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Stella Rose Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. *ICLR*, abs/2110.08207. Maarten Van Smeden, Johannes B. Reitsma, Richard D. Riley, Gary Stephen Collins, and Karel G. M. Moons. 2021. Clinical prediction models: diagnosis versus prognosis. *Journal of clinical epidemiology*, 132:142–145. Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. NeurIPS, abs/1703.05175. Ewout Willem Steyerberg. 2008. Clinical prediction models: A practical approach to development, validation, and updating. In *Springer*. Suzanne L. Topalian, Janis M. Taube, Robert A Anders, and Drew M. Pardoll. 2016. Mechanism-driven biomarkers to guide immune checkpoint blockade in cancer therapy. *Nature Reviews Cancer*, 16:275–287. Eric J. Topol. 2019. High-performance medicine: the convergence of human and artificial intelligence. *Nature Medicine*, 25:44–56. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *NeurIPS*, abs/1706.03762. Stephen T Wu, Kirk Roberts, Surabhi Datta, Jingcheng Du, Zongcheng Ji, Yuqi Si, Sarvesh Soni, Qiong Wang, Qiang Wei, Yang Xiang, Bo Zhao, and Hua Xu. 2019. Deep learning in clinical natural language processing: a methodical review. Journal of the American Medical Informatics Association : JAMIA. Pengcheng Yin, Graham Neubig, Wen tau Yih, and Sebastian Riedel. 2020. Tabert: Pretraining for joint understanding of textual and tabular data. ACL, abs/2005.08314.
yuksel-etal-2023-evolvemt
{E}volve{MT}: an Ensemble {MT} Engine Improving Itself with Usage Only
https://aclanthology.org/2023.acl-industry.33
This work proposes a method named EvolveMT for the efficient combination of multiple machine translation (MT) engines. The method selects the output from one engine for each segment, using online learning techniques to predict the most appropriate system for each translation request. A neural quality estimation metric supervises the method without requiring reference translations. The method{'}s online learning capability enables it to adapt to changes in the domain or MT engines dynamically, eliminating the requirement for retraining. The method selects a subset of translation engines to be called based on the source sentence features. The degree of exploration is configurable according to the desired quality-cost trade-off. Results from custom datasets demonstrate that EvolveMT achieves similar translation accuracy at a lower cost than selecting the best translation of each segment from all translations using an MT quality estimator. To the best of our knowledge, EvolveMT is the first MT system that adapts itself after deployment to incoming translation requests from the production environment without needing costly retraining on human feedback.
# Evolvemt: An Ensemble Mt Engine Improving Itself With Usage Only Kamer Ali Yuksel, Ahmet Gunduz, Mohamed Al-Badrashiny, Shreyas Sharma, **and Hassan Sawaf** aiXplain Inc., 16535 Grant Bishop Lane, Los Gatos, CA 95032, US {kamer, ahmet, mohamed, shreyas, hassan}@aixplain.com ## Abstract This paper presents EvolveMT for efficiently combining multiple machine translation (MT) engines. The proposed system selects the output from a single engine for each segment by utilizing online learning techniques to predict the most suitable system for every translation request. A neural quality estimation metric supervises the method without requiring reference translations. The online learning capability of this system allows for dynamic adaptation to alterations in the domain or machine translation engines, thereby obviating the necessity for additional training. EvolveMT selects a subset of translation engines to be called based on the source sentence features. The degree of exploration is configurable according to the desired quality-cost trade-off. Results from custom datasets demonstrate that EvolveMT achieves similar translation accuracy at a lower cost than selecting the best translation of each segment from all translations using an MT quality estimator. To our knowledge, EvolveMT is the first meta MT system that adapts itself after deployment to incoming translation requests from the production environment without needing costly retraining on human feedback. ## 1 Introduction Machine Translation (MT) has experienced substantial progress in recent years, resulting in improving accuracy and more human-like translation output. Despite these advancements, challenges remain, particularly in ensemble modeling. Ensemble models integrate predictions from multiple individual models to achieve a more accurate final output. However, the effective combination of these models is often a complex task that requires thoughtful consideration of factors such as the model architecture, training data, and prediction combination methods. One of the significant challenges in MT ensembling is that the training data used to train the ensemble model, may not be fully representative of the data to be translated later, leading to a mismatch between the model and the data. This paper presents EvolveMT, a method that addresses data drift in ensemble models by continual self-adaptation for optimal performance during usage. In the subsequent section, we review existing machine translation (MT) quality estimation metrics in the literature, which have been trained on human evaluation or post-editing datasets. In the Approach section, we present a comprehensive explanation of the proposed method. In the Experiments section, we describe our experimental design and provide quantifiable results demonstrating the enhancement resulting from the application of the proposed method, as compared to state-of-the-art quality estimation metrics. Finally, we discuss the obtained results and present our conclusions. ## 2 Related Work In the WMT20 Metrics Shared Task (Mathur et al., 2020), four reference-free metrics were submitted to evaluate machine translation outputs in the news translation task. These metrics use bilingual mapping of contextual embeddings from language models such as XLM-RoBERTa (Conneau et al., 2019) to assess cross-lingual semantic similarity. 
However, they often struggle to accurately differentiate between human and machine translations, except for COMET-QE (Rei et al., 2020), the only reference-free metric capable of doing so. The study by Freitag et al. (2021a) evaluated top MT systems from WMT 2020 using Multidimensional Quality Metrics (MQM) and professional translator annotations. Their results showed a low correlation between crowd worker evaluations and MQM, leading to different rankings and questioning previous conclusions. The study also found that automatic metrics based on pre-trained embeddings can outperform human crowd workers, suggesting that models trained with crowd-sourced 341 human evaluations may have higher accuracy. The WMT21 Metrics Shared Task (Freitag et al., 2021b), used MQM expert-based human evaluation to acquire reliable ratings, and evaluate metrics on news and TED talk translations produced by MT systems. Results showed reference-free metrics COMET-QE and OpenKiwi performed well in scoring human translations but not as well with MT outputs, and were strong at segment-level human translation evaluation while competitive with reference-based metrics in system-level evaluation. REGEMT (Štefánik et al., 2021) is a referencefree metric in WMT21 that uses an ensemble of surface, syntactic, and semantic similarity metrics as input to a regression model. As demonstrated by CushLEPOR (Han et al., 2021), it allows customization, outperforming lexical semantic similarity-based metrics with a higher computational cost. Onception (Mendonça et al., 2022) used active learning to converge an MT ensemble in a production environment to the best MT with evaluations acquired online. (Naradowsky et al., 2020) used bandit-learning to adapt MT policies based on simulated user feedback, outperforming the best single MT in mixeddomain settings. A contextual bandit strategy was proposed to make instance-specific decisions, but the system still required a human-in-the-loop (HITL) process. ## 3 Approach EvolveMT is a quasi MT ensemble technique. In contrast to the traditional multi-system MT approach, which combines outputs from multiple MT systems to enhance translation accuracy and fluency, EvolveMT prioritizes the selection of the most optimal translation from a finite set of MT systems, as we demonstrate in this section. Figure 1 below shows the system architecture of EvolveMT. The system is centered around a multi-class classification model that drives multiple processes to select the best MT model for translation requests. For each incoming machine translation request, we use SpaCy (Honnibal and Montani, 2017) and Stanza (Qi et al., 2020) frameworks to extract morphological and lexical features. These features include the count of tokens, characters, and the average word length, as well as the frequency of Part-of-Speech labels (such as nouns, verbs, adjectives, etc.), the frequency of Named Entity Recognition labels (including entities such as persons, locations, organizations, etc.), and the frequency of morphological features (e.g. gender and aspect). These features are combined with the 1024-dimensional embedding vector generated by the XLM-RoBERTa encoder of the COMETQE model and stored alongside the input sentence in the *Ranked Batch Requests Queue*. This queue serves the purpose of prioritizing translation requests that necessitate precedence in processing. At the outset, requests in the Ranked Batch Requests Queue are ranked based on the order in which they are added. 
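As an illustration of the feature extraction step described above, the following sketch (an assumption of how such features could be computed, not the authors' implementation) derives the surface, part-of-speech, and named-entity count features with spaCy; the Stanza morphological counts and the 1024-dimensional COMET-QE encoder embedding would be concatenated to this vector in the full system.

```python
from collections import Counter
import spacy

# Small English pipeline; the exact spaCy/Stanza models used by EvolveMT are not specified here.
nlp = spacy.load("en_core_web_sm")

def extract_features(sentence: str) -> dict:
    doc = nlp(sentence)
    tokens = [t for t in doc if not t.is_space]
    feats = {
        "n_tokens": len(tokens),
        "n_chars": len(sentence),
        "avg_word_len": sum(len(t.text) for t in tokens) / max(len(tokens), 1),
    }
    # Frequency of Part-of-Speech labels (NOUN, VERB, ADJ, ...)
    for pos, count in Counter(t.pos_ for t in tokens).items():
        feats[f"pos_{pos}"] = count
    # Frequency of Named Entity Recognition labels (PERSON, GPE, ORG, ...)
    for label, count in Counter(ent.label_ for ent in doc.ents).items():
        feats[f"ner_{label}"] = count
    return feats

print(extract_features("Ada Lovelace studied mathematics in London."))
```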
The highest-ranked Machine Translation (MT) request is selected for translation. The *Multi-class MT Classifier* employs the extracted features of the selected MT request to determine the MT systems to be utilized. The classifier prioritizes MT systems with a higher probability of having a higher COMET-QE value. Exploration of additional MT systems becomes more likely only if the probabilities from the classifier's prediction exhibit high entropy. This enables EvolveMT to minimize the cost of exploration when the best MT is predicted with high certainty. Finally, the selected MT systems are utilized to translate the MT request, and the COMET-QE score is calculated for each translation. The translation with the highest score is chosen and returned in response to the MT request. The Multi-class MT Classifier is then updated online with the best MT system, as determined by the COMET-QE score, serving as a label for the extracted features of the MT request. The online machine learning (ML) functionality of FLAML AutoML framework (Wang et al., 2021) is utilized for online learning. This capability enables the optimization of model hyper-parameters during the iterative course of ML, facilitating continual ML without repetitively hyper-tuning the classifier from scratch. Subsequently, the classifier is employed to rerank the Ranked Batch Requests Queue, based on the uncertainty of the classifier, with requests having higher entropy being placed at the top of the queue for prioritized translation. Getting the MT request with the maximum entropy from the queue after each iteration, helps prioritize the most informative sample for the iterative training of the classifier. As the classifier improves its ability to predict the best MT model for MT requests via learning, it reduces the likelihood of exploring other MT(s). The driving algorithm in EvolveMT, which outlines the primary process of the proposed method, is presented in pseudo-code form in Algorithm 1. ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png) ## 4 Experiments 4.1 Data A multi-lingual corpus of 37,500 human-translated sentences in Czech, German, and Russian, along with their corresponding English source-texts, was collected for the OPUS repository (Lison and Tiedemann, 2016; Aulamo and Tiedemann, 2019) using stratified random sampling for each language and dataset. To evaluate EvolveMT, translations for each sentence in the corpus were obtained from one open-source machine translation system (Tiedemann et al., 2022) and five major machine translation service providers in the industry (Google, Azure, AWS, ModernMT, and AppTek). ## 4.2 Setup Experiments were conducted on a 64-bit Ubuntu 22.04 LTS computer system with an AMD Ryzten 5950X CPU (16 processors, 32 threads) and 64GB of memory. An Nvidia GeForce RTX GPU was used for XLM-RoBERTa embedding extraction from the fine-tuned COMET-QE encoder. The results showed an average 2.88 (±0.06) millisecond response time for the EvolveMT system to return an MT output and update its classifier when the MT request queue contained a single item. Depending on GPU usage, the feature extraction time was (183.43 -244.80) milliseconds. It's worth noting that in a production setting, feature extraction can be performed in parallel for multiple MT requests. ## 4.3 Evaluation Criteria And Baselines In this paper, grid search is used to evaluate the impact of two hyperparameters, *maxMT s* and α, on the classifier's performance. 
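Before the hyperparameter definitions that follow, the per-request selection-and-update cycle described above can be summarized in a simplified sketch; the `translate` and `comet_qe_score` helpers are hypothetical wrappers, and scikit-learn's `SGDClassifier` with `partial_fit` stands in for the FLAML online learner, so this is an approximation of the procedure rather than the authors' Algorithm 1.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

MT_SYSTEMS = ["google", "azure", "aws", "modernmt", "apptek", "opus"]
clf = SGDClassifier(loss="log_loss")  # stand-in for the FLAML online multi-class classifier

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def handle_request(features, source, translate, comet_qe_score, alpha=0.2, max_mts=3):
    """features: 1-D feature vector for the source sentence; translate(system, source) and
    comet_qe_score(source, hypothesis) are hypothetical helpers around the MT engines and COMET-QE."""
    if hasattr(clf, "classes_"):                      # classifier has been updated at least once
        probs = clf.predict_proba([features])[0]
        ranked = np.argsort(probs)[::-1]
        # Explore extra engines only when the prediction is uncertain (high entropy).
        k = max_mts if entropy(probs) > alpha else 1
        candidates = [MT_SYSTEMS[i] for i in ranked[:k]]
    else:
        candidates = list(MT_SYSTEMS)                 # cold start: query every engine once
    hyps = {s: translate(s, source) for s in candidates}
    scores = {s: comet_qe_score(source, h) for s, h in hyps.items()}
    best = max(scores, key=scores.get)
    # Online update: the COMET-QE winner supervises the classifier for this request's features.
    clf.partial_fit([features], [MT_SYSTEMS.index(best)],
                    classes=np.arange(len(MT_SYSTEMS)))
    return hyps[best]
```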
*maxMT s* refers to the maximum number of machine translation systems the classifier can select, and α is the maximum entropy threshold (as described in Algorithm 1). The grid search involves varying *maxMT s* from 1 to 6 (the maximum number of the MT systems we are using), and α from 0.1 to 1.0 with increments of 0.1. The experimental results are obtained by averaging 100 repetitions to account for the method's inherent stochasticity. For clarity in the results section, we present the results over the *maxMT s* range while setting α to its optimal value of 0.2, determined through the grid search. For the evaluation, we adopt the reference-based quality score COMET-DA, as detailed in (Rei et al., 2020), as the evaluation metric for our ensemble output. This choice is motivated by the results of prior research which have demonstrated that COMET-DA exhibits a higher correlation with human evaluation scores compared to other widely used machine translation metrics, such as BLEU and METEOR. The evaluation of EvolveMT is being conducted against the following baselines: - COMET-QE Ensemble: translation is performed utilizing the six MT systems. The translation with the highest COMET-QE score is selected for each input sentence as the ensemble translation. Then, the COMET-DA score is calculated using selected translations. - Best MT: involves translating the entire data using all six MT systems. The MT system that produces the highest overall COMET-DA is then selected as the Best MT to employ. ## 4.4 Results The comparison of COMET-DA scores of the COMET-QE ensemble and Best MT concerning various variants of EvolveMT with varying *M axMT s* values are presented in Table 1. The results are depicted for the three language pairs of English-to-Czech, English-to-German, and English-to-Russian. In addition, the average translation cost of each system across the three languages is also documented in the table. The findings indicate that EvolveMT approximates the COMET-QE ensemble's quality while incurring significantly lower costs. Furthermore, the results in the table reveal that the optimal cost-quality trade-off for EvolveMT varies depending on the target language. Specifically, for all three language pairs, it can be observed that EvolveMT with *M axMT s* = 3 and M axMT s = 4 provide the best balance between cost and quality when compared to other individual and ensemble MTs. As *M axMT s* increases, EvolveMT can achieve higher MT quality by exploring a larger pool of MT systems from which the best translation can be selected. Hence, the M axMT s parameter can be adjusted to achieve the desired cost-quality trade-off. Notably, after only a few hundred Machine Translation (MT) requests from the total dataset, the EvolveMT algorithm demonstrates convergence towards an upper limit of its weighted F1score, which depends on the parameter *maxMT s*. Figure 2 shows the confusion matrix between the outputs of EvolveMT (with *M axMT s* = 4) and ![3_image_0.png](3_image_0.png) the COMET-QE ensemble after 100 translations requests. The swift convergence of EvolveMT with a limited number of requests is mainly due to the utilization of XLM-RoBERTa embeddings that have been fine-tuned specifically for the COMET-QE task. This exemplifies the model's effectiveness, as it begins with no prior knowledge, and within a few hundred requests, it can converge and approach the performance of the COMET-QE ensemble. 
It is crucial to mention that the results presented in Table 1 encompass the "warm up" phase where EvolveMT starts from zero knowledge until full convergence is achieved. If this phase were excluded, the COMET-DA scores of EvolveMT would likely be even higher.

| Model | Cost ($) | COMET-DA En→Cs | COMET-DA En→De | COMET-DA En→Ru |
|---|---|---|---|---|
| Best MT (1) | 20.000 | 0.867 | 0.586 | 0.617 |
| COMET-QE (6) | 77.000 | 0.900 | 0.605 | 0.658 |
| EvolveMT (1) | 12.312 | 0.851 | 0.567 | 0.605 |
| EvolveMT (2) | 23.442 | 0.870 | 0.586 | 0.627 |
| EvolveMT (3) | 32.358 | 0.878 | 0.591 | 0.637 |
| EvolveMT (4) | 39.905 | 0.882 | 0.596 | 0.643 |
| EvolveMT (5) | 46.067 | 0.887 | 0.598 | 0.647 |
| EvolveMT (6) | 51.095 | 0.887 | 0.599 | 0.651 |

## 5 Discussion

The cost-benefit analysis of EvolveMT highlights the trade-off between run-time efficiency and training expenses. While the run-time cost of EvolveMT may be higher than that of Best MT, it does not require the extensive and time-consuming training process required for traditional MT ensemble methods. This training process involves obtaining translations from all MTs and scoring them using references generated by annotators. However, the increased run-time cost of EvolveMT is offset by its ability to achieve superior production quality and adapt to changes in the data domain with a minimum amount of overhead. As the data domain changes, traditional MT ensemble techniques require costly retraining to accommodate the new domain, whereas EvolveMT can adapt to changes with a few hundred MT requests. This versatility and adaptability of EvolveMT make it a robust solution for machine translation tasks that may be subject to data variation, as it can adjust to these changes with minimal effort. The cost-benefit analysis results clearly demonstrate that the increased run-time cost of EvolveMT is outweighed by its high performance and adaptability in the face of changing data domains.

## 6 Limitations

The performance of the EvolveMT system is contingent upon the reliability of the COMET-QE model in providing accurate labels for the MT requests. Utilizing the encoder's embeddings as features necessitates that the COMET-QE model performs effectively on blind MT requests. The batch reranking of MT requests after each learning step may result in a computational bottleneck if the queue size is substantial. To mitigate this issue, an asynchronous re-ranking process could be implemented, whereby the queue is only reorganized once the re-ranking is completed. Additionally, before the re-ranking process, a diverse subset of the queue can be selected based on the XLM-RoBERTa embeddings, which reflect the novelty of the requests relative to previously processed MT requests. The source embeddings from the XLM-RoBERTa model can be cached in parallel during the batch feature extraction process utilizing GPU capabilities, thus facilitating efficient COMET-QE inference. EvolveMT could also be optimized for cost-effectiveness by incorporating the cost of each MT in the ensemble into the algorithm.

## 7 Conclusion And Future Work

This study presents a novel approach called EvolveMT for ensembling machine translation (MT) engines, focusing on minimizing the number of engines required to be queried to achieve optimal quality.
To evaluate the efficacy of the proposed method, a series of experiments were conducted, wherein EvolveMT was implemented with varying levels of granularity in terms of the maximum number of engines permitted for each individual MT request. The quantitative results of the experiments indicate that, compared to the traditional method of querying all available MT engines, EvolveMT offers a more cost-effective solution for the ensembling process without compromising the quality of the resulting translations. EvolveMT presents a unique advantage in terms of cost efficiency compared to COMET-QE Ensemble. This is achieved by utilizing a stochastic exploration approach that selectively queries additional MT engines based on predicted probabilities, which are also employed in an active-learning framework by re-ranking MT requests after each learning step. Furthermore, unlike traditional MT ensemble techniques, EvolveMT can adapt in realtime to changes in customers' translation requests, without incurring the cost of acquiring human references or undergoing costly re-training or finetuning. In conclusion, this paper presents four significant contributions to the field of machine translation: (1) the introduction of the first self-improving MT system that operates without the need for human feedback; (2) the capability of adaptively optimizing the MT ensemble in response to production environment translation requests through online machinelearning; (3) the development of a novel approach for selectively querying MT engines rather than relying on translations from all available engines; and (4) the implementation of an active-learning framework that leverages uncertainties from the ensemble for batch translation. ## 8 Ethics And Impact Statement EvolveMT is a high-quality machine translation (MT) system for individuals or organizations. It can improve translation accuracy if validated on a specific MT corpus. EvolveMT is trained from scratch for each customer or project, eliminating biases in the algorithm but may still present biases in the quality estimation metric or training dataset. The system is self-adaptable, secure, and protects user privacy by deleting data immediately after translation. EvolveMT eliminates the need for re-training and re-hypertuning, reducing computational costs and being environmentally friendly. The only potential harm is to linguists who perform post-editing as it reduces their dependence on references or evaluations. ## References Mikko Aulamo and Jörg Tiedemann. 2019. The OPUS resource repository: An open package for creating parallel corpora and machine translation services. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 389–394, Turku, Finland. Linköping University Electronic Press. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *arXiv* preprint arXiv:1911.02116. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, errors, and context: A large-scale study of human evaluation for machine translation. *Transactions of the Association for Computational Linguistics*, 9:1460–1474. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondˇrej Bojar. 2021b. 
Results of the wmt21 metrics shared task: Evaluating metrics with expert-based human evaluations on ted and news domain. In *Proceedings of the Sixth Conference on Machine Translation*, pages 733–774. Lifeng Han, Irina Sorokina, Gleb Erofeev, and Serge Gladkoff. 2021. cushlepor: customising hlepor metric using optuna for higher agreement with human judgments or pre-trained language model labse. In Proceedings of the Sixth Conference on Machine Translation, pages 1014–1023. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In *Proceedings of the Tenth* International Conference on Language Resources and Evaluation (LREC'16), pages 923–929, Portorož, Slovenia. European Language Resources Association (ELRA). Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, and Ondˇrej Bojar. 2020. Results of the wmt20 metrics shared task. In *Proceedings of the Fifth Conference on Machine Translation*, pages 688–725. Vânia Mendonça, Ricardo Rei, Luisa Coheur, and Alberto Sardinha. 2022. Onception: Active learning with expert advice for real world machine translation. arXiv preprint arXiv:2203.04507. Jason Naradowsky, Xuan Zhang, and Kevin Duh. 2020. Machine translation system selection from bandit feedback. *arXiv preprint arXiv:2002.09646*. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, pages 101–108, Online. Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Michal Štefánik, Vít Novotny, and Petr Sojka. 2021. ` Regressive ensemble for machine translation quality evaluation. *arXiv preprint arXiv:2109.07242*. Jörg Tiedemann, Mikko Aulamo, Daria Bakshandaeva, Michele Boggia, Stig-Arne Grönroos, Tommi Nieminen, Alessandro Raganato, Yves Scherrer, Raul Vazquez, and Sami Virpioja. 2022. Democratizing machine translation with opus-mt. Chi Wang, Qingyun Wu, Markus Weimer, and Erkang Zhu. 2021. Flaml: a fast and lightweight automl library. *Proceedings of Machine Learning and Systems*, 3:434–447.
ding-etal-2023-static
A Static Evaluation of Code Completion by Large Language Models
https://aclanthology.org/2023.acl-industry.34
Large language models trained on code have shown great potential to increase productivity of software developers. Several execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems. Nevertheless, it is expensive to perform the same evaluation on complex real-world projects considering the execution cost. On the other hand, static analysis tools such as linters, which can detect errors without running the program, haven{'}t been well explored for evaluating code generation models. In this work, we propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees. Compared with execution-based evaluation, our method is not only more efficient, but also applicable to code in the wild. For experiments, we collect code context from open source repos to generate one million function bodies using public models. Our static analysis reveals that Undefined Name and Unused Variable are the most common errors among others made by language models. Through extensive studies, we also show the impact of sampling temperature, model size, and context on static errors in code completions.
# A Static Evaluation Of Code Completion By Large Language Models Hantian Ding, Varun Kumar, Yuchen Tian, Zijian Wang, Rob Kwiatkowski, Xiaopeng Li, Murali Krishna Ramanathan, **Baishakhi Ray**, Parminder Bhatia, Sudipta Sengupta, Dan Roth, **Bing Xiang** AWS AI Labs {dhantian, kuvrun, tiayuche, zijwan, robkwiat, xiaopel mkraman, rabaisha, parmib, sudipta, drot, bxiang}@amazon.com ## Abstract Large language models trained on code have shown great potential to increase productivity of software developers. Several executionbased benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems. Nevertheless, it is expensive to perform the same evaluation on complex real-world projects considering the execution cost. On the contrary, static analysis tools such as linters, which can detect errors without running the program, haven't been well explored for evaluating code generation models. In this work, we propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees. Compared with executionbased evaluation, our method is not only more efficient, but also applicable to code in the wild. For experiments, we collect code context from open source repos to generate one million function bodies using public models. Our static analysis reveals that Undefined Name and Unused Variable are the most common errors among others made by language models. Through extensive studies, we also show the impact of sampling temperature, model size, and context on static errors in code completions. ## 1 Introduction Automatic code completion by large language models trained on numerous code repositories has demonstrated great potential in accelerating software development. Code assistant services powered by these models provide developers with code suggestions following the current context in realtime. However, it has been shown that about 70% of the suggestions are discarded by users in a recent study (Ziegler et al., 2022). Even worse, misleading recommendations can lead to failure in completing programming tasks (Vaithilingam et al., 2022). Therefore, it is important to understand the weakness of current code generation models through comprehensive evaluation and analysis. $$\begin{array}{r l}{\_{\mathrm{init}}}&{{}(\mathrm{self}):}\\ {\_{\mathrm{"""Define}}\_{\mathrm{layer}}}\\ {\_{\mathrm{upper}}\,(\mathrm{Model},\_{\mathrm{set}})}\end{array}$$ ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) ![0_image_2.png](0_image_2.png) Figure 1: A function completion example, with an Unused Variable error (gray) in context, and an Undefined ![0_image_3.png](0_image_3.png) Name error (red) in completion. Recently, execution-based evaluation has become increasingly popular, where model-generated code is executed with unit tests to check functional correctness. Several benchmarks have been proposed along this direction, such as HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), MBXP (Athiwaratkun et al., 2022), CodeContests (Li et al., 2022), and DS-1000 (Lai et al., 2022). Although these benchmarks are highly reliable and accurate, they only focus on well-defined algorithmic and data science problems, which do not reflect the need in general software development. Running execution-based evaluation with real-world codebases is, however, prohibitively expensive because each project requires a different setup and the computation cost is potentially unbounded. 
In contrast to the execution-based approach, static program analysis (or *static analysis*) can analyze programs without executing them. Although static analysis is usually unable to determine functional correctness, it covers a large collection of 347 static error types, such as undefined names or unused variables that are illustrated in Figure 1. More importantly, the analysis can be very fast and does not require any project specific environment setup, which allows us to evaluate model completions for complex real-world code at large scale. Static analysis tools such as linters have been widely used, for example in code editors, to examine human-written code, but their value in evaluating code generation models has not been well explored yet. In this work, we propose a static evaluation framework for Python language. Code snippets are first parsed into Abstract Syntax Trees (ASTs) and then analyzed by Pyflakes1, a popular static analysis tool for Python. To simulate real-world use cases of auto completion, we collect code from public Github repositories to build a function completion dataset of 100K problems. In each problem, we randomly mask out a function body in a Python file and ask the model to complete it given the preceding context up until the function header. We then evaluate public models by sampling 10 completions for each problem, resulting in one million generations for each model and sampling temperature, which will be examined by our static evaluation pipeline. During AST parsing, we find most of the errors arise from incomplete generations that hit the max length limit. Otherwise, models of all sizes perform quite well in producing parsable codes. Moving forward, Pyflakes analysis reveals that Undefined Name and Unused Variable are the most prominent static errors in model-generated code. We also observe higher temperatures consistently lead to more errors. Scaling up the model, while able to reduce errors of many types, do not show a clear benefit for preventing undefined names. Through a more fine-grained classification, we find larger models generate fewer undefined variables but more undefined methods, which add up to a mixed result. Finally, we demonstrate that errors in context can lead to errors of the same type in generation, which is likely a consequence of large language models' in context learning capability. In summary, our main contributions include the following. (1) We propose a static evaluation framework for code completion. (2) Our evaluation on public models reveals common static errors and how they are impacted by various factors such as temperature, model size, and context. ## 2 Background Code Generation with Transformers Over recent years, it has become increasingly popular to train Transformer-based language models on source code (Feng et al., 2020; Ahmad et al., 2021; Wang et al., 2021; Lu et al., 2021; Guo et al., 2022) to support software engineering tasks (Iyer et al., 2018; Tufano et al., 2019). In particular, several decoder-only transformer models have been developed to facilitate code generation, such as Codex (Chen et al., 2021), CodeGen (Nijkamp et al., 2022), Incoder (Fried et al., 2022), and AlphaCode (Li et al., 2022). These pretrained causal language models can be used to predict the continuation of input code without any finetuning. Abstract Syntax Tree An Abstract Syntax Tree (a.k.a., AST) is used to represent a source code in a concise tree form. 
By discarding unnecessary details of the underlying code and its corresponding parsed tree, AST only presents the main structural content of the source code following the language grammar (Aho et al., 2007). Static Analysis Static analysis is a common way to detect software bugs without executing the program (Ayewah et al., 2008; Chess and McGraw, 2004; Chess and West, 2007; Zheng et al., 2006). Static analyzers tend to detect bugs by analyzing the static code text, its AST, documentation, etc. The users usually need to specify the error patterns and static analyzers use different AST, graph, and path analysis to find those patterns in the code. There are a plethora of static analysis tools and they can detect a wide range of errors depending on the specified patterns (Emanuelsson and Nilsson, 2008). For example, Linter is a popular tool that checks for coding style errors and thus, tries to enforce a coding standard (Van Oort et al., 2021). ## 3 The Function Completion Dataset We introduce the *function completion* task, which is one of the most important use cases of auto completion services. Given an input code snippet that ends with a function signature plus an optional docstring, the model is asked to generate the function body. Previous works on code completion (Lu et al., 2021; Svyatkovskiy et al., 2019) have mainly focused on single-line completion. However, a single line is often too short to reveal models' capability in writing syntactically correct code. We believe function, as the fundamental building block in most programming languages, better serves this purpose. ![2_image_0.png](2_image_0.png) Software developers use code generation models as black-box services on a diverse set of coding projects. To better simulate the real-world scenario, we build an evaluation set by sampling from public Github repositories. Specifically we collected permissively licensed Python code in repositories that were created between April, 2022 and August, 2022. The selection criterion precludes any chronological overlap between our evaluation data and the training data of models to be tested in this work.2 The collected Python codes are reformatted as function completion problems. We first use treesitter3to parse the whole file to identify all the functions. Then a function that contains a docstring is randomly selected. The code from the beginning of the file up until the end of the docstring is used as the context, and the function body is considered as the groundtruth. The rest of the file is discarded. At test time, we prompt the model with the context part as input, and let the model generate the function body. We choose only functions with docstrings so that context is well-defined and model can generate meaningful code completions. We further select test samples whose context length is between 64 and 768 tokens, and groundtruth length is shorter than 256 tokens, to match our model generation setting. Our final evaluation set consists of 100K function completion problems. ## 4 Static Error Analysis We propose an evaluation pipeline to detect errors in function completions generated by models, illustrated in Figure 2. Suppose the model generates a completion x given the input context c. We cannot 2CodeGen models were trained on data up until Oct, 2021. 3https://tree-sitter.github.io/tree-sitter/ directly analyze x which is partial code without context. Meanwhile, c may also contain errors especially in real-world cases. Therefore, we perform our analysis in two passes. 
We first check c for any errors in the input that need to be excluded, and then do another pass on the full code (*c, x*), the concatenation of the context and model completion. Any error that is identified in (*c, x*) but not in c must arise from x, or in other words, be generated by the model. More specifically, we conduct the following two steps of analysis for Python code. ## 4.1 Ast Parsing In the first step, we parse both c and (*c, x*) into abstract syntax trees using Python's native ast module. If the code is parsable, an AST will be returned. Otherwise, a syntax error is captured. Based on the parsing outcomes, we take the following actions: 1. If c is not parsable, we are unable to conclude any error in generation. Empirically this rarely happens, as we will show in the next section. 2. If c is parsable but (*c, x*) is not, then we can confirm the reported syntax error is caused by model generation. However, notice that only one error will be returned even if there are multiple, due to the nature of AST parsing. 3. If both c and (*c, x*) are parsable, there's no AST error in model generation. The ASTs will be used for static analysis in the next step. ## 4.2 Static Analysis With Pyflakes If both c and (*c, x*) can be parsed into ASTs, we perform static analysis using Pyflakes. Pyflakes is a static analysis tool that checks a Python source file for errors by examining the AST. One advantage is that the analysis does not rely on dependencies of the source file, which is important given the diversity of packages used in real-world code. We run Pyflakes on c and (*c, x*) to identify errors in context and in full code. Errors that are detected in (*c, x*) but not in c are considered as introduced by model completion. ## 5 Experiments With the proposed pipeline we conduct error analysis for CodeGen models (Nijkamp et al., 2022) on the test set described in Section 3, and present the analysis results. ## 5.1 Experiment Setup We evaluate CodeGen-mono models of all sizes, ranging from 350M to 16B. We generate function completions using nucleus sampling with top-p 0.95. Sampling temperature is varied between 0.2 and 0.8 for the 2B model, and fixed to 0.4 for the rest models. We sample 10 generations for each problem, which results in one million code completions for each model and temperature. The maximum generation length is 256 tokens. Generated code completions are then passed to our static evaluation pipeline built with Python 3.8 and Pyflakes 3.0.1. Evaluating one million generations takes only a few hours on a single CPU thread, and can be fully parallelized for acceleration. ## 5.2 Validation Of Model Output While we mainly focus on static errors in this study, it is also important to validate that the models do generate relevant code. A counter-example would be to generate a single line of "return" for every function signature, which is syntactically correct but not meaningful at all. Towards this end, we calculate the edit similarity between model generation and groundtruth, and compare against Pass@1 from HumanEval (Chen et al., 2021) which is a popular execution-based benchmark to evaluate code generation models. Specifically, for both datasets we generate 10 samples per problem, and report the averaged edit similarity or pass rate over all generations. As shown in Table 1, models of all sizes and temperatures are able to achieve reasonable edit similarity on the function completion dataset, which means the generations are semantically relevant. 
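Putting Sections 4.1 and 4.2 together, the two-pass check can be sketched as follows; the exact reporting and message handling here are simplified assumptions, not the authors' released pipeline.

```python
import ast
import io
from pyflakes.api import check
from pyflakes.reporter import Reporter

def pyflakes_messages(code: str) -> set:
    """Run Pyflakes on a source string and return its diagnostic lines."""
    out, err = io.StringIO(), io.StringIO()
    check(code, filename="<completion>", reporter=Reporter(out, err))
    # One diagnostic per line, e.g. "<completion>:7:5: undefined name 'foo'"
    return set(out.getvalue().splitlines())

def analyze(context: str, completion: str):
    full = context + completion
    try:
        ast.parse(context)
    except SyntaxError:
        return None                      # unparsable context: no conclusion about the completion
    try:
        ast.parse(full)
    except SyntaxError as e:
        return {f"AST error: {e.msg}"}   # the syntax error must come from the completion
    # Errors found in (context + completion) but not in the context alone are attributed to the model.
    return pyflakes_messages(full) - pyflakes_messages(context)
```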
Moreover, edit similarity and HumanEval Pass@1 both improve as the model scales up, highlighting that model scale is crucial for accurate code generation. Finally, the strong positive correlation between the last two columns shows that edit similarity on the function completion dataset can be used as an alternative metric for model comparison.

| Model | Temp | Edit Similarity | HumanEval Pass@1 |
|--------------|------|-----------------|------------------|
| CodeGen-16B  | 0.4  | 72.07 | 31.83 |
| CodeGen-6B   | 0.4  | 68.76 | 26.46 |
| CodeGen-2B   | 0.4  | 64.83 | 23.72 |
| CodeGen-350M | 0.4  | 56.47 | 12.62 |
| CodeGen-2B   | 0.2  | 65.10 | 25.06 |
| CodeGen-2B   | 0.4  | 64.83 | 23.72 |
| CodeGen-2B   | 0.6  | 64.09 | 21.28 |
| CodeGen-2B   | 0.8  | 62.62 | 17.56 |

Table 1: Edit similarity on the function completion dataset and HumanEval Pass@1 across model sizes and sampling temperatures.

## 5.3 AST Results

We run AST parsing and find there are only 0.42% of cases with an unparsable context that need to be discarded. For the rest, we report the percentage of generations with AST errors in Table 2. A full list of error types is included in Appendix A. For each type, we also show a code example in Appendix B.

| Model | Temp | Total | EOF | Non-EOF | Invalid Syntax | "print" Missing Parentheses | Keyword Argument Repeated |
|--------------|-----|--------|--------|--------|--------|--------|--------|
| CodeGen-16B  | 0.4 | 7.330% | 7.236% | 0.094% | 0.042% | 0.041% | 0.004% |
| CodeGen-6B   | 0.4 | 7.446% | 7.253% | 0.193% | 0.081% | 0.094% | 0.006% |
| CodeGen-2B   | 0.4 | 7.272% | 7.177% | 0.095% | 0.052% | 0.018% | 0.008% |
| CodeGen-350M | 0.4 | 8.703% | 8.593% | 0.110% | 0.041% | 0.016% | 0.028% |
| CodeGen-2B   | 0.2 | 8.067% | 7.982% | 0.085% | 0.045% | 0.018% | 0.008% |
| CodeGen-2B   | 0.4 | 7.272% | 7.177% | 0.095% | 0.052% | 0.018% | 0.008% |
| CodeGen-2B   | 0.6 | 6.823% | 6.713% | 0.110% | 0.060% | 0.020% | 0.008% |
| CodeGen-2B   | 0.8 | 7.496% | 7.337% | 0.159% | 0.085% | 0.029% | 0.014% |

Table 2: Percentages of generations with AST errors across different model sizes and sampling temperatures.

![4_image_0.png](4_image_0.png)

While there are about 7-8% unparsable generations, most of the parsing errors happen at the end of file (EOF), which means the generated code is incomplete due to the 256 max token limit. Extending the generation length may help reduce EOF errors, but will require more computation and increase the perceived latency of the auto-completion service. On the other hand, non-EOF errors only account for a tiny fraction, usually around 0.1-0.2%, which indicates CodeGen models can generally follow the abstract syntax grammar to produce parsable code, regardless of model size and temperature.

Finding 1. *Code generated by models, unless incomplete, is mostly parsable into ASTs, regardless of model size or temperature.*

We also show the top-3 non-EOF error types ranked by frequency, which are **Invalid Syntax**, **Print Missing Parentheses**, and **Keyword Argument Repeated**. Notably, the first two categories are often related to Python's interpreter version. To illustrate, a Python2-style print like *print "abc"* will lead to Print Missing Parentheses in Python3. Another example is that using *async* as a variable name will cause Invalid Syntax because *async* has become a reserved word since Python 3.7. Models learn to make such errors from their training data, which consists of code written for different Python versions. In many cases, it is difficult for a model to infer the intended interpreter version directly from the limited context. An interesting future direction is to guide models to generate version-compatible code given the target environment.

Finding 2. *Interpreter version mismatch is one of the major reasons for non-EOF AST errors.*

## 5.4 Pyflakes Results

We present frequencies of the top 6 linter errors from Pyflakes in Table 3, with code examples in Appendix B.
While Pyflakes also finds other problems in code, most of them are very sparse and thus less important, which we leave to Appendix A. Notice that one code snippet may contain multiple errors. We count each type only once in every test sample.

| Model | Temp | Undefined Name | Unused Variable | FString Missing Placeholders | Unused Import | Redefined While Unused | Undefined Local |
|--------------|-----|--------|--------|--------|--------|--------|--------|
| CodeGen-16B  | 0.4 | 4.323% | 1.729% | 0.135% | 0.107% | 0.131% | 0.047% |
| CodeGen-6B   | 0.4 | 4.374% | 1.775% | 0.089% | 0.149% | 0.126% | 0.055% |
| CodeGen-2B   | 0.4 | 4.364% | 1.810% | 0.147% | 0.150% | 0.146% | 0.065% |
| CodeGen-350M | 0.4 | 4.472% | 2.032% | 0.151% | 0.173% | 0.155% | 0.095% |
| CodeGen-2B   | 0.2 | 4.206% | 1.751% | 0.125% | 0.139% | 0.139% | 0.067% |
| CodeGen-2B   | 0.4 | 4.364% | 1.810% | 0.147% | 0.150% | 0.146% | 0.065% |
| CodeGen-2B   | 0.6 | 4.711% | 2.000% | 0.188% | 0.170% | 0.159% | 0.076% |
| CodeGen-2B   | 0.8 | 5.377% | 2.490% | 0.240% | 0.247% | 0.184% | 0.086% |

Table 3: Percentages of Pyflakes errors across different model sizes and temperatures. Higher temperatures always lead to more errors in every category. On the other hand, larger models do not necessarily generate fewer errors.

Among all errors, **Undefined Name** and **Unused Variable** are the most common ones, where the model either calls a variable that is not defined, or defines a variable but never uses it. Closely related are **Unused Import**, **Redefined While Unused**, and **Undefined Local**, which can be considered special cases of the first two. Models also sometimes unnecessarily use f-strings without giving any placeholder. It is worth pointing out that not all Pyflakes errors will impact execution. In fact, among the six types, only Undefined Name and Undefined Local may cause runtime problems. However, all these errors can harm readability and maintenance, which are critical for software development. Hence, it is important to address them to improve the quality of auto code completion.

Across sampling temperatures, we observe in every column that more errors are generated under higher temperatures, which is expected because generations in such cases are less confident.

Finding 3. *Higher temperature always leads to more errors of every type.*

The impact of model size on error rate is less consistent though. For Unused Variable, Unused Import, and Undefined Local, the error rate does decrease as the model scales up. However, the other three categories do not manifest such a correlation. We investigate the underlying reason for this mixed result, particularly in the case of Undefined Name. Notice that if an undefined name is a *function call*, it can potentially be defined afterwards outside the current function completion scope. While not guaranteed, the model might be able to fix this error by itself if we allow generating longer code instead of only one function. In contrast, using a *variable* without first defining it is usually a mistake. Even in some rare cases where the variable definition is made up correctly after the usage, such ordering is often less preferred in terms of coding style. In Figure 3, we break down the undefined names into *variables* and *functions*. We find that larger models yield fewer undefined variables, but more undefined functions, which demonstrates that the correlation between error count and model size varies for different error types.

Finding 4. *While larger models are more accurate code generators (Nijkamp et al., 2022), scaling up model size does not lead to a reduction in error counts for all error categories.*
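The variable/function breakdown in Figure 3 can be approximated by checking, in the AST of the full code, whether each undefined name reported by Pyflakes ever appears as a call. The sketch below illustrates this idea; it is a simplified heuristic of ours, assuming Pyflakes' `checker` and `messages` modules.

```python
import ast
from pyflakes import checker, messages


def split_undefined_names(full_code):
    """Split UndefinedName reports into function-like and variable-like usages."""
    tree = ast.parse(full_code)
    undefined = {m.message_args[0]
                 for m in checker.Checker(tree).messages
                 if isinstance(m, messages.UndefinedName)}
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    # a called name may still be defined later, outside the generated function;
    # an undefined plain variable is almost always a genuine mistake
    return undefined & called, undefined - called
```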
## 5.5 Correlation With Errors In Context We further study the correlation between errors in context and in generation. Denote by c the input context, x the model generation, e the error type. We write e ∈ c to mean c contains an error of type e. For every e, 4 we calculate P(e ∈ x|e ∈ c), the generation error rate when context contains the same type of error(s). We also report the relative ratio P(e∈x|e∈c) P(e∈x|e /∈c) to measure the impact of context. From Table 4, if the model observes errors in context, it is more likely to produce the same type of errors in generation, and the error rate can be amplified by 7∼200 times depending on the type. This is possibly an undesired consequence of the in-context learning capability of large language models. We also calculate P(e ∈ c|e ∈ x) to show how many of the generation errors co-occur with context errors. As indicated by the last column of Table 4, even though context errors can significantly amplify generations errors, the co-occurrences of two do not account for a large fraction. This implies problematic context is not the only factor for problematic generation, and it is often the case for models to produce errors even with correct context. ## 6 Discussion | P(e∈x|e∈c) | | | | |------------------------------|------------------------------------|--------|--------| | Error type | P(e∈x|e∈c) P(e∈x|e /∈c) P(e∈c|e∈x) | | | | Undefined Name | 26.33% | 7.80 | 25.99% | | Unused Variable | 14.13% | 8.45 | 8.56% | | FString Missing Placeholders | 20.63% | 215.50 | 35.08% | | Redefined | | | | | While Unused | 2.44% | 21.16 | 22.30% | | Undefined Local | 7.00% | 108.68 | 1.08% | We present a static evaluation framework for code completions generated by large language models. By utilizing the proposed framework, we conduct error analysis of CodeGen models on a large scale real-world Python evaluation set. Our experiment reveals common static errors made by pretrained models, as well as their frequency trend across model sizes and sampling temperatures. By pointing out weaknesses of existing models, we hope our study also sheds light on future directions towards more accurate code generation. There are a few limitations of this study. First, we focus on left-to-right code generation without considering right-side and cross-file context, which can be used to determine broader categories of errors with improved precision. Second, each static analysis tool has its own limitations. Thus, the presented analysis is limited by Pyflakes's accuracy and coverage to detect certain code issues. ## References Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2655–2668, Online. Association for Computational Linguistics. Alfred V Aho, Ravi Sethi, and Jeffrey D Ullman. 2007. Compilers: principles, techniques, and tools, volume 2. Addison-wesley Reading. Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, and Bing Xiang. 2022. Multi-lingual evaluation of code generation models. *CoRR*, abs/2210.14868. 
Jacob Austin, Augustus Odena, Maxwell I. Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. 2021. Program synthesis with large language models. *CoRR*, abs/2108.07732. Nathaniel Ayewah, William Pugh, David Hovemeyer, J David Morgenthaler, and John Penix. 2008. Using static analysis to find bugs. *IEEE software*, 25(5):22– 29. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. *CoRR*, abs/2107.03374. Brian Chess and Gary McGraw. 2004. Static analysis for security. *IEEE security & privacy*, 2(6):76–79. Brian Chess and Jacob West. 2007. *Secure programming with static analysis*. Pearson Education. Pär Emanuelsson and Ulf Nilsson. 2008. A comparative study of industrial static analysis tools. Electronic notes in theoretical computer science, 217:5–21. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A pre-trained model for programming and natural languages. In *Findings of the Association* for Computational Linguistics: EMNLP 2020, pages 1536–1547, Online. Association for Computational Linguistics. Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder: A generative model for code infilling and synthesis. CoRR, abs/2204.05999. Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified crossmodal pre-training for code representation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7212–7225, Dublin, Ireland. Association for Computational Linguistics. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1643–1652, Brussels, Belgium. Association for Computational Linguistics. Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen-tau Yih, Daniel Fried, Sida I. Wang, and Tao Yu. 2022. DS1000: A natural and reliable benchmark for data science code generation. *CoRR*, abs/2211.11501. Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, PoSen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. 
Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022. Competition-level code generation with alphacode. *CoRR*, abs/2203.07814. Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. Codexglue: A machine learning benchmark dataset for code understanding and generation. In *Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1,* NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. Codegen: An open large language model for code with multi-turn program synthesis. arXiv preprint. Alexey Svyatkovskiy, Ying Zhao, Shengyu Fu, and Neel Sundaresan. 2019. Pythia: Ai-assisted code completion system. In *Proceedings of the 25th ACM* SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pages 2727–2735. ACM. Michele Tufano, Cody Watson, Gabriele Bavota, Massimiliano Di Penta, Martin White, and Denys Poshyvanyk. 2019. An empirical study on learning bugfixing patches in the wild via neural machine translation. *ACM Trans. Softw. Eng. Methodol.*, 28(4). Priyan Vaithilingam, Tianyi Zhang, and Elena L. Glassman. 2022. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, CHI EA '22, New York, NY, USA. Association for Computing Machinery. Bart Van Oort, Luís Cruz, Maurício Aniche, and Arie Van Deursen. 2021. The prevalence of code smells in machine learning projects. In 2021 IEEE/ACM 1st Workshop on AI Engineering-Software Engineering for AI (WAIN), pages 1–8. IEEE. Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. 2021. CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation. In *Proceedings of the 2021* Conference on Empirical Methods in Natural Language Processing, pages 8696–8708, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jiang Zheng, Laurie Williams, Nachiappan Nagappan, Will Snipes, John P Hudepohl, and Mladen A Vouk. 2006. On the value of static analysis for fault detection in software. *IEEE transactions on software* engineering, 32(4):240–253. Albert Ziegler, Eirini Kalliamvakou, Shawn Simister, Ganesh Sittampalam, Alice Li, Andrew Rice, Devon Rifkin, and Edward Aftandilian. 2022. Productivity assessment of neural code completion. ## A Full Error Categories 11. string dot format extra positional argument In addition to those discussed in Section 5, we list all error categories that can be detected in model generated code in our experiments, with a minimal frequency of 0.001% by any of the models (i.e. 10 observations out of the total 1 million generations). 12. multi value repeated key literal 13. percent format positional count mismatch 14. tab error 15. string dot format extra named arguments ## Ast Errors (Eof Errors Indicated By Asterisk): 1. *unexpected EOF while parsing 2. *EOL while scanning string literal 3. *invalid syntax at EOF 4. *EOF while scanning triple-quoted string literal 5. invalid syntax not at EOF 6. 
missing parentheses in call to "print" 7. keyword argument repeated 8. leading zeros in decimal integer literals are not permitted; use an o prefix for octal integers 9. unmatched ")" 10. cannot assign to function call 11. positional argument follows keyword argument 12. expression cannot contain assignment ## Pyflakes Issues: 1. undefined name 2. unused variable 3. f-string missing placeholder 4. unused import 5. redefined while unused 6. indentation error 7. import shadowed by loop var 8. raise not implemented 9. invalid print syntax 10. is literal 16. import star not permitted 17. percent format unsupported format character 18. assert tuple 19. percent format extra named arguments ## B Examples For Top Error Types Below we list one code example for each of the error categories shown in Table 2 and 3. Following the definition of function completion task, in every example, context is from the beginning until the end of the docstring of the last function, and model completion is the body of the last function. 1 """ Secondary Structure dataset . """ 2 3 import numpy as np ![9_image_0.png](9_image_0.png) 7 9 def __init__ ( self , 10 name : str , 11 datapaths , 12 tokenizer , 15 16 """ 19 25 28 29 # Truncate and pad labels , Listing 1: unexpected EOF while parsing (line 31) 1 from pathlib import Path 2 from wasabi import msg 3 from . remote_storage import RemoteStorage 4 from . remote_storage import get_content_hash , get_command_hash 5 from .. _util import load_project_config 6 from .. _util import project_cli , Arg , logger 7 8 9 @project_cli . command (" push ") 10 def project_push_cli ( 11 \# fmt : off 12 remote : str = Arg (" default ", help =" Name or path of remote storage ") , 13 project_dir : Path = Arg ( Path . cwd () , help =" Location of project directory . Defaults to current working directory .", exists = True , file_okay = False ) , 14 \# fmt : on 15 ) : 16 """ Persist outputs to a remote storage . You can alias remotes in your 17 project .yml by mapping them to storage paths . A storage can be anything that 18 the smart - open library can upload to , e.g. AWS , Google Cloud Storage , SSH , 19 local directories etc . 20 21 DOCS : https :// xxx 22 """ 23 for nlp in load_project_config ( project_dir , {" directories ": [ ANIMAL_TRAIN_DIR ]}: 24 remote_storage = RemoteStorage . get ( remote ) 25 for command in [" train "]: 26 logger . debug ( f" Uploading { command } to remote storage '{ remote_storage } '") 27 path = Path ( project_dir ) / ANIMAL_TRAIN_DIR / command 28 upload_project ( remote_storage , path ) Listing 2: invalid syntax (line 23) 1 def prod (F , E ) : 2 """ Check that the factorization of P -1 is correct . F is the list of 3 factors of P -1 , E lists the number of occurrences of each factor .""" 4 M = prod_of_prime_factors (F , E ) 5 if not all ( i == 1 for i in M ) : 6 print " Error in prod " 7 print F , E 8 return 9 P = product ( F ) 10 P_1 = 1 11 for i in range ( len ( F ) ) : 12 P_1 *= F [ i ]** E [ i ] 13 if P != P_1 : 14 print " Error in prod " 15 print F , E 16 print P 17 print P_1 18 return Listing 3: Missing parentheses in call to "print" (line 6) ![11_image_0.png](11_image_0.png) 3 5 7 8 11 1 """ 2 This program will continually ask our user to give a number 3 and will calculate the factorial result of the number and print it on the console . 4 ![11_image_1.png](11_image_1.png) 6 """ 7 9 10 12 """ 14 inputs . 
15 """ 21 else : 22 main () Listing 5: undefined name "factorial" (line 18) 1 def check ( full_path , encoding ) : 2 assert type ( full_path ) == str , f'\ ' full_path \ ' is of { type ( full_path )}. Only type \ ' str \ ' is acceptable .' 3 assert full_path != "", "\ ' full_path \ ' is empty ." 4 assert type ( encoding ) == str , f'\ ' full_path \ ' is of { type ( encoding )}. Only type \ ' str \ ' is acceptable .' 5 assert encoding != "", "\ ' encoding \ ' is empty ." 6 7 def file_read ( full_path : str , encoding = " utf8 ") : 8 ''' 9 Author : xxx 10 11 Reads file at " full_path " and returns its data in a list . 12 ''' 13 14 check ( full_path , encoding ) 15 encoding_check = encoding 16 full_path = full_path . strip () 17 f = open ( full_path , "r", encoding = encoding ) 18 lines = f . readlines () 19 f . close () ![12_image_0.png](12_image_0.png) 21 return lines Listing 6: local variable "encoding_check" is assigned to but never used (line 15) 1 import os 2 import json 3 4 from convinse . library . utils import store_json_with_mkdir , get_logger 5 6 7 class HeterogeneousAnswering : 8 def __init__ ( self , config ) : 9 """ Initialize HA module .""" 10 self . config = config 11 self . logger = get_logger ( __name__ , config ) 12 13 def train ( self , sources =["kb", " text ", " table ", " info "]) : 14 """ Method used in case no training required for HA phase . """ 15 self . logger . info ( f"No need to train .") 16 pass Listing 7: f-string is missing placeholders (line 15) 1 import os 2 import urllib . parse 3 import sqlite3 4 5 SQL = """ 6 SELECT p. ZAUTHOR , p. ZTITLE , e.ZTITLE , e. ZASSETURL , e. ZPUBDATE 7 from ZMTEPISODE e 8 join ZMTPODCAST p 9 on e. ZPODCASTUUID = p. ZUUID 10 where ZASSETURL NOTNULL ; 11 """ 12 13 14 def check_imports () : 15 ''' Prompts for password to install dependencies , if needed ''' 16 import os , importlib , importlib . util 17 import urllib . parse 18 19 \# Check for dependency installs 20 \# Can be done more simply , but this way I can avoid importing anything from zmodel , 21 \# which is nice since I can see what 's going on. 22 for k , v in DEPS . items () : 23 try : 24 importlib . import_module ( k ) 25 except ImportError as e : 26 importlib . util . find_spec ( k ) 27 if importlib . util . find_spec ( k ) is None : 28 os . system ( f'pip install {v}') Listing 8: "urllib.parse" imported but unused (line 17) 1 import kfp . deprecated as kfp ![13_image_0.png](13_image_0.png) ![13_image_1.png](13_image_1.png) 2 from kfp . deprecated import components , dsl , compiler 3 4 def get_run_info ( run_id : str ) : 5 """ Example of getting run info for current pipeline run . """ 6 import kfp . dsl as dsl 7 client = kfp . Client () 8 run = client . run_details ( run_id ) 9 print ( f"Run details :\n{run }") 10 print ( f" Pipeline details :\n{run. pipeline_runtime }") Listing 9: redefinition of unused "dsl" from line 2 (line 6) 1 """ Check for nonlocal and used - before - assignment """ 2 \# pylint : disable = missing - docstring , unused - variable , no -init , too -few - public - methods 3 4 __revision__ = 0 5 6 def test_ok () : 7 """ uses nonlocal """ 8 cnt = 1 9 def wrap () : 10 nonlocal cnt 11 cnt = cnt + 1 12 wrap () 13 14 def test_fail () : 15 """ doesn 't use nonlocal """ 16 cnt = 1 17 def wrap () : 18 cnt = cnt + 1 \# [used -before - assignment ] 19 wrap () Listing 10: local variable "cnt" defined in enclosing scope on line 16 referenced before assignment (line 18)
ahuja-etal-2023-scalable
Scalable and Safe Remediation of Defective Actions in Self-Learning Conversational Systems
https://aclanthology.org/2023.acl-industry.35
Off-policy reinforcement learning has been a driving force behind state-of-the-art conversational AIs, leading to more natural human-agent interactions and improving user satisfaction for goal-oriented agents. However, in large-scale commercial settings, it is often challenging to balance policy improvements against experience continuity across the broad spectrum of applications handled by such a system. In the literature, off-policy evaluation and guard-railing on aggregate statistics have been commonly used to address this problem. In this paper, we propose a method for curating and leveraging high-precision samples sourced from historical regression incident reports to validate, safeguard, and improve policies prior to online deployment. We conducted extensive experiments using data from a real-world conversational system and actual regression incidents. The proposed method is currently deployed in our production system to protect customers against broken experiences and enable long-term policy improvements.
# Scalable And Safe Remediation Of Defective Actions In Self-Learning Conversational Systems Sarthak Ahuja, Mohammad Kachuee, Fateme Sheikholeslami, Weiqing Liu, Jaeyoung Do Amazon Alexa AI, Seattle, WA {sarahuja, kachum, shfateme, lweiqing, domjae}@amazon.com ## Abstract Off-Policy reinforcement learning has been a driving force for the state-of-the-art conversational AIs leading to more natural humanagent interactions and improving the user satisfaction for goal-oriented agents. However, in large-scale commercial settings, it is often challenging to balance between policy improvements and experience continuity on the broad spectrum of applications handled by such system. In the literature, off-policy evaluation and guard-railing on aggregate statistics has been commonly used to address this problem. In this paper, we propose a method for curating and leveraging high-precision samples sourced from historical regression incident reports to validate, safe-guard, and improve policies prior to the online deployment. We conducted extensive experiments using data from a real-world conversational system and actual regression incidents. The proposed method is currently deployed in our production system to protect customers against broken experiences and enable long-term policy improvements. ## 1 Introduction Conversational AI systems such as Apple Siri, Amazon Alexa, Google Assistant, and Microsoft Cortana rely on multiple components for speech recognition, natural language understanding (NLU), skill routing, and generating a response to the user. A skill routing block selects the right skill/provider and NLU interpretation to serve a user's request. Skill routing is a challenging problem due to the number of skills present in a realworld conversational system. Furthermore, new skills are being introduced every day, existing skills may change behavior over time while some others getting deprecated leading to an ever changing customer-skill dynamic (Sarikaya, 2017; Park et al., 2020). To address such challenges, state of the art skill routing systems cast the problem as a reinforcement ![0_image_0.png](0_image_0.png) learning (RL) problem where the agent performs periodic off-policy updates. The RL agent continually improves or self-learns by exploring alternative decisions and learning from the logged customer interaction data (Kachuee et al., 2022). While the RL-based approach has many merits around scalability such as no need for expensive human annotation, it also has a tendency to cause instabilities in the agent's behavior which not only regress user retention and trust, but also manifest as revenue loss for business-critical domains (Kachuee and Lee, 2022; Ke et al., 2022). Any policy update inherently entails a risk of breaking certain current user experience, as each deployment despite improving the overall aggregate performance, may regress on certain subpopulations and edge cases which is not acceptable in a commercial system (Li et al., 2021). Furthermore, the frequent and automated nature of these refreshes proportionately increases this risk for the policy to deviate from its stable state when handling edge cases. Techniques like pre-deployment offline evaluation and constrained optimization are 361 proposed to guardrail against such regressions but are often limited by volatile predefined segmentation of data and metrics that only consider coarse sub-populations (Kachuee et al., 2021, 2022; Hoffman et al., 2014; Balakrishnan et al., 2018). 
These statistical approaches to learning and evaluation further struggle to let the agent protect, learn and retain knowledge of historical regressions that are self-reported by users. Such incidents are usually characterized as belonging to a narrow traffic segment but of high importance where reward metrics are not very reliable. Typically, to mitigate them, high-recall hot-fixes are deployed to override policy and quickly address the incident as depicted in figure 1. Note that these hot-fixes are often handcrafted rules that are not reliable for guard-railing against recurrence and performing a long-term remediation (Karampatziakis et al., 2019). In this paper we posit that for business-critical user-reported defects it is crucial to consider individual cases so as to learn and gate on the instancelevel behavior directly. In other words, we propose complementing the current learning and evaluation mechanisms operating on aggregate metrics with high-precision instance-level analysis. Herein, we outline a novel architecture that extends RLbased skill-routing to use a set of curated highvalue user-reported defective samples, for guardrailing against re-occurrence and performing longterm remediation to re-onboard those cases to the policy; thereby retiring the hot-fixing rules introduced during the short-term mitigation. A highlevel overview of the proposed system is presented in figure 2. To evaluate the suggested framework, we conducted extensive online and offline experiments using data from a real-world conversational agent. We observe that the proposed approach leads to a high assimilation (> 70%) of the defective traffic back to RL policy i.e. long-term remediation and eventual retirement of the hot-fixes. Further, the deviation percentage in decision replication rate and the expected reward in both offline and online settings indicate that the proposed approach has no statistically significant side-effect on the remaining traffic segments. ## 2 Proposed Method 2.1 Problem Formulation We consider the general formulation for an RL agent characterized by Πθ(a|X) where θ are train- ![1_image_0.png](1_image_0.png) able parameters to specify the action selection distribution for each action a ∈ {1 *. . . T*} conditioned on the current state/context, X. Here, after taking an action, the agent observes a reward denoted by r. The task for the agent is to learn from the experiences collected from the current policy, Π0(a|X), interactions in an off-policy setting, to train a new policy parameterized by θ, Πθ(a|X). Off-policy updates are not always stable and occasionally lead to unsatisfactory decisions (Swaminathan et al., 2016; Joachims et al., 2018; Lopez et al., 2021). These incidents are reported in the form of a handful of samples reproducing the defective action called *regression* samples. Alongside the regression samples, typically, the report is further supplemented with complementary and contrasting samples by the user that convey the desired agent behavior. Such samples are referred to as progression samples here. Collectively we denote the dataset of all such reported regression and progression (R/P) samples across all incidents as DRP . These high value samples are carefully stored with additional meta-data and used in evaluating against their recurrence of these incidents (section 2.2) as well as for their long-term remediation by getting assimilated into the policy (section 2.4). 
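For illustration, each curated sample could be stored as a small structured record along the following lines; the field names here are ours and simply mirror the metadata described next.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class SampleType(Enum):
    REGRESSION = "regression"      # reproduces the defective action
    PROGRESSION = "progression"    # demonstrates the desired behavior


@dataclass
class RPSample:
    sample_id: str                 # unique identifier of the reported sample
    context: dict                  # policy input X for replaying the case
    logged_action: int             # action a observed when the incident was reported
    sample_type: SampleType
    description: str               # short description of the issue
    severity: int                  # severity of the corresponding incident
    reported_on: date              # date the sample was reported
    active: bool = True            # life-cycle status (False once deprecated)
```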
The metadata may contain information such as unique sample identifiers, description of the issue, type of the sample (i.e. regression or progression), severity of the corresponding incident, date which the sample was reported, and the current life-cycle status of the sample (i.e. deprecated or active). Remediation involves providing supervision signals for policy updates which is a non-trivial and time-consuming process. Meanwhile, to immediately mitigate business impact from an incident, hot-fixing is usually employed by introducing handcrafted rules on the problematic segment. The set of hand-crafted rules from all incidents reported in a time period, define an eligibility criteria, G(Πθ, X) that decides based on the input sample X and the associated policy Πθ, if an input sample is eligible for the RL policy or should be handled by the hand-crafted rules. We use the notation G(Πθ, X) ∈ {0, 1} to represent the logic that returns one if a sample should be handled by Πθ, or zero if should be redirected to hot-fixes. The set of hot-fixes can be thought of as a separate abstract policy Γ(a|X) that runs on incoming traffic whenever the eligibility criteria G(Πθ, X) is not satisfied: $$\Pi_{\theta}(a|X)=\begin{cases}\Gamma(a|X)&G(\Pi_{\theta},X)=0\\ \Pi_{\theta}(a|X)&\text{otherwise}\end{cases}.\tag{1}$$ ## 2.2 Evaluation The evaluation process starts by replaying the new policy Πθ on the curated samples (*X, a, r*) ∈ {DRP } to get the policy action propensities Πθ(X). Then, we compute the most likely action under the new policy as ba = arg max(Πθ(X)). For progression samples, we report a sample as pass if ba is equal to the logged action a, otherwise it is considered as a *fail* case. Alternatively, for regression samples, it would be considered as a fail if and only if the logged unsatisfactory action was repeated by the new policy. Also, to assign fail/pass certainties for each case, we compute the likelihood of each assignment as Πθ(ba|X) for passed progression or failed regression, and otherwise 1 − Πθ(ba|X). Additionally, we can compute the expected eligibility of a sample given the new policy as: $$\begin{array}{c}{{Q(X):=\mathbb{E}[G(\Pi_{\theta},X)]}}\\ {{=\sum_{i\in1...|a|}G(\Pi_{\theta}(a_{i}|X))\Pi_{\theta}(a_{i}|X)}}\end{array}\tag{2}$$ Intuitively, E[G(Πθ, X)] measures the expected likelihood of handling sample X by policy Πθ rather than a hot-fix. Thus in short, we report the following evaluation metrics for each R/P sample in the evaluation stage: 1. **Expected Eligibility (Q)**: probability that a particular sample will be served by the RL policy given the current state of hot-fixes in place; 0 ≤ P(Q) ≤ 1. 2. **Sample Status Certainty (C)**: confidence on the assigned sample status (PASS/FAIL) based on the evaluation of the policy output for that particular sample; 0 ≤ P(C) ≤ 1. The last step for the evaluation is to generate a report to be used by human operators as well as automated guard-railing (next step) to understand any failures, their certainty, and likelihood of exposing such behavior to the end user. Figure 3 shows an example of such report. ## 2.3 Guard-Railing Hot-fixes introduced for mitigating business impact due to high-severity regression incidents are conditioned on the policy input (X) and the output (Πθ(a|X)). Thus in the event of a subsequent policy refresh, there is always a chance that the associated eligibility criteria G(Πθ, X) for the associated hand-crafted rules gets out-dated and starts to redirect the problematic traffic segments to the RL model. 
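Before describing the guard-railing logic, the per-sample quantities from Section 2.2 that it consumes can be computed roughly as follows. This is a simplified sketch; `policy_probs` and `eligible` stand in for the new-policy propensities Πθ(·|X) and the per-action eligibility indicator G in Equation 2.

```python
import numpy as np


def evaluate_sample(policy_probs, eligible, logged_action, is_regression):
    """policy_probs[i]: new-policy propensity of action i given this sample's context X;
    eligible[i]: 1 if action i would be served by the RL policy (G), 0 if by a hot-fix."""
    a_hat = int(np.argmax(policy_probs))

    if is_regression:
        # a regression sample fails iff the logged unsatisfactory action is repeated
        failed = (a_hat == logged_action)
    else:
        # a progression sample passes iff the logged desired action is reproduced
        failed = (a_hat != logged_action)

    # certainty of the assigned status: pi(a_hat|X) for passed progression or
    # failed regression, otherwise 1 - pi(a_hat|X)  (Section 2.2)
    certainty = policy_probs[a_hat] if failed == is_regression else 1.0 - policy_probs[a_hat]

    # expected eligibility Q(X) = sum_i G(.) * pi_theta(a_i|X)  (Equation 2)
    expected_eligibility = float(np.dot(eligible, policy_probs))

    status = "FAIL" if failed else "PASS"
    return status, certainty, expected_eligibility
```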
To prevent the recurrence of the regressions, we perform pre-deployment guard-railing right after every policy update using the evaluation parameters defined in section 2.2 For the sample X, assumed at index i of DRP , we perform gating on their intersection probability of the experiment eligibility and sample status certainty P(Ci ∩ Qi) i.e. a sample being eligible for the RL policy with a high certainty of causing a misroute. For failing cases (Ci = FAIL), the best (most lenient) and worst (most strict) case scenario are depicted in figure 4. To prevent any unnecessarily blocks, we use the best case setup ![3_image_0.png](3_image_0.png) Algorithm 1: Guard-railing on a single failing regression/progression sample ![3_image_2.png](3_image_2.png) P(C = F AIL) ∼ P(C) (failure certainty), /* fail guard-railing */ /* pass guard-railing */ when comparing the minimum intersection probability against a set failure threshold Tf . For passing samples (Ci = PASS) we simply invert the sample certainty value and keep the remaining logic as is. Algorithm 1 summarizes the guard-railing logic for the failing case for a single sample. When a guard-rail condition assertion fails, the associated hot-fix is updated by operators to make the guard-rail criteria is met. It should be noted here that adding and updating hot-fixes is only a temporary solution because it takes away traffic from the RL policy and redirects it towards makeshift hand-crafted rules which hampers the scalability of the larger system. It is therefore crucial to start the process of properly assimilating the traffic handled by these rules back to the RL policy after the short-term mitigation. ![3_image_1.png](3_image_1.png) ## 2.4 Remediation As a part of a regular training cycle for off-policy learning, we optimize a loss function L0. For simplicity of explanation, in this paper, we use the inverse propensity scoring (IPS) objective as an example for the case of contextual bandit formulation (Dudık et al., 2014): $$L_{0}=\mathbb{E}_{X,a,r\sim\mathbb{D}}=-r{\frac{\Pi_{\theta}(a|X)}{\Pi_{0}(a|X)}}.\qquad(3)$$ We inject R/P samples in the training loop to the regular training batches and replay them during each iteration. To improve the generalization and data efficiency of using the limited R/P data, we perform representation space data augmentation. This is done on a mini-batch of R/P samples using Gaussian noise injection during the forward pass on each hypothesis at hidden-layer representations as depicted in figure 5. It is further defined in the equation below where x¯ is the hidden space feature vector for hypothesis x, x¯′is the augmented sample vector, j is the feature index and λ is the noise scaling factor. $${\bar{\mathbf{x}}}^{\prime}{}_{j}={\bar{\mathbf{x}}}^{\prime}{}_{j}+\lambda c,c\sim{\mathcal{N}}(0,1)\qquad\qquad(4)$$ The auxiliary loss (LRP ) is computed from the regular loss objective (L0) albeit on augmented data sampled from R/P dataset, DRP , represented as D′RP . When introducing the R/P samples as a part of the training data, we make adjustments such that the added samples discourage action replication for regression cases and encourage replication logged of actions for progression cases. To implement this, we reshape reward values such that regression and progression cases get the lowest and highest possible reward. We represent this reshaped Algorithm 2: Augmented Exp. 
Replay input :D (dataset of logged interactions from Π0), DRP (dataset of R/P samples), η (train replay loss mix ratio), α (# R/P sample per regular batch), β (# augmentations per R/P sample), λ (noise scaling factor) 1 D ← preprocess(D) 2 DRP ← preprocess(DRP ) 3 D ′RP ← reshapeReward(DRP ) 4 for d in nextBatch(D) do /* sample R/P batch with replacement */ 5 drp = sampleBatch(D ′RP , size = α ∗ β) /* loss on regular data batch */ 6 L0 ← loss(Πθ, d) /* loss on rp data batch */ 7 LRP ← loss(Πθ, drp, noise = λ) /* combine regular and R/P loss */ 8 L ← (1 − η)L0 + (η)L ′ /* use any optimizer f for Πθ */ 9 θ ← f(θ, ∇θL) reward via r′, and the auxiliary loss in equation 5. $$L_{R P}=\mathbb{E}_{X,a,r^{\prime}\sim\mathbb{D}_{R P}^{\prime}}=-r^{\prime}{\frac{\Pi_{\theta}(a|X)}{\Pi_{0}(a|X)}}.\quad(5)$$ Finally, we perform a weighted average of the auxiliary loss (LRP ) with the regular loss (L0) using a weight term η to get the overall loss as depicted in equation 6. L = (1 − η)L0 + (η)LRP , 0 < η < 1. (6) Additionally, we have parameters, α and β, that control the number of R/P samples per batch and number of augmentations to perform per R/P sample in the training loop respectively. Refer algorithm 2 for more step by step details. ## 3 Experiments 3.1 Setup To evaluate the proposed remediation approach, we conducted online and offline experiments in realworld production settings. In this section, we use the term *baseline policy* to refer to the approach suggested by Kachuee et al. (2022). The proposed framework extend the baseline approach and henceforth referred as *R/P policy*. To simplify the comparisons, we follow the same model architecture and design choices as suggested by Kachuee et al. (2022). In summary, input to the model is a set of routing candidates, i.e., a combination of embedded ASR, NLU, and context vectors as well as skill embeddings. The output is the softmax-normalized propensity of selecting each candidate to handle the user request. The final model has about 12M trainable parameters consisting of a language model to encode utterance, embeddings for contextual signals, and fullyconnected layers. To train and evaluate our models, we use logged data from a current production policy. The observed reward is based on a curated function of user satisfaction metrics. Our dataset consists of about 90M samples roughly divided into 75% training, 12.5% validation, and 12.5% test hold-out sets covering tens of domains with imbalanced number of samples. Our R/P dataset consists of ∼50 samples and split into 67% training and 33% test hold-out sets containing roughly an equal number of regression and progression samples (collected over 10-15 reported defects). We ensure that each incident finds similar representation in both the train and test hold-out set. Data used in this work was de-identified to comply with our customer privacy guidelines. Also, due to confidentiality concerns, we are not able to share specifics about the historical regression incidents. ## 3.2 Metrics1 3.2.1 Remediation Metrics We use *remediation percentage* as a key metric to quantify the percentage of R/P samples with status FAIL that were directed back to the RL policy with status PASS in a single model update using the remediation approach shared in section 2.4. In an ideal scenario we would expect this metric to be as high as possible. It is defined more concretely in equation 7 below where C and C′represent the sample statuses obtained from baseline and R/P policy respectively. 
$$\sum_{i=0}^{\left|\mathbb{D}_{\mathbb{R}\mathbb{P}}\right|}1_{(C_{i}=F A I L)}-\sum_{i=0}^{\left|\mathbb{D}_{\mathbb{R}\mathbb{P}}\right|}1_{(C_{i}^{\prime}=F A I L)}\ast100\quad(7)$$ ## 3.2.2 Deviation Metrics To validate that the auxiliary R/P loss is not having an adverse effect on other data segments, we track the deviation in *decision replication rate* and the expected reward for the remainder of traffic. In an ideal scenario we would expect both deviation metrics to be as small as possible. 1To comply with our privacy and business guidelines, in all instances, we only report relative and normalized results which do not represent the actual scales or metric values. ## 3.3 Hyperparameters For the train replay loss mix ratio η we use values from {0.02, 0.2} and for noise variance λ we use values from {0, 0.05, 1.0, 2.0, 3.0} to find the best parameters based on the remediation percentage. We particularly note during an ablation that having no noise leads to poor generalization on the R/P hold out set. Consequently, we use a grid search for finding the best setting for the number of R/P samples per batch α ∈ {2, 5, 10} and number of augmentation per R/P sample β ∈ {1, 20, 50} to find the best settings for each benchmark. Based on this search, we finally used η as 0.2, α as 5, β as 20 and λ as 2.0. ## 3.4 Training Details For the baseline policy we trained each model for 8 epochs and take the best performing model based on the macro-averaged violation rate of added domain based constraints measured on the validation set. We used a cluster of 32 NVIDIA V100 GPUs to process a mini-batch size of 32K samples (1000 samples on each GPU). Each individual run took between 14 to 16 hours. During R/P policy training we added an augmented batch of 100 R/P samples (α = 5, β = 20) to each GPU creating a further addition of 3200 samples to each mini-batch. Each experiment was run four times using different random seeds for weight initialization to report the mean and ±2 standard deviation of each result. ## 4 Results We conducted offline experiments and measured off-policy estimated impact of the proposed method on replication and reward metrics. For the estimating the expected reward, we used an IPS estimator. On our training set we observed an average remediation percentage of 70.0% (71.42% for regression and 66.6% for progression samples) indicating that the proposed approach leads to a high assimilation of the defective traffic back to RL policy. The number can also be interpreted as the normalized percentage of reduction in RP samples that used to be handled by the hot fixes and instead be handled correctly by the RL policy. Using this approach we were successfully able to absorb the entire hold out set to the RL policy and identify the potential to retire ∼70% of the representative hot-fixes. Table 1 shows the deviation percentage in decision replication rate and the off-policy estimated reward on the hold out dataset. We see negligible difference between both the policies indicating that the remediation has minimal side-effect on the remaining traffic segments. | Offline | Replication (%) | Expected Reward (%) | |-----------------|-------------------|-----------------------| | Baseline Policy | 98.31±0.0005 | 89.55±0.0005 | | RP Policy | 98.31±0.0071 | 89.56±0.0052 | | Deviation (%) | 0.00±0.0072 | 0.01±0.0054 | Table 1: Comparison of the overall replication and expected reward on our offline test set reported for the baseline and RP policies. 
We then compared our proposed approach to the baseline on live production traffic in an online A/B based setup consisting of a large number of actual customers. The results in Table 2 show that, similar to our offline analysis, we observed minimal and non-statistically significant deviation in the measured reward between control and treatment. This further validates our claim that the proposed remediation has negligible impact on the remaining traffic segments. | Online | Measured Reward (%) | |-----------------|-----------------------| | Baseline Policy | 87.81 | | R/P Policy | 87.80 | | Deviation (%) | -0.01 (p-value 0.4) | ## 5 Conclusion In this paper, we presented a method to leverage historical regressions reported by customers of a conversational AI to guard-rail against future recurrences of similar issues and to improve the trained policies to learn from such high-value experiences. In summary, the introduced method consists of curating a regression/progression dataset from historical incidences, logic to evaluate future polices on such data prior to the potential online deployment, performing guard-railing against deploying policies that pose a high risk of incident recurrences, and finally leveraging such a high-value dataset as a source of supervision during the training process to enable long-term behavior corrections. We conducted extensive online and offline experiments and deployed this work in a real-world production system to ensure serving best experience for our customers. ## Limitations We believe a potential limitation of this work is its reliance of curated samples from historical incidents. Due to the complexity of real-world conversational agents, the decision to introduce a new sample to the R/P set requires human expert involvement which could be costly and pose challenges in terms of reliability. Another challenge we faced after the deployment of this framework was managing the life-cycle of the collected R/P samples. In a dynamic environment, a regression or progression pattern may lose relevance over time. Therefore, we find it challenging to re-actively deal with retirement of such historical samples. ## Ethics Statement This work is centered on ensuring the best experiences are served by a conversational AI through learning and validation of customer initialed reports. Therefore, we do not assess any particular ethical risks associated with this work. However, one penitential though unlikely risk area would be human expert decisions for data collection to be biased on certain use-cases or interactions. We did not observe manifestation of such risk impacting our experiments and after the production deployment. Regarding human data handling practices, we ensured anonymity of data samples used in this study and did not reveal any specifics that would violate our internal policies or our customer privacy policies. ## References Avinash Balakrishnan, Djallel Bouneffouf, Nicholas Mattei, and Francesca Rossi. 2018. Using contextual bandits with behavioral constraints for constrained online movie recommendation. In *IJCAI*, pages 5802–5804. Miroslav Dudık, Dumitru Erhan, John Langford, and Lihong Li. 2014. Doubly robust policy evaluation and optimization. *Statistical Science*, 29(4):485–511. Matthew Hoffman, Bobak Shahriari, and Nando Freitas. 2014. On correlation and budget constraints in model-based bandit optimization with application to automatic machine learning. In Artificial Intelligence and Statistics, pages 365–374. PMLR. 
Thorsten Joachims, Adith Swaminathan, and Maarten de Rijke. 2018. Deep learning with logged bandit feedback. In *International Conference on Learning* Representations. Mohammad Kachuee and Sungjin Lee. 2022. Constrained policy optimization for controlled selflearning in conversational ai systems. arXiv preprint arXiv:2209.08429. Mohammad Kachuee, Jinseok Nam, Sarthak Ahuja, JinMyung Won, and Sungjin Lee. 2022. Scalable and robust self-learning for skill routing in large-scale conversational ai systems. Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Mohammad Kachuee, Hao Yuan, Young-Bum Kim, and Sungjin Lee. 2021. Self-supervised contrastive learning for efficient user satisfaction prediction in conversational agents. In *Proceedings of the 2021* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4053–4064. Nikos Karampatziakis, Sebastian Kochman, Jade Huang, Paul Mineiro, Kathy Osborne, and Weizhu Chen. 2019. Lessons from contextual bandit learning in a customer support bot. *arXiv preprint* arXiv:1905.02219. Zixuan Ke, Mohammad Kachuee, and Sungjin Lee. 2022. Domain-aware contrastive knowledge transfer for multi-domain imbalanced data. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 25–36. Han Li, Sunghyun Park, Aswarth Dara, Jinseok Nam, Sungjin Lee, Young-Bum Kim, Spyros Matsoukas, and Ruhi Sarikaya. 2021. Neural model robustness for skill routing in large-scale conversational ai systems: A design choice exploration. arXiv preprint arXiv:2103.03373. Romain Lopez, Inderjit S Dhillon, and Michael I Jordan. 2021. Learning from extreme bandit feedback. Proc. Association for the Advancement of Artificial Intelligence. Sunghyun Park, Han Li, Ameen Patel, Sidharth Mudgal, Sungjin Lee, Young-Bum Kim, Spyros Matsoukas, and Ruhi Sarikaya. 2020. A scalable framework for learning from implicit user feedback to improve natural language understanding in large-scale conversational ai systems. *arXiv preprint arXiv:2010.12251*. Ruhi Sarikaya. 2017. The technology behind personal digital assistants: An overview of the system architecture and key components. IEEE Signal Processing Magazine, 34(1):67–81. Adith Swaminathan, Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudík, John Langford, Damien Jose, and Imed Zitouni. 2016. Off-policy evaluation for slate recommendation. *arXiv preprint* arXiv:1605.04812.
lin-etal-2023-mobilenmt
{M}obile{NMT}: Enabling Translation in 15{MB} and 30ms
https://aclanthology.org/2023.acl-industry.36
Deploying NMT models on mobile devices is essential for privacy, low latency, and offline scenarios. To achieve high model capacity, NMT models are rather large, and running them on devices is challenging given the limited storage, memory, computation, and power budget. Existing work either focuses only on a single metric such as FLOPs or relies on a general-purpose engine that is not well suited to auto-regressive decoding. In this paper, we present MobileNMT, a system that can translate in 15MB and 30ms on devices. We propose a series of principles for model compression when combined with quantization. Further, we implement an engine that is friendly to INT8 and decoding. With the co-design of model and engine, compared with the existing system, we speed up decoding by 47.0x and save 99.5% of memory with only an 11.6% loss of BLEU. The code is publicly available.
## Mobilenmt: Enabling Translation In 15Mb And 30Ms Ye Lin1∗ , Xiaohui Wang2, Zhexi Zhang2, Mingxuan Wang2, Tong Xiao1,3† , Jingbo Zhu1,3 1NLP Lab, School of Computer Science and Engineering, Northeastern University, Shenyang, China 2 ByteDance 3NiuTrans Research, Shenyang, China {linye2015}@outlook.com {wangxiaohui.neo,zhangzhexi,wangmingxuan.89}@bytedance.com {xiaotong,zhujingbo}@mail.neu.edu.cn ## Abstract Deploying NMT models on mobile devices is essential for privacy, low latency, and offline scenarios. For high model capacity, NMT models are rather large. Running these models on devices is challenging with limited storage, memory, computation, and power consumption. Existing work either only focuses on a single metric such as FLOPs or general engine which is not good at auto-regressive decoding. In this paper, we present MobileNMT, a system that can translate in 15MB and 30ms on devices. We propose a series of principles for model compression when combined with quantization. Further, we implement an engine that is friendly to INT8 and decoding. With the co-design of model and engine, compared with the existing system, we speed up 47.0× and save 99.5% of memory with only 11.6% loss of BLEU. The code is publicly available at https://github.com/zjersey/Lightseq-ARM. ## 1 Introduction As a classic subfield of natural language processing, neural machine translation (NMT) has achieved great success in recent years. Most of the studies focus on improving the accuracy of large machine translation systems, ignoring whether such models are easy to be deployed in real-world scenarios. Here we adopt four metrics to evaluate whether an NMT model is deployment-friendly. (1) **Model** size is the most important metric in model compression (Han et al., 2016). (2) **Floating-point** operations (FLOPs) is commonly used to evaluate computational complexity in neural architecture design. (3) Memory or **Memory mapped I/O** (MMI/O) reflects the memory requirements of the real running system. (4) **Decoding speed** depends on many realistic factors such as engine implementation and the power of avaliable processors. ∗This work is done during the internship at ByteDance. †Corresponding author. ![0_image_0.png](0_image_0.png) In this paper, we propose MobileNMT, a Transformer-based machine translation system that can translate in 15MB and 30ms. First, we propose three principles for designing parameter-limited MT models: 1) To compress embedding, reducing vocabulary size is simple and effective compared to embedding factorization; 2) To compress the encoder and decoder, reducing the model width is much more efficient in computation and memory than cross-layer parameter sharing; 3) Encoder depth is very important to ensure accuracy. To achieve higher accuracy, we adjust the training hyperparameters according to the newly designed structure, and adopt sequence-level knowledge distillation. For industrial deployment, we optimize general matrix multiplication (GEMM) and memory in our own inference engine and use the 8-bit integer for storage and computation. As shown in Table 1, the 10MB MobileNMT achieves 88.4% 368 ![1_image_0.png](1_image_0.png) performance of Transformer-big with only 1.1% size and runs 47.0× faster on decoding, which can be easily deployed and used. Our contributions are summarized as follows: - We propose three principles for parameterlimited MT models to make more efficient use of computation and memory resources. 
- We adjust training strategies according to the newly designed structure to achieve higher translation accuracy.
- We develop a mobile inference engine to bridge the gap between industrial practice and theoretical research.

## 2 Architecture Design Principles

For model compression and acceleration, most studies focus on a single metric such as model size or FLOPs, without considering real-world applications. In this section, we consider four metrics, including model size, FLOPs, memory usage, and decoding speed, and then propose three design principles for parameter-limited MT models. We choose the Transformer (Appendix A) as our baseline because of its great success in machine translation.

## 2.1 Embedding Compression

The vocabulary size V usually reaches tens of thousands in NMT models (Akhbardeh et al., 2021). The embedding parameters can reach tens of millions and greatly affect the overall parameter efficiency.

Embedding Factorization (Scaling E). For model compression, embedding factorization has been widely studied (Lan et al., 2020; Grave et al., 2017; Baevski and Auli, 2019). To decouple the embedding dimension E and hidden dimension H, it additionally introduces a trainable transformation weight $W_T \in \mathbb{R}^{E \times H}$, where E ≤ H. After factorization, the embedding parameters decrease from $O(V \times H)$ to $O(V \times E + E \times H)$.

Reducing Vocabulary Size (Scaling V). A more direct way to compress the embedding is to reduce the vocabulary size V. To reduce the risk of out-of-vocabulary words, here we adopt Byte-Pair Encoding (BPE) (Sennrich et al., 2016; Ott et al., 2018; Ding et al., 2019; Liu et al., 2020). In most studies on machine translation, the adopted BPE merge operations range from 30∼40K (Ding et al., 2019). VOLT shows that we can find a well-performing vocabulary with higher BLEU and fewer BPE merge operations (Xu et al., 2021). Experiments in Lin et al. (2021)'s work also show that smaller vocabularies may be better.

Reducing Vocabulary Size Performs Better. To compare the two embedding compression methods, here we select three baseline models of different sizes. The model settings are shown in Table 1.

| Module | Dim | Base | Small | Tiny |
|---|---|---|---|---|
| Embed (×1) | – | N/A | N/A | N/A |
| Encoder (×6) | Hidden | 512 | 256 | 128 |
| | Head | 8 | 4 | 2 |
| | FFN | 2048 | 1024 | 512 |
| Decoder (×6) | Head | 8 | 4 | 2 |
| | FFN | 2048 | 1024 | 512 |
| Params | | 64.5M | 21.5M | 8.0M |

![1_image_1.png](1_image_1.png) ![1_image_2.png](1_image_2.png) As shown in Table 2, the parameters and FLOPs are almost the same for these two methods. As shown in the first row of Fig. 2, compared to reducing the vocabulary size, the model with embedding factorization performs poorly in most cases, especially when the parameters are limited. ![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png)

## 2.2 Encoder/Decoder Compression

For encoder and decoder compression, here we compare models with cross-layer parameter sharing and model width reduction.

Cross-Layer Parameter Sharing (Sharing). The most widespread use of parameter sharing is in convolutional neural networks (Long et al., 2015). In recent years, it has also been investigated on NLP and NLU tasks. Among them, cross-layer parameter sharing can provide stronger nonlinearity along the model depth while keeping the parameters unchanged (Dehghani et al., 2019; Takase and Kiyono, 2021; Lan et al., 2020).

Reducing Model Width (Width).
Since model depth has been proven to be important in natural language processing tasks such as machine translation (Devlin et al., 2019; Liu et al., 2020; Wang et al., 2022), here we keep the depth unchanged and reduce the model width. ![2_image_2.png](2_image_2.png)

Reducing Model Width Is More Efficient and Quantization-Friendly. In the second row of Fig. 2, these two methods perform nearly the same. However, Table 2 shows that there is a large difference in FLOPs and MMI/O, which means reducing the model width is much more efficient in computation and memory. Since it is necessary to quantize these models for greater compression, we further compare the weight and output ranges of the two methods in Fig. 3. It can clearly be observed that models with parameter sharing have larger ranges of values for both weights and outputs, which is not quantization-friendly. The right figure also verifies this: when we apply post-training quantization (PTQ) (Sung et al., 2015; Banner et al., 2019; Choukroun et al., 2019) to these two methods, cross-layer parameter sharing performs poorly.

## 2.3 Deep Encoder And Shallow Decoder

Fig. 4 studies how different dimensions affect Transformer performance. In order to analyze the impact of each dimension separately, here we only change one specific dimension and keep the others unchanged. ![3_image_0.png](3_image_0.png) ![3_image_2.png](3_image_2.png) The point on the left of the Small Baseline represents scaling one dimension down, while the point on the right represents scaling one dimension up. We can see that Encoder Depth is more important than the other dimensions, which is consistent with related work on large-scale models (Wang et al., 2019, 2022). Based on the above discussion, we finally build a deep encoder and a shallow decoder, while reducing the vocabulary size and model width. Two MobileNMT models of different sizes are built here, and the detailed settings are shown in Table 3.

## 3 Training Strategies

## 3.1 Pre-Training With Knowledge Distillation

In order to improve the performance of compressed models, recent studies distill knowledge from a well-trained full-precision teacher network to a student network (Mishra and Marr, 2018) or directly use a quantized teacher network (Kim et al., 2019). Here we adopt sequence-level knowledge distillation because it has been shown to be effective for NMT tasks. ![3_image_1.png](3_image_1.png) ![3_image_3.png](3_image_3.png) The most basic full-precision Transformer-base model is adopted as the teacher.

## 3.2 Quantization

The process of quantizing a Transformer model can be divided into two steps: 1) constructing quantizers; 2) applying quantization-aware training (QAT) (Courbariaux et al., 2015) based on the pre-trained model obtained in Section 3.1.

FFN and Attention Quantizers. The original Transformer layer includes two types of sublayers: the attention sublayer and the feed-forward network (FFN) (Vaswani et al., 2017). Here we construct a quantizer for each linear layer in the attention and FFN, and quantize both the weights and activations as shown in Fig. 5. Since most computations are spent on matrix multiplication, all biases and residuals are kept in full precision for accuracy preservation. Since quantization will change the range of network outputs, here we add a learnable weight γ_i to the i-th sublayer to learn how to combine the output and the residual surrounding it.

Quantization-Aware Training.
Since MobileNMT only has 10M/20M parameters, quantizing such a small model inevitably results in performance loss, so we perform QAT after constructing the quantizers. ![4_image_0.png](4_image_0.png) Before QAT, we pre-compute all scaling parameters based on a forward run of the pre-trained distillation model obtained in Section 3.1. This takes nearly no additional cost, but provides a good initialization. For engineering development, we choose the uniform quantization scheme because it is hardware-friendly (Liu et al., 2022). For 8-bit quantization, we use element-wise quantization (Lee et al., 2021). For lower-bit quantization, such as 4-bit integers, we use row-wise quantization (Faraone et al., 2018).

## 3.3 Training Hyperparameters

Compared to the original Transformer model, the MobileNMT introduced in Section 2 has fewer parameters and a different architecture, so different training hyperparameters are needed.

Removing Dropout. Since our models have fewer parameters, we do not need to impose strong regularization on them, and we remove dropout from the entire model. The left part of Fig. 6 shows that removing dropout leads to an improvement of almost two BLEU points.

Larger Learning Rate. Here we follow the configuration provided in Wang et al. (2019) with a larger learning rate (0.01 → 0.02), a larger training batch (4096 → 8192), and more warmup steps (4000 → 8000). As shown in the right part of Fig. 6, this can improve model performance by more than 0.5 BLEU points (red bars). However, after PTQ, the model trained with a 0.02 learning rate performs significantly worse than the one trained with 0.01 (blue bars). As shown in Fig. 7, the network weights and outputs become larger when using a larger learning rate, which is not quantization-friendly.

L2 **Regularization.** To solve the above problem, this paper adopts L2 regularization applied to the weights (also called weight decay). It adds the squared magnitude of the network weights as a penalty term to the original loss function and encourages the weights to be smaller. ![4_image_1.png](4_image_1.png) As shown in the left two parts of Fig. 8, with L2 regularization, both the network weights and the output values become significantly smaller. The right part of Fig. 8 shows the performance of PTQ when applying different degrees of L2 regularization. The red and blue bars represent the model performance before and after PTQ. We can see that L2 regularization does improve model performance after PTQ.

## 4 The Engine

This section introduces the detailed implementation of our inference engine.

## 4.1 GEMM Optimization

According to statistics on the ONNX Runtime platform, general matrix multiplication (GEMM) accounts for 80.44% of the overall decoding time, demonstrating that optimizing GEMM is the key to decoding speed-up.
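Before detailing the three optimizations that follow, the core idea behind replacing FP32 GEMM with INT8 GEMM can be made concrete with a small NumPy sketch. This is only an assumption-level illustration of symmetric per-tensor quantization with int32 accumulation; it is not the ARM kernel actually used by the engine, and the scale choice and shapes are placeholders.

```python
import numpy as np

def int8_gemm(a_fp32, b_fp32):
    """Toy INT8 GEMM: quantize both operands with symmetric per-tensor
    scales, multiply in integer arithmetic (int32 accumulation), then
    dequantize the result back to float32."""
    scale_a = np.abs(a_fp32).max() / 127.0
    scale_b = np.abs(b_fp32).max() / 127.0
    a_q = np.clip(np.round(a_fp32 / scale_a), -127, 127).astype(np.int8)
    b_q = np.clip(np.round(b_fp32 / scale_b), -127, 127).astype(np.int8)
    # accumulate in int32 so the sums of int8 products cannot overflow
    acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
    return acc.astype(np.float32) * (scale_a * scale_b)

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 512)).astype(np.float32)    # activations
b = rng.standard_normal((512, 512)).astype(np.float32)  # weights
ref = a @ b
approx = int8_gemm(a, b)
print("max abs error:", np.abs(ref - approx).max())
```

The point of the sketch is only that the expensive inner product runs entirely on 8-bit operands; everything engine-specific (instruction-level parallelism, memory layout) is covered by the aspects described next.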
We optimize GEMM from three aspects: (1) Replacing 32-bit floating points with 8-bit integers in GEMM for model quantization. (2) The Arm instruction set we use allows multiple integers to be processed in parallel in a single instruction, which takes full advantage of the processor throughput. (3) To improve cache hits and register usage, we adjust the layout of the tensor in memory to ensure that the instruction reads data from contiguous space. Specifically, we convert each 4 × 4 block in the original layout into a contiguous vector of size 16. An example can be seen in Fig. 9.

## 4.2 Memory Optimization

As shown in Fig. 10 in Appendix C, except for GEMM, the other operations account for only 19.56% of the decoding time but are performed frequently, resulting in a large amount of temporary memory. To improve memory efficiency, we take two strategies: (1) To avoid frequent memory-mapped I/O and footprint, our engine integrates all adjacent fine-grained operations between two GEMM operations into one fused operation. (2) To save temporary memory, different operations are allowed to share the same space, provided that these operations do not interfere with each other at the same time. Through memory sharing, only two 8-bit memory buffers and one 32-bit buffer need to be pre-allocated in the Transformer encoder to hold intermediate results.

## 5 Experiments

## 5.1 Setups

We evaluate our methods on two WMT benchmarks. For the WMT14 En-De task (4.5M pairs), we choose *newstest-2013* as the validation set and newstest-2014 as the test set. For the WMT14 En-Fr task (35M pairs), we validate the system on the combination of newstest-2012 and newstest-2013, and test it on newstest-2014. Details of the architecture were introduced in Section 2, and training hyperparameters were introduced in Section 3. For the model compression ratio and decoding speed-up, we choose Transformer-big as the benchmark (1.0×). Other details of the experimental setup are introduced in Appendix D.

## 5.2 Results

Table 4 shows the results of different systems on WMT14 En-De and En-Fr.

| System | Params (M) | Size (MB) | Memory (MB) | Latency (ms) | BLEU (Test) | BLEU (Valid) |
|---|---|---|---|---|---|---|
| **WMT14 En-De** | | | | | | |
| Transformer-big | 218 (↑1×) | 872 (↑1×) | 2886.6 (↑1.0×) | 1281.5 (↑1.0×) | 28.36 (∆-0.00) | 26.75 (∆-0.00) |
| Transformer-base | 65 (↑3×) | 260 (↑3×) | 908.5 (↑3.2×) | 332.3 (↑3.9×) | 27.40 (∆-0.96) | 25.81 (∆-0.94) |
| Transformer-small | 22 (↑10×) | 88 (↑10×) | 759.5 (↑3.8×) | 158.0 (↑8.1×) | 24.20 (∆-4.61) | 23.91 (∆-2.84) |
| Transformer-tiny | 8 (↑27×) | 32 (↑27×) | 398.9 (↑7.2×) | 73.0 (↑17.6×) | 20.97 (∆-7.39) | 21.53 (∆-5.22) |
| MobileNMT-20MB | 20 (↑11×) | 20 (↑44×) | 26.0 (↑111.2×) | 46.3 (↑27.7×) | 27.09 (∆-1.27) | 25.72 (∆-1.03) |
| MobileNMT-10MB | 10 (↑22×) | 10 (↑87×) | 14.9 (↑194.0×) | 27.3 (↑47.0×) | 25.08 (∆-3.28) | 24.85 (∆-1.90) |
| **WMT14 En-Fr** | | | | | | |
| Transformer-big | 259 (↑1×) | 1036 (↑1×) | 2987.6 (↑1.0×) | 1345.6 (↑1.0×) | 39.05 (∆-0.00) | 44.12 (∆-0.00) |
| Transformer-base | 86 (↑3×) | 344 (↑3×) | 944.8 (↑3.2×) | 358.9 (↑3.7×) | 38.64 (∆-0.41) | 43.80 (∆-0.32) |
| Transformer-small | 22 (↑12×) | 88 (↑12×) | 782.3 (↑3.8×) | 178.5 (↑7.5×) | 34.76 (∆-4.29) | 40.01 (∆-4.11) |
| Transformer-tiny | 8 (↑32×) | 32 (↑32×) | 418.8 (↑7.1×) | 80.3 (↑16.8×) | 30.36 (∆-8.69) | 36.01 (∆-8.11) |
| MobileNMT-20MB | 20 (↑13×) | 20 (↑52×) | 26.7 (↑111.9×) | 53.7 (↑25.1×) | 37.67 (∆-1.38) | 43.81 (∆-0.31) |
| MobileNMT-10MB | 10 (↑26×) | 10 (↑104×) | 15.8 (↑189.1×) | 28.9 (↑46.6×) | 36.00 (∆-3.05) | 41.87 (∆-2.25) |

Table 5 shows the comparison of MobileNMT with other parameter-efficient methods based on Transformer.
MobileNMT-10MB and MobileNMT-20MB are two models we have built with different sizes, which are introduced in Table 3.

| System | Params (M) w/ emb | Params (M) w/o emb | FLOPs (G) | BLEU |
|---|---|---|---|---|
| Transformer-base | 65 | 44 | 1.9 | 27.40 |
| DeLighT | 37 | 31.4 | - | 27.60 |
| Universal Transformer | N/A | 7.4 | 1.9 | 26.20 |
| Lite Transformer (small) | N/A | 2.9 | 0.2 | 22.50 |
| Lite Transformer (medium) | N/A | 11.7 | 0.7 | 25.60 |
| Lite Transformer (big) | N/A | 17.3 | 1.0 | 26.50 |
| EdgeFormer w/o LA | N/A | 8.6 | 1.8 | 26.50 |
| EdgeFormer (Adapter-LA) | N/A | 9.4 | 1.8 | 26.90 |
| EdgeFormer (Prefix-LA) | N/A | 8.6 | 1.9 | 26.80 |
| MobileNMT-10MB | 10 | 7.9 | 0.3 | 25.08 |
| MobileNMT-20MB | 20 | 17.7 | 0.6 | 27.09 |

On WMT14 En-De, our MobileNMT-10MB requires only 4.6% of the parameters to maintain 88.4% of the performance of Transformer-big, while achieving an 87.2× compression ratio and a 47.0× speed-up. Our MobileNMT-20MB can maintain 95.5% of the performance of Transformer-big with only 9.2% of the parameters, while achieving a 43.6× compression ratio and a 27.7× speed-up. Experiments on En-Fr show similar results. In addition, thanks to the memory optimization strategies adopted in our engine, MobileNMT requires significantly less running memory than the other models (0.5%∼0.9% of Transformer-big). All these experiments demonstrate that MobileNMT is efficient in terms of parameters, computation, and memory, and can be easily deployed on mobile devices.

## 6 Conclusion

We propose MobileNMT, a Transformer-based machine translation system that can translate in 15MB and 30ms. It uses existing resources efficiently and can be easily deployed in real-world scenarios. We develop a mobile inference engine with GEMM and memory optimization, hoping that it can bridge the gap between theoretical research and real-world applications of efficient machine translation.

## Acknowledgments

This work was supported in part by the National Science Foundation of China (No. 62276056), the National Key R&D Program of China, the China HTRD Center Project (No. 2020AAA0107904), the Natural Science Foundation of Liaoning Province of China (2022-KF-16-01), the Yunnan Provincial Major Science and Technology Special Plan Projects (No. 202103AA080015), the Fundamental Research Funds for the Central Universities (Nos. N2216016, N2216001, and N2216002), and the Program of Introducing Talents of Discipline to Universities, Plan 111 (No. B16009).

## Limitations

Multilingual Translation. Here we mainly discuss the design principles of efficient architectures for bilingual machine translation. Compared with bilingual translation, multilingual translation tasks require significantly more parameters and computation to perform well, and different model scales may lead to different design considerations. We leave this for future exploration.

Knowledge Distillation. As a small model that requires only 10MB/20MB of storage, MobileNMT will inevitably suffer from performance loss compared to other Transformer-based models. To reduce this performance loss, we adopt knowledge distillation and choose the Transformer-base model as the teacher. From a training efficiency perspective, although the teacher model can help MobileNMT improve performance, it also introduces additional training costs.

Compatibility. Our inference engine currently only provides an implementation for ARM CPUs. We will make it available for other AI accelerators (such as NPUs) on mobile devices in the future.
## References Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussà, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, and Marcos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In Proceedings of the Sixth Conference on Machine Translation, WMT@EMNLP 2021, Online Event, November 1011, 2021, pages 1–88. Association for Computational Linguistics. Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. *CoRR*, abs/1607.06450. Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Ron Banner, Yury Nahshan, and Daniel Soudry. 2019. Post training 4-bit quantization of convolutional networks for rapid-deployment. In *Advances in Neural* Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7948–7956. Yoni Choukroun, Eli Kravchik, Fan Yang, and Pavel Kisilev. 2019. Low-bit quantization of neural networks for efficient inference. In *2019 IEEE/CVF* International Conference on Computer Vision Workshops, ICCV Workshops 2019, Seoul, Korea (South), October 27-28, 2019, pages 3009–3018. IEEE. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. Binaryconnect: Training deep neural networks with binary weights during propagations. In *Advances in Neural Information Processing* Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 3123–3131. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2019. Universal transformers. In *7th International Conference on* Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Shuoyang Ding, Adithya Renduchintala, and Kevin Duh. 2019. A call for prudent choice of subword merge operations in neural machine translation. In *Proceedings of Machine Translation Summit XVII Volume 1:* Research Track, MTSummit 2019, Dublin, Ireland, August 19-23, 2019, pages 204–213. European Association for Machine Translation. Julian Faraone, Nicholas J. Fraser, Michaela Blott, and Philip H. W. Leong. 2018. SYQ: learning symmetric quantization for efficient deep neural networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 4300–4309. Computer Vision Foundation / IEEE Computer Society. 
Tao Ge, Si-Qing Chen, and Furu Wei. 2022. Edgeformer: A parameter-efficient transformer for ondevice seq2seq generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 10786–10798. Association for Computational Linguistics. Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. 2017. Efficient softmax approximation for gpus. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of *Proceedings of Machine* Learning Research, pages 1302–1310. PMLR. Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized neural networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4107–4115. Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 2704–2713. Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, and Nojun Kwak. 2019. QKD: quantization-aware knowledge distillation. *CoRR*, abs/1911.12491. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Junghyup Lee, Dohyung Kim, and Bumsub Ham. 2021. Network quantization with element-wise gradient scaling. In *IEEE Conference on Computer Vision* and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 6448–6457. Computer Vision Foundation / IEEE. Ye Lin, Yanyang Li, Tong Xiao, and Jingbo Zhu. 2021. Bag of tricks for optimizing transformer efficiency. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 4227–4233. Association for Computational Linguistics. Xiaodong Liu, Kevin Duh, Liyuan Liu, and Jianfeng Gao. 2020. Very deep transformers for neural machine translation. *CoRR*, abs/2008.07772. Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P. Xing, and Zhiqiang Shen. 2022. Nonuniform-touniform quantization: Towards accurate quantization via generalized straight-through estimation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 4932–4942. IEEE. Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. 
In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3431–3440. IEEE Computer Society. Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. Delight: Deep and light-weight transformer. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David García, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed precision training. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Asit K. Mishra and Debbie Marr. 2018. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. In *6th International* Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Toan Q. Nguyen and Julian Salazar. 2019. Transformers without tears: Improving the normalization of selfattention. *CoRR*, abs/1910.05895. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 - November 1, 2018, pages 1–9. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, WMT 2018, Belgium, Brussels, October 31 - November 1, 2018, pages 186–191. Association for Computational Linguistics. Jerry Quinn and Miguel Ballesteros. 2018. Pieces of eight: 8-bit neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 3 (Industry Papers), pages 114–120. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. Wonyong Sung, Sungho Shin, and Kyuyeon Hwang. 2015. Resiliency of deep neural networks under quantization. *CoRR*, abs/1511.06488. Sho Takase and Shun Kiyono. 2021. Lessons on parameter sharing across layers in transformers. *CoRR*, abs/2104.06022. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008. Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. 2022. Deepnet: Scaling transformers to 1, 000 layers. *CoRR*, abs/2203.00555. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1810–1822. 
Association for Computational Linguistics. Ephrem Wu. 2020. Learning accurate integer transformer machine-translation models. *CoRR*, abs/2001.00926. Zhanghao Wu, Zhijian Liu, Ji Lin, Yujun Lin, and Song Han. 2020. Lite transformer with long-short range attention. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tie-Yan Liu. 2020. On layer normalization in the transformer architecture. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 10524–10533. PMLR. Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, and Lei Li. 2021. Vocabulary learning via optimal transport for neural machine translation. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 7361–7373. Association for Computational Linguistics. ## A Transformer Architecture We chose Transformer for study because it is one of the most successful neural models for machine translation. It consists of a N-layer encoder and a M-layer decoder, where N=M=6 in the original Transformer-base and Transformer-big. Each encoder layer consists of two sublayers, including the self-attention and feed-forward network (FFN). Each decoder layer has an additional crossattention sublayer to bridge the encoder and decoder. The self-attention takes the output X of the previous sublayer as its input. The cross-attention is similar to the self-attention, except that it takes the encoder output as an additional input. Both types of attention first compute the attention distribution Ax and then average X by Ax. We denote the transformation matrices of Q, K, V as Wq, Wk, Wv, the subsequent transformation matrices as Wo, and the attention as Ya = Attn(X), then: $$\begin{array}{c}{{-}}\\ {{A_{x}=\mathrm{SoftMax}(\frac{X W_{q}W_{k}^{T}X^{T}}{\sqrt{d}})}}\\ {{Y_{a}=A_{x}X W_{v}W_{o}}}\end{array}$$ The FFN applies non-linear transformation to its input X. We denote the FFN as Yf = FFN(X): Yf = ReLU(XW1 + b1)W2 + b2 (3) where W1 and b1 denote the weight and bias of the first linear transformation, W2 and b2 are parameters of the second transformation. Here we preprocess each sublayer input by the layer normalization (Ba et al., 2016). All sublayers are coupled with the residual connection (He et al., 2016a). ## B Ptq And Qat As an appealing solution to model compression, quantization enables the model to use lower-bit values (such as 8-bit integer) to compute faster and consume less storage space (Hubara et al., 2016; Micikevicius et al., 2018; Quinn and Ballesteros, 2018; Jacob et al., 2018). Post-Training Quantization (PTQ) can be seen as the basis for Quantization Aware Training (QAT), it adds quantization nodes to a well-trained floatingpoint model. 
To quantize a floating-point tensor r to an n-bit tensor, a scale s is introduced to map these two types of values (Wu, 2020):

$$s=\frac{\max(r)-\min(r)}{2^{n}-1}\qquad(4)$$

To obtain faster computation, both the weights and activations are quantized to n bits. Suppose r_m = min(r); the quantization function is:

$$Q(r)=\lfloor(r-r_{m})/s\rceil\times s+r_{m}\qquad(5)$$

where ⌊·⌉ denotes rounding to the nearest integer. However, in PTQ, applying quantization directly to the floating-point network will result in significant performance losses. Based on PTQ, QAT simulates the behavior of n-bit computation by minimizing quantization errors during training, which helps the model achieve higher accuracy. In addition to the learnable weights of the model itself, s is also learnable.

| System | Params (M) | Size (MB) | BLEU |
|--------------------|--------------|-------------|--------|
| Transformer-base | 65 | 260 | 27.40 |
| + Reducing Vocab | 48 | 192 | 26.20 |
| + Reducing Width | 10 | 40 | 22.01 |
| + Other Dimensions | 10 | 40 | 22.54 |
| + Distillation | 10 | 40 | 23.77 |
| + Quantization | 10 | 10 | 23.76 |
| + Hyperparameters | 10 | 10 | 25.48 |
| + Greedy Search | 10 | 10 | 25.08 |

## C Operations Except GEMM

Since general matrix multiplication (GEMM) accounts for 80.44% of the overall decoding time, we concluded in Section 4 that optimizing GEMM is the key to decoding speed-up. As for the operations other than GEMM, Fig. 10 shows the proportion of running time in the decoding process. The corresponding data is measured in 32-bit floating-point format on the ONNX Runtime.

## D Setups

All sentences were segmented into sequences of sub-word units (Sennrich et al., 2016). In the implementation, we adopt layer normalization before the sublayers (Baevski and Auli, 2019; Xiong et al., 2020; Nguyen and Salazar, 2019). Most previous work only shared the source and target vocabularies on the En-De task. In our MobileNMT, both En-De and En-Fr adopt shared vocabularies for efficiency reasons, which leads to a larger compression gain at the expense of performance. We test on the model ensemble obtained by averaging the last 5 checkpoints and report SacreBLEU scores (Post, 2018). ![10_image_0.png](10_image_0.png)

| System | Params (M) | Bits (W-E-A) | Size (MB) | BLEU |
|---|---|---|---|---|
| Transformer-base | 65 | 32-32-32 | 260 | 27.40 |
| MobileNMT-10MB | 10 | 32-32-32 | 40 | 25.79 |
| | 10 | 8-8-8 | 10 | 25.08 |
| | 10 | 4-8-8 | 5 | 25.43 |
| | 10 | 3-8-8 | 3.75 | 24.09 |
| | 10 | 2-8-8 | 2.5 | 21.25 |
| MobileNMT-20MB | 20 | 32-32-32 | 80 | 27.30 |
| | 20 | 8-8-8 | 20 | 27.09 |
| | 20 | 4-8-8 | 10 | 26.96 |
| | 20 | 3-8-8 | 7.5 | 26.23 |
| | 20 | 2-8-8 | 5 | 24.33 |

For the experiments of MobileNMT in Table 4, we use the greedy search algorithm in our engine. Compared with beam search, greedy search leads to more efficient decoding. For the experiments of TFLite in Table 4, since TFLite expands all loop subgraphs, it is hard to support the entire decoding process (30 steps) of the Transformer-big/base model with limited memory (6GB on the Google Pixel 4). For the memory of these two models, we only record the running memory of 1 step. For the corresponding latencies, we estimate the 30-step latency from the 1-step and 5-step latencies.
It is worth noting that, except for the memory and latency of Transformer-big/base, all other data statistics are measured in the real world.

## E Analysis

## E.1 Ablation Study

Table 6 summarizes how each part of Section 2 and Section 3 affects the overall performance. Each row in Table 6 represents the result of applying the current part to the system obtained in the previous row. When the model parameters are reduced from 65M to 10M, performance drops from 27.40 to 22.54, which illustrates the importance of network parameters for model capacity. We observe that both knowledge distillation and tuning the hyperparameters bring significant performance improvements (from 22.54 to 25.48), which effectively compensate for the performance loss caused by the parameter reduction.

## E.2 Quantization Study

Table 7 studies how performance changes when quantizing the model to lower bits (i.e., 4-bit, 3-bit, and 2-bit). As introduced in Section 3.2, for 8-bit quantization, we use the element-wise quantization method (Lee et al., 2021). For lower-bit quantization, we use row-wise quantization for accuracy preservation (Faraone et al., 2018). As shown in Table 7, 8-bit and 4-bit quantization have almost no negative effect on model performance. When quantizing the model to lower bits, such as 3-bit and 2-bit integers, model performance drops dramatically.
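As a companion to the quantization study above, the uniform min–max quantizer of Eqs. (4)–(5) in Appendix B can be sketched in a few lines of NumPy. This is a hedged illustration of the per-tensor case only; the element-wise and row-wise schemes cited in Section 3.2 are not reproduced here, and the tensor shape and bit-widths are arbitrary.

```python
import numpy as np

def uniform_quantize(r, n_bits=8):
    """Uniform min-max quantization following Eqs. (4)-(5):
    s = (max(r) - min(r)) / (2^n - 1),  Q(r) = round((r - r_min)/s) * s + r_min."""
    r_min, r_max = float(r.min()), float(r.max())
    s = (r_max - r_min) / (2 ** n_bits - 1)
    q_int = np.round((r - r_min) / s)          # integer codes in [0, 2^n - 1]
    return q_int * s + r_min, q_int.astype(np.int32), s

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # a stand-in weight tensor
for n in (8, 4, 3, 2):
    w_hat, codes, s = uniform_quantize(w, n)
    err = np.abs(w - w_hat).mean()
    print(f"{n}-bit: scale={s:.4f}, mean abs error={err:.4f}")
```

Running the sketch at 8, 4, 3, and 2 bits mirrors the qualitative trend of the study: the reconstruction error of the quantized tensor grows sharply once the bit-width drops below 4.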
xiao-2023-multi
Multi-doc Hybrid Summarization via Salient Representation Learning
https://aclanthology.org/2023.acl-industry.37
Multi-document summarization has been gaining increasing attention recently and serves as an invaluable tool to obtain key facts from a large information pool. In this paper, we propose a multi-document hybrid summarization approach, which simultaneously generates a human-readable summary and extracts the corresponding key evidence based on multi-doc inputs. To fulfill that purpose, we crafted a salient representation learning method to induce latent salient features, which are effective for joint evidence extraction and summary generation. In order to train this model, we conducted multi-task learning to optimize a composite loss, constructed over extractive and abstractive sub-components in a hierarchical way. We implemented the system based on a ubiquitously adopted transformer architecture and conducted experimental studies on multiple datasets across two domains, achieving superior performance over the baselines.
## Multi-Doc Hybrid Summarization Via Salient Representation Learning Min Xiao Microsoft mixia@microsoft.com ## Abstract Multi-document summarization is gaining more and more attention recently and serves as an invaluable tool to obtain key facts among a large information pool. In this paper, we proposed a multi-doc hybrid *summarization* approach, which simultaneously generates a human-readable summary and extracts corresponding key evidence giving a multi-doc input. To fulfill that purpose, we crafted a salient representation learning method to induce latent noteworthy features, which are effective for *joint* evidence extraction and summary generation. In order to train that model, we performed multi-task learning to optimize a composite loss, which is hierarchically constructed over the extractive and abstractive sub-components. We implemented such a fine system based on a ubiquitously-adopted transformer architecture and conducted experiments on a variety of datasets across two domains, achieving superior performance than the baselines. ## 1 Introduction Multi-document summarization (MDS) aims to produce a concise summary of non-redundant salient facts based on multiple source documents under the same topic (Cao et al., 2017; Yasunaga et al., 2017), which is prevailingly useful in many application domains such as news-wire article summarization (Fabbri et al., 2019; Gu et al., 2020; Lee et al., 2022), scientific literature comparison (Lu et al., 2020; Shen et al., 2022a), civil rights lawsuits summarization (Shen et al., 2022b), and many others (Bražinskas et al., 2021; DeYoung et al., 2021). Two main principled approaches are developed accordingly, multi-doc extractive methods (Cao et al., 2015a,b; Yin and Pei, 2015; Zhang et al., 2017) and multi-doc abstractive methods (Bražinskas et al., 2020; Amplayo and Lapata, 2021; Liu and Liu, 2021; Nan et al., 2021a; Pang et al., 2021). Extractive multi-doc approaches intend to directly ![0_image_0.png](0_image_0.png) extract non-redundant salient information from the original source (Mao et al., 2020; Wang et al., 2020; Parnell et al., 2022). It is usually carried with two stages, salience prediction and redundancy detection, where the former is trained with (pseudo- )salience labels (Cao et al., 2015b; Mao et al., 2020; Wang et al., 2020), and the latter is employed with ranking/selection tricks such as Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998; Zhang et al., 2017; Mao et al., 2020). Nonetheless, abstractive multi-doc approaches suggest paraphrasing the input to rewrite a smooth summary (Fabbri et al., 2019; Lewis et al., 2020; Ernst et al., 2022), where recent works are usually designed with dedicated components such as special attentions/encoders (Fabbri et al., 2019; Zhang et al., 379 ![1_image_0.png](1_image_0.png) 2020a; Pasunuru et al., 2021; Parnell et al., 2022; Song et al., 2022; Xiao et al., 2022). In this paper, we advocate for a *hybrid* MDS system, which produces a paraphrased summary, achieved by the abstractive prediction module, attached with corresponding salient evidence, obtained from the *joint* extractive prediction module, where both modules are coupled and learned together, within an encoder-decoder-architectured transformer (Vaswani et al., 2017; Lewis et al., 2020; Xiao et al., 2022). 
This is motivated by real-world business scenarios where enterprise customers specifically request evidence extraction and alignment for abstractive summaries, which, as far as we know, is not fully supported by most commercial AI platforms. Please refer to Figure 1 for one such example. In order to train such a hybrid system, we conducted multi-task learning to jointly optimize an extraction loss and a generation loss. To align those two prediction tasks (i.e., summary generation and evidence extraction) appropriately, we exploited a salience attention with a gating mechanism to effectively induce salient features. We present the overall architecture in Figure 2. In brief, given a multi-doc input, we feed it to our enhanced transformer for salient representation learning to jointly predict a final summary and extract notable evidence. Empirically, we carried out extensive quantitative evaluations across multiple datasets on various metrics, such as ROUGE scores (Lin, 2004; Peng et al., 2021), perplexity (Jelinek et al., 1977), and BERTScore (Zhang et al., 2020b), besides a manual qualitative evaluation with case studies (Novikova et al., 2017).

In summary, our contributions are twofold: 1) We propose a novel multi-doc hybrid summarization system to generate linguistically smooth summaries and to extract the corresponding key evidence based on multi-doc inputs; 2) We conduct thorough empirical studies to quantitatively validate the effectiveness of the proposed approach and to manually verify the quality of the extracted salient evidence.

## 2 Proposed Approach

Let x be a multi-doc input with n documents on the same topic, $x = \{d_i\}_{i=1}^{n}$, and let each document be a sequence of sentences, *e.g.*, $d_i = \langle s^{i}_{1}, \ldots, s^{i}_{j}, \ldots \rangle$, where $s^{i}_{j}$ is the j-th sentence of the i-th document $d_i$. Similarly, each sentence is a sequence of words/tokens, *e.g.*, $s^{i}_{j} = \langle w^{i,j}_{1}, \ldots, w^{i,j}_{k}, \ldots \rangle$, where $w^{i,j}_{k}$ indicates the k-th word/token of sentence $s^{i}_{j}$. In this work, we develop a prediction machine $\mathcal{M}: \mathcal{X} \rightarrow \mathcal{Y} \times \mathcal{S} \times \mathcal{Z}$ to map the input x to a triplet output such that $\mathcal{M}(x) = \big(\mathbf{y}, \{s_{j'}\}_{j'=1}^{q}, \{z_{j'}\}_{j'=1}^{q}\big)$, of which $\mathbf{y}$ is an abstractive summary, $\{s_{j'}\}_{j'=1}^{q}$ is an evidence list of q extracted salient sentences, and $z_{j'}$ is the corresponding salience score for $s_{j'}$.

Based on this hierarchical input, we first flatten the whole word sequences and insert a document-separator token, <d>, utilized to capture global interactions (Xiao et al., 2022), along with a sentence-separator token, <s>, used to assist sentence salience extraction. We then feed that flat sequence to our enhanced transformer (Figure 2) for joint salient representation learning and multi-task training/prediction.

## 2.1 Salient Representation Learning

Recently, encoder-decoder-based transformers have achieved great success in the literature on a variety of generation tasks (Lewis et al., 2020), such as machine translation and summarization. Hence, our hybrid summarization system performs salient representation learning by enhancing such a transformer. Considering that we are tackling multi-doc inputs, where each instance (multiple documents) could be *much longer* than a single sentence or a short paragraph, we choose the Longformer-Encoder-Decoder (LED) (Beltagy et al., 2020) as our backbone to implement the enhanced transformer, since it has already demonstrated its capability to efficiently handle such longer sequences.
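To make the flattening step above concrete, here is a minimal sketch of linearizing a multi-doc instance with <d> and <s> separator tokens. The whitespace tokenizer and the exact placement of the separators are simplifying assumptions for illustration, not the paper's actual preprocessing pipeline.

```python
from typing import List, Tuple

def flatten_multidoc(docs: List[List[str]]) -> Tuple[List[str], List[int]]:
    """Flatten documents (each a list of sentences) into one token sequence.
    A <d> token is inserted before each document and an <s> token before each
    sentence; the positions of the <s> tokens are returned so that sentence-level
    salience scores can later be read off the encoder output at those positions."""
    tokens, sent_positions = [], []
    for doc in docs:
        tokens.append("<d>")                    # document separator
        for sent in doc:
            sent_positions.append(len(tokens))  # index of this sentence's <s>
            tokens.append("<s>")                # sentence separator
            tokens.extend(sent.split())         # naive whitespace tokenization
    return tokens, sent_positions

docs = [
    ["A crash occurred on I-78 .", "Two people were killed ."],
    ["The couple was driving to their wedding ."],
]
tokens, sent_positions = flatten_multidoc(docs)
print(tokens[:12])
print("sentence-marker positions:", sent_positions)
```

The recorded <s> positions correspond to the sentence candidates whose encoder states are selected by the filtering operator described in the next paragraphs.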
In general, our enhanced components consist of a *salient evidence extraction* module, which is constructed on top of the transformer encoder, and a *dynamic salience integration* module built across the encoder and decoder. Let $\mathbf{h_e} \in \mathbb{R}^{l \times m}$ be the hidden output of the transformer encoder, and $\mathcal{P}: \mathbb{R}^{l \times \cdot} \rightarrow \mathbb{R}^{l_s \times \cdot}$ be the sentence selection/filtering operator, where l denotes the whole sequence length and $l_s$ denotes the number of sentence candidates. We first use a fully-connected feed-forward (FF) layer with dropout, followed by sentence candidate selection, to predict salience scores such that¹

$$\hat{\mathbf{z}}=\mathcal{P}\left(\mathrm{FF}\left(\mathrm{Dropout}\left(\mathbf{h_{e}}\right)\right)\right)\in\mathbb{R}^{l_{s}}.\qquad(1)$$

Then, we use the predicted salience scores $\hat{\mathbf{z}}$ to induce an attention weight for each sentence candidate and apply them to the sentences' hidden representations for self-aggregation. In particular, we adopt a salience attention to produce a *global* salience vector $\overline{\mathbf{h_s}}$ such that

$$\overline{\mathbf{h_{s}}}=\sum_{j=1}^{l_{s}}a_{j}\,\mathcal{P}\left(\mathbf{h_{e}}\right)_{j}\in\mathbb{R}^{m},\qquad(2)$$

where $a_j=\frac{\exp(\hat{z}_j)}{\sum_{j'}\exp(\hat{z}_{j'})}$ is the attention weight for the j-th sentence. Further, we connect this predicted global salience feature with the decoder via dynamic integration during auto-regressive text generation. Specifically, we employ a gating mechanism (Zhou et al., 2016) to inject the global salience features. Let $\mathbf{h_d} \in \mathbb{R}^{m}$ be the hidden output of the transformer decoder at a particular step. With this gating mechanism, the model is able to dynamically select salience representations, such that

$$\begin{aligned}\mathbf{o_{s}}&=\overline{\mathbf{h_{s}}}\odot\mathbf{g}+\mathbf{h_{d}}\odot(1-\mathbf{g})\in\mathbb{R}^{m},\\ \mathbf{g}&=\mathrm{Sigmoid}\left(\mathrm{FF}\left(\mathbf{h_{g}}\right)\right),\\ \mathbf{h_{g}}&=\mathrm{GELU}\left(\mathrm{Dropout}\left(\mathrm{FF}\left(\overline{\mathbf{h_{s}}}\oplus\mathbf{h_{d}}\right)\right)\right),\end{aligned}\qquad(3)$$

where ⊙ is the element-wise product, ⊕ denotes concatenation, and Sigmoid(·) and GELU(·) are the nonlinear activation functions (Hendrycks and Gimpel, 2016; Mishkin and Matas, 2016). We then use this dynamically gated salience representation $\mathbf{o_s}$ to generate a summary $\hat{\mathbf{y}}$. Note that the predicted salience scores $\hat{\mathbf{z}}$ are sorted to retrieve the salient sentences/evidence ($\hat{\mathbf{s}}$) at inference time.

## 2.2 Multi-Task Learning And Prediction

Due to the dual functionalities, we conduct a joint optimization w.r.t. the salient evidence extraction loss and the abstractive summary generation loss, such that

$$\mathcal{I}=\ell_{1}\left(\mathbf{y},\hat{\mathbf{y}}\right)+\alpha\,\ell_{2}\left(\mathbf{z},\hat{\mathbf{z}}\right),$$

where α is a trade-off parameter, $\ell_1(\cdot)$ denotes the abstractive loss, *e.g.*, the token-level cross-entropy loss,

¹For notational simplicity, we do not differentiate row and column vectors, assuming that their dimensionalities can be inferred from the context.
| Source | Target | Source → Target | | | | | | | | | | |----------|------------|-------------------|------|--------|-------|-------|-------|----------|---------|-------------|-----------------------| | Domain | Dataset | Samples | Docs | Words | Sents | Words | Sents | Coverage | Density | Compression | Ref | | News | MULTI-NEWS | 56,216 | 2.7 | 2164.5 | 84.3 | 264.0 | 10.0 | 0.83 | 5.01 | 8.18 | (Fabbri et al., 2019) | Table 1: The dataset statistics on the news domain, including document/sentence/word counts and source-to-target coverage/density/compression ratios. | Test | Lexical | Semantic | Fluency | Pred Summary | | | | | | | |---------|------------|------------|-----------|----------------|---------|---------|--------|-------|-------|-------| | Domain | Dataset | Samples | Method | R-1f1 ↑ | R-2f1 ↑ | R-Lf1 ↑ | BSf1 ↑ | PP↓ | Words | Sents | | News | MULTI-NEWS | 5,622 | BART | 49.75 | 20.00 | 24.76 | 35.61 | 13.22 | 263.4 | 9.6 | | PRIMERA | 47.53 | 19.58 | 24.95 | 35.10 | 13.89 | 207.0 | 8.1 | | | | | MHS-SRL | 49.81 | 20.68 | 25.67 | 36.24 | 12.42 | 252.4 | 9.8 | | | | Table 2: Empirical results on the news domain, where we underlined the numbers if the proposed system outputs the baselines, and used the **bold-font** to highlight the best number for each metric. | Split | | | | | | |---------|------------|---------|--------|-------|-------| | Domain | Dataset | Samples | Train | Valid | Test | | NEWS | MULTI-NEWS | 56,216 | 44,972 | 5,622 | 5,622 | Table 3: The dataset-split stats on the news domain. and `2(·) denotes the extraction loss, *e.g.*, salience score error such as MSE. As we can see, this paradigm of training requires labeled information for all candidate sentences. Hence, we employed a weakly supervised method to construct salience scores. Specially, we designed a hierarchical scoring method by first calculating the lexical coverages, *aka.*, ROUGE-1, ROUGE-2, ROUGE-L (Lin, 2004), against the same gold summary and averaging them based on their F1 scores. Next, we computed a factuality-style score, NERprecision (Nan et al., 2021b) and averaged them to produce the pseudo salience labels for supervised training. In summary, our systems share the similar functionality, respectively, with the extractive and abstractive method. Nevertheless, we performed them in a *joint hierarchical* way. Besides, our salience labels are more robust as we explored a composite method to combine lexical coverage measures (Lin, 2004) and a factuality-style measure (Nan et al., 2021b). Moreover, we used a dynamic gating mechanism to effectively integrate global salience during summary generation, enabling better alignment between two prediction tasks (*aka.*, summary generation vs. evidence extraction). ## 3 Experiments On The News Domain In this section, we present empirical investigations for the proposed summarization system on news domain to justify its effectiveness. ## 3.1 Experimental Setup The MULTI-NEWS dataset2(Fabbri et al., 2019) contains human-written summaries from the newser.com3site by professional editors, which are based on/linked with multiple source articles.We calculated important stats for this dataset in Table 1 and Table 3. In particular, we used the spaCy library4 with the en_core_web_sm model to calculate word and sentence counts and used the Grusky et al. (2018)'s original tool to compute the coverage/density/compression ratios. 
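Before turning to the baselines, the salience scoring, attention pooling, and gated fusion of Eqs. (1)–(3) can be sketched with plain NumPy. The dimensions, single-layer feed-forward maps, and random weights below are illustrative assumptions (dropout is omitted); this is not the trained MHS-SRL model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gelu(x):
    # tanh approximation of GELU (Hendrycks and Gimpel, 2016)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

rng = np.random.default_rng(0)
m, l_s = 16, 5                            # hidden size, number of sentence candidates
h_sent = rng.standard_normal((l_s, m))    # P(h_e): encoder states at the <s> positions
h_d = rng.standard_normal(m)              # decoder hidden state at one generation step

# Eq. (1): a feed-forward scorer predicts one salience score per sentence candidate
w_score = rng.standard_normal(m)
z_hat = h_sent @ w_score                  # shape (l_s,)

# Eq. (2): softmax over salience scores -> global salience vector
a = np.exp(z_hat - z_hat.max()); a /= a.sum()
h_s_bar = a @ h_sent                      # shape (m,)

# Eq. (3): gated fusion of the global salience vector and the decoder state
w_g1 = rng.standard_normal((2 * m, m))
w_g2 = rng.standard_normal((m, m))
h_g = gelu(np.concatenate([h_s_bar, h_d]) @ w_g1)
g = sigmoid(h_g @ w_g2)
o_s = h_s_bar * g + h_d * (1.0 - g)       # representation used to predict the next token

print("salience weights:", np.round(a, 3))
print("fused state shape:", o_s.shape)
```

At inference time, sorting `z_hat` and keeping the top-scored sentences yields the extracted evidence, while `o_s` drives summary generation, which is exactly the dual output described in Section 2.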
We compare our proposed system, **MHS-SRL** with two baselines: BART (Lewis et al., 2020), which is widely used for summarization tasks, and P**RIMERA** (Xiao et al., 2022), which is tailored to MDS systems. We set the batch size to be 8 and learning rate to be 3e−5, choose α from {0.1, 1, 10, 102} based on the validation performance (dev/validation set), and used the Huggingface tool5for implementations. ## 3.2 Main Results For each method, we finetuned it on the target dataset. We compare the proposed summarization system to the corresponding baselines with various metrics in Table 2, including the ROUGE scores (Lin, 2004) to estimate the lexical coverage, the 2https://github.com/Alex-Fabbri/Multi-News 3https://www.newser.com/ 4https://spacy.io/ 5https://huggingface.co | Gold Summary | - Kathryn Schurtz and Joseph Kearney were on their way to their wedding when the unthinkable happened Wednesday. The New Jersey couple was driving to Pittsburgh for the nuptials when a crash occurred ahead of them on I-78 in Windsor Township, Pa. Traffic slowed, but a tractortrailer behind the couple wasn't able to stop in time and hit their car, pushing it into the back of another tractor-trailer. That crash caused a fiery chain reaction ultimately involving the couples´ car, which caught on fire, and five tractor-trailers, the Reading Eagle reports. Three other people were injured, NBC Philadelphia reports. Now, instead of a wedding, family will be attending funeral services. Schurtz, 35, "will be remembered for her voracious appetite for reading, love of cooking, and trailblazing new adventures with Joseph," says her obituary. Born and raised in New Jersey, she graduated from George Washington University before getting her MBA at Notre Dame. She was working for Oracle Data Cloud in New York City, and living back in New Jersey, when she was killed. Little information was available on her fiance, but NJ.com reports that per his Facebook profile, Kearney was originally from Pittsburgh. (This couple was married just one and a half hours before tragedy struck.) 
| |--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Generated Summary | - A New Jersey couple was on their way to their dream wedding when they got caught up in a deadly chain-reaction crash on a busy Pennsylvania highway on the day before. Kathryn Schurtz, 35, and Joseph Kearney, 34, of Jersey City were driving to their wedding in Pittsburgh on Wednesday when they were hit from behind by a tractor-trailer and pushed into the back of another truck, which then pushed them into another truck and set them on fire, reports the Star-Ledger. The two were killed and three others were injured in the second crash on I-78 in Berks County, Pa., around 2:30pm, reports WNEP. Police say traffic was heavy in both westbound and eastbound lanes of the highway at the time of the crash, which occurred after an earlier crash. The crash slowed and stopped traffic for hours, and the highway was shut down for hours after the first crash. One lane of westbound traffic opened up around 7pm, but the two westbound lanes were still shut down as of Thursday morning. A family friend tells WFMZ that the couple, who had planned to marry in Pittsburgh, had been planning a wedding for a year and a half. "They were just so excited to get married," the friend says. "It was a dream come true." A GoFundMe page has been set up for the family, which has raised more than $10,000 so far. | | Extracted Evidence | A New Jersey couple on their way to get married in Pittsburgh were killed in a fiery chain-reaction crash last week on Interstate 78 in Berks County, Pennsylvania. (0.1461) Kathryn Schurtz, 35, and her fiancé, Joseph Kearney, were driving on I-78 westbound in Windsor Township, Berks County, on Wednesday around 2:30 p.m. when the accident occurred. (0.1324) What to Know Loved ones are mourning a New Jersey couple who died in a chain reaction crash on I-78 in Berks County while driving to their wedding. 
(0.1251) Last week the couple was on their way to Pittsburgh the day before their dream wedding, when they got caught up in a tractor trailer crash in Berks County. (0.1205) Loved ones are mourning a New Jersey couple who died in a chain reaction crash on I-78 while on the way to their wedding. (0.1172) Kathryn M. Schurtz, 35, and Joseph D. Kearney, both of Jersey City, died in the collision Wednesday afternoon in Windsor Township. (0.1105) Kathryn Schurtz, 35, and her fiancé, Joseph Kearney, both died in the accident which involved their vehicle and five tractor trailers. (0.1099) Two people were killed and three were injured in a fiery chain-reaction crash on Interstate 78 in Windsor Township on Wednesday. (0.1080) It struck the couple's vehicle, which was then pushed into the back of another tractor-trailer and set off a chain reaction crash that included three more tractor-trailers. (0.1079) The couple was on the way to their wedding in Pittsburgh, Pennsylvania, at the time of the crash. (0.1035) |

Table 4: Examples on the MULTI-NEWS dataset (1).

BERTScore (Zhang et al., 2020b) for semantic similarity6, and the perplexity (Jelinek et al., 1977) for language smoothness. In addition, we present the word/sentence stats for the generated summaries for reference. As we can see, the proposed system performs much better than the corresponding baselines on all metrics by exploiting those effective salient features. Based on the *ablation study*, without the salient representation learning with the extraction loss, our system would drop the average ROUGE score by 4.26% and the BERTScore by 3.15%. To further justify this, we conduct a follow-on experiment by adding such salient representation learning with the extractive loss to the BART backbone, and we observed a similar performance gain, i.e., uplifting the average ROUGE score by 2.49% and the BERTScore by 1.57%.

6We used the deberta model with baseline rescaling, which correlates better with human judgment, based on the authors' suggestions.

| Domain | Dataset | Samples | Source Docs | Source Words | Source Sents | Target Words | Target Sents | Coverage | Density | Compression | Ref |
|--------|---------|---------|-------------|--------------|--------------|--------------|--------------|----------|---------|-------------|-----|
| LEGAL | MULTI-LEXSUM D→T | 1,603 | 10.7 | 125437.7 | 4591.0 | 24.7 | 1.4 | 0.85 | 1.88 | 5182.77 | (Shen et al., 2022b) |
| | MULTI-LEXSUM D→S | 3,138 | 10.3 | 105255.2 | 3889.4 | 130.2 | 4.6 | 0.92 | 2.89 | 775.26 | |
| | MULTI-LEXSUM D→L | 4,539 | 8.8 | 79632.5 | 2930.1 | 649.6 | 23.2 | 0.89 | 3.60 | 91.05 | |

Table 5: The dataset statistics on the legal domain, including document/sentence/word counts and the source-to-target coverage/density/compression ratios.

| Domain | Dataset | Test Samples | Method | R-1 F1 ↑ | R-2 F1 ↑ | R-L F1 ↑ | BS F1 ↑ | PP ↓ | Pred Words | Pred Sents |
|--------|---------|--------------|---------|----------|----------|----------|---------|------|------------|------------|
| LEGAL | MULTI-LEXSUM D→T | 312 | BART | 25.39 | 8.18 | 20.79 | 29.50 | 20.36 | 27.6 | 1.4 |
| | | | PRIMERA | 30.83 | 12.13 | 25.03 | 34.74 | 15.45 | 30.8 | 1.7 |
| | | | MHS-SRL | 31.13 | 12.52 | 25.64 | 36.15 | 20.33 | 25.2 | 1.6 |
| | MULTI-LEXSUM D→S | 616 | BART | 43.60 | 20.96 | 30.32 | 37.20 | 7.80 | 100.7 | 3.8 |
| | | | PRIMERA | 46.88 | 23.06 | 32.50 | 41.17 | 8.77 | 102.5 | 4.0 |
| | | | MHS-SRL | 47.72 | 23.56 | 32.41 | 40.76 | 8.56 | 128.0 | 4.1 |
| | MULTI-LEXSUM D→L | 908 | BART | 48.13 | 22.84 | 28.21 | 37.14 | 12.17 | 366.2 | 12.2 |
| | | | PRIMERA | 53.52 | 26.57 | 31.21 | 41.70 | 12.89 | 479.4 | 17.4 |
| | | | MHS-SRL | 53.25 | 27.64 | 31.26 | 42.73 | 12.15 | 479.4 | 17.4 |

Table 6: Empirical results on the legal domain. For each task, we underlined the numbers if the proposed system outperforms the baselines, and used the **bold-font** to highlight the best number for each metric.

| Domain | Dataset | Samples | Train | Valid | Test |
|--------|---------|---------|-------|-------|------|
| LEGAL | MULTI-LEXSUM D→T | 1,603 | 1,130 | 161 | 312 |
| | MULTI-LEXSUM D→S | 3,138 | 2,210 | 312 | 616 |
| | MULTI-LEXSUM D→L | 4,539 | 3,177 | 454 | 908 |

Table 7: The dataset-split stats on the legal domain.

## 3.3 Evidence Extraction Vs. Summary Generation

Next, we focus on validating the extraction quality. Note that our system is able to produce a salience score for each sentence besides an abstractive summary. Hence, we conduct a qualitative evaluation by comparing the extracted evidence with the generated summary. From the test set, we randomly pick some examples for manual verification and present the results in Table 4. Note that the gold summary contains about 10 sentences on average (refer to Table 1). We then rank all the sentences based on the predicted salience scores and surface the top-10 as the evidence. To help the comparison, we also include the gold summary in Table 4. As we can see, the extracted evidence is indeed well aligned with the generated summary.

## 4 Experimental Results On The Legal Domain

In this section, we conduct experimental studies on the legal domain. The MULTI-LEXSUM dataset7 (Shen et al., 2022b) contains about 9K expert-authored summaries, distributed into three subsets (tiny, short, long), based on source documents of legal cases. Similarly, we present certain important stats in Table 5 and dataset-split stats in Table 7, and report the experimental results in Table 6. From Table 6, we can see that the PRIMERA method works much better than the BART method because the transformer architecture of PRIMERA is more effective at handling much longer sequences (4096 vs. 1024). Another benefit comes from its task-specific pretraining, in contrast to the task-agnostic pretraining employed by the BART method. The MHS-SRL system can leverage the advantages of the PRIMERA method in that both methods share a similar architecture. In addition, the dedicated salient representation learning, jointly trained with multi-task losses, brings in additional benefits. Given that our system is trained to score salience, we also quantitatively evaluated the extraction quality, which obtains an MSE score of 1.4e−4 on the MULTI-LEXSUM D→L subset and 3.0e−4 on MULTI-LEXSUM D→S. All these empirical results validate the efficacy of joint salience extraction and summary generation. Finally, we present the stats about the model parameter size and memory consumption in Table 8, which shows that our hybrid approach is very efficient and does not incur much additional computing and memory overhead.

7https://github.com/multilexsum/dataset

| Model | #Params | Est Mem Size |
|---------|-----------|----------------|
| BART | 406 M | 813 MB |
| PRIMERA | 447 M | 894 MB |
| MHS-SRL | 450 M | 901 MB |

Table 8: The stats of model parameters.
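To make the evidence-extraction step of Section 3.3 concrete, the following is a minimal sketch of how the top-ranked sentences (such as those shown with scores in parentheses in Tables 4, 9, and 10) can be selected once per-sentence salience scores are available; the function name and the toy inputs are illustrative, and the scores themselves would come from the trained summarization model rather than from this snippet.

```python
from typing import List, Tuple

def extract_evidence(sentences: List[str],
                     salience_scores: List[float],
                     top_k: int = 10) -> List[Tuple[str, float]]:
    """Rank source sentences by predicted salience and keep the top-k as evidence."""
    ranked = sorted(zip(sentences, salience_scores),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# Toy usage: the two highest-scoring sentences are surfaced as evidence.
evidence = extract_evidence(
    ["Sentence about the crash on I-78.",
     "Sentence about the couple's wedding plans.",
     "Sentence about road closures."],
    [0.1461, 0.1324, 0.1251],
    top_k=2,
)
```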
## 5 Conclusion In this paper, we proposed to simultaneously perform summary generation and salient evidence extraction in a consistent way, where extracted salient representations are directly employed to enhance abstractive summary paraphrasing. To validate the efficacy, we conducted thorough experimental studies over multiple datasets across different domains. Both quantitative and qualitative results reveal the effectiveness of this novel summarization system. ## Acknowledgements We would like to thank all reviewers for their constructive comments and suggestions, which helped us to improve the quality of this manuscript. We sincerely appreciate their time and efforts. ## References Reinald Kim Amplayo and Mirella Lapata. 2021. Informative and controllable opinion summarization. In EACL. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: the long-document transformer. *arXiv*. Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020. Few-shot learning for opinion summarization. In *EMNLP*. Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2021. Learning opinion summarizers by selecting informative reviews. In *EMNLP*. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2017. Improving multi-document summarization via text classification. In *AAAI*. Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015a. Ranking with recursive neural networks and its application to multi-document summarization. In *AAAI*. Ziqiang Cao, Furu Wei, Sujian Li, Wenjie Li, Ming Zhou, and Houfeng Wang. 2015b. Learning Summary Prior Representation for Extractive Summarization. In *ACL-IJCNLP*. Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In *SIGIR*. Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, and Lucy Lu Wang. 2021. MSˆ2: A dataset for multi-document summarization of medical studies. In *EMNLP*. Ori Ernst, Avi Caciularu, Ori Shapira, Ramakanth Pasunuru, Mohit Bansal, Jacob Goldberger, and Ido Dagan. 2022. Proposition-Level Clustering for Multi-Document Summarization. In *NAACL*. Alexander R. Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. 2019. MultiNews: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model. In ACL. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. In *NAACL*. Xiaotao Gu, Yuning Mao, Jiawei Han, Jialu Liu, You Wu, Cong Yu, Daniel Finnie, Hongkun Yu, Jiaqi Zhai, and Nicholas Zukoski. 2020. Generating Representative Headlines for News Stories. In WWW. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). *arXiv*. F. Jelinek, R. L. Mercer, L. R. Bahl, and J. K. Baker. 1977. Perplexity—a measure of the difficulty of speech recognition tasks. *The Journal of the Acoustical Society of America*, 62(S1):S63. Nayeon Lee, Yejin Bang, Tiezheng Yu, Andrea Madotto, and Pascale Fung. 2022. NeuS: Neutral multi-news summarization for mitigating framing bias. In *NAACL*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. Chin-Yew Lin. 2004. ROUGE: a package for automatic evaluation of summaries. In *Text Summarization Branches Out*. Yixin Liu and Pengfei Liu. 2021. 
SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization. In *ACL-IJCNLP*. Yao Lu, Yue Dong, and Laurent Charlin. 2020. MultiXScience: A Large-scale Dataset for Extreme Multidocument Summarization of Scientific Articles. In EMNLP. Yuning Mao, Yanru Qu, Yiqing Xie, Xiang Ren, and Jiawei Han. 2020. Multi-document summarization with maximal marginal relevance-guided reinforcement learning. In *EMNLP*. Dmytro Mishkin and Jiri Matas. 2016. All you need is a good init. In *ICLR*. Feng Nan, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Kathleen McKeown, Ramesh Nallapati, Dejiao Zhang, Zhiguo Wang, Andrew O. Arnold, and Bing Xiang. 2021a. Improving factual consistency of abstractive summarization via question answering. In *ACL-IJCNLP*. Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021b. Entity-level factual consistency of abstractive text summarization. In *EACL*. Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In *EMNLP*. Richard Yuanzhe Pang, Adam D. Lelkes, Vinh Q. Tran, and Cong Yu. 2021. AgreeSum: Agreementoriented multi-document summarization. In *Findings of the ACL-IJCNLP*. Jacob Parnell, Inigo Jauregi Unanue, and Massimo Piccardi. 2022. A multi-document coverage reward for relaxed multi-document summarization. In ACL. Ramakanth Pasunuru, Mengwen Liu, Mohit Bansal, Sujith Ravi, and Markus Dreyer. 2021. Efficiently Summarizing Text and Graph Encodings of MultiDocument Clusters. In *NAACL*. Xutan Peng, Yi Zheng, Chenghua Lin, and Advaith Siddharthan. 2021. Summarising Historical Text in Modern Languages. In *EACL*. Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, and Luo Si. 2022a. MReD: a meta-review dataset for structure-controllable text generation. In Findings of the ACL. Zejiang Shen, Kyle Lo, Lauren Yu, Nathan Dahlberg, Margo Schlanger, and Doug Downey. 2022b. MultiLexSum: real-world summaries of civil rights lawsuits at multiple granularities. In *NeurIPS*. Yun-Zhu Song, Yi-Syuan Chen, and Hong-Han Shuai. 2022. Improving Multi-Document Summarization through Referenced Flexible Extraction with CreditAwareness. In *NAACL*. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *NIPS*. Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous Graph Neural Networks for Extractive Document Summarization. In ACL. Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization. In ACL. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based Neural Multi-Document Summarization. In *CoNLL*. Wenpeng Yin and Yulong Pei. 2015. Optimizing sentence modeling and selection for document summarization. In *IJCAI*. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In *ICML*. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. BERTScore: Evaluating Text Generation with BERT. In *ICLR*. Yong Zhang, Meng Joo Er, Rui Zhao, and Mahardhika Pratama. 2017. Multiview Convolutional Neural Networks for Multidocument Extractive Summarization. *IEEE Transactions on Cybernetics*, 47(10):3230–3242. 
Guo-Bing Zhou, Jianxin Wu, Chen-Lin Zhang, and ZhiHua Zhou. 2016. Minimal gated unit for recurrent neural networks. *International Journal of Automation and Computing*, 13:226–234. ## A More Examples On The Multi-News Dataset In Table 9 and Table 10, we provide additional examples. | Gold Summary | - Madonna doesn't mind criticism of her new movie, WE—a good thing, since it bombed at the Venice Film Festival and, Reuters notes, subsequently received a one-star review in the Guardian—as long as that criticism is directed at the movie, and not at the Material Girl. "I can tell when people are reviewing my film and when they're reviewing me personally," Madonna said yesterday at the Toronto International Film Festival, where WE is screening. "So when they stick to the film, then I do care" what critics think, she explained. She also described King Edward VIII, whose romance with Wallis Simpson is depicted in the film, as "very punk rock," the Telegraph reports. That's why she put the Sex Pistols' "God Save the Queen" on the soundtrack, she explained: "I thought [the king] was quite rebellious and cutting edge in his point of view about life and about how to run the empire and using the Sex Pistols was a perfect marriage." (She also, the Telegraph adds, broke into song at one point during the Q&A session—watch at left.) Click for another story coming out of the film festival—apparently, Madge doesn´t like to be looked at by volunteers... | |--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Generated Summary | 
- Madonna's new film, W.E., about King Edward VIII and his affair with Wallis Simpson, got a standing ovation at its premiere at the Toronto International Film Festival last night, and the Queen of Pop didn't mince words when talking about how she felt about the "very punk rock" King Edward. "I thought he was quite rebellious and cutting edge in his point of view about life and about how to run the empire and using the Sex Pistols was a perfect marriage," she said, according to the Guardian. The song "God Save the Queen" from the punk band was on the soundtrack for the film, Reuters notes, and Madonna also said she could sympathize with the late King Edward because he was an outsider, an American living in London. "I empathize with Wall is. Public figures or icons are often just reduced to a sound-bite, just a handful of attributes. I think that's why I feel so strongly about Wallis," she said. The film, which debuted at the Venice Film Festival earlier this month, has received some negative reviews, but Madonna says she doesn't mind, so long as they're directed at the film and not at her, the Guardian reports. " I can tell when people are reviewing my film and when they're reviewing me personally. I welcome criticismsof my film when it's viewed as an artistic form and not when people were mentioning things about my personal life or my achievements in any other field because they're irrelevant to the film. So when they stick to the movie then I do care, I pay attention to it." | | Extracted Evidence | "I welcome criticisms of my film when it's viewed as an artistic form and not when people are mentioning things about my personal life or my achievements in any other field because they're irrelevant to the film. (0.3320) Madonna said: "I thought he (King Edward VIII) was quite rebellious and cutting edge in his point of view about life and about how to run the empire and using the Sex Pistols was a perfect marriage." (0.2817) She had a wonderful time at the festival and was especially delighted that she got to spend so much time with her fans in front of the theatre which is a famous tradition at the Festival ... We are still trying to figure out who and why anyone would ask the volunteers to turn away from Madonna. (0.1724) "I can tell when people are reviewing my film and when they´re reviewing me personally," Madonna said when asked whether she cared about what critics thought. (0.1604) In Venice, where the film premiere earlier this month, the Queen of Pop said there were "elements of myself" in the film, and said she could sympathise with Wallis as an outsider, an American living in London. (0.1387) "I can tell when people are reviewing my film and when they're reviewing me personally," she said. (0.1290) Madonna described King Edward VIII as "very punk rock," at the North American opening of her film W.E about his infamous romance with American divorcee Wallis Simpson - and his subsequent abdication. (0.1199) Speaking at the Toronto Film Festival, the singer said she wanted reviewers to focus on her work as a director - not on her personal life. (0.1144) The singer-turned-film-director was explaining how an anarchist anthem by punk band The Sex Pistols ended up on the soundtrack for her second directorial work, starring British actors James D'Arcy and Andrea Riseborough. (0.1059) TORONTO (Reuters) - Madonna on Monday said she said she doesn't mind any criticism of her filmmaking abilities, so long as it is directed at her movie and not at herself. 
(0.1018) Table 9: Examples on the MULTI-NEWS dataset (2). | | Gold Summary | - A British man who hacked his way into Pentagon computers looking for information on UFOs has lost his final appeal against extradition. Gary McKinnon, who suffers from Asperger's syndrome, is now expected to be sent to the US for trial within weeks, the Independent reports. The 43-year-old Londoner could face up to 60 years in jail if found guilty of what US authorities call the biggest military hack in history. Opposition lawmakers and McKinnon's supporters slammed the decision to reject the appeal, citing medical reports that said deportation was likely to make him suicidal. "To force a peaceful, vulnerable, misguided UFO fanatic like Gary thousands of miles away from his much-needed support network is barbaric," his mother told the Telegraph. "This is a cruel and miserable decision. | |--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Generated Summary | - Gary McKinnon, the UK hacker accused of hacking into NASA and the military to find evidence of UFOs, is now "at an even higher risk of self- harm and suicide" and will kill himself in order to be extradited to the US, according to a new medical report. McKinnon has Asperger's syndrome and has a " fixed-psychological conviction he will kill myself in preference to being extradited," the report says, adding that suicide is now an "almost certain inevitability." 
The British government "is terrified of speaking up to America, and now they are allowing vulnerable people to be pursued for non-violent crime when they should be going after terrorists," McKinnon's mother told the Telegraph. "This is a cruel and miserable decision. to force a peaceful, vulnerable, misguided UFO fanatic like Gary thousands of miles away from his much-needed support network is barbaric. If the severity of Gary's medical condition isn't sufficient to prevent his extradition, I can't imagine what is." McKinnon could face up to 60 years in prison in the US." | | Extracted Evidence | She added: "This government is terrified of speaking up to America, and now they are allowing vulnerable people to be pursued for non-violent crime when they should be going after terrorists. (0.2463) If the severity of Gary's medical condition isn't sufficient to prevent his extradition, I can't imagine what is." (0.1838) The medical report said Mr McKinnon, who is known to suffer Asperger's syndrome, a form of autism, also has a "fixed-psychological conviction he will kill himself in preference to being extradited". (0.1696) Mrs Sharp said: "To force a peaceful, vulnerable, misguided UFO fanatic like Gary thousands of miles away from his much-needed support network is barbaric. (0.1637) But following the failure of Mr McKinnon's bid to have his case heard at the Supreme Court, he concluded that he was now "at an even higher risk of self harm and suicide", something he said was a "real probability". (0.1603) Extracts of Prof Turk's report, disclosed in the Daily Mail, show that he recorded in September that Mr McKinnon, 43, suffered from a "'very serious Major Depressive Disorder aggravated and complicated by anxiety and panic attacks with multiple psychosomatic symptoms on a background of his having Asperger's syndrome". (0.1517) The family said last night that Mr McKinnon, who could be sentenced to up to 60 years in prison in the US, was "at risk of suicide" after being told there will be no 11th hour reprieve. (0.1470) To force a peaceful, vulnerable, misguided UFO fanatic like Gary thousands of miles away from his much-needed support network is barbaric. (0.1392) In July Mr McKinnon went to the High Court in an attempt to get the extradition order overturned but was told being sent for trial in the US was "a lawful and proportionate response" to his actions. (0.1350) His mother, Janis Sharp, was "extremely worried" about her son's mental state and said the Government and Mr Johnson should "hang their heads in shame" for caving in to American pressure. (0.1307) Table 10: Examples on the MULTI-NEWS dataset (3). |
qi-etal-2023-safer
{S}a{FER}: A Robust and Efficient Framework for Fine-tuning {BERT}-based Classifier with Noisy Labels
https://aclanthology.org/2023.acl-industry.38
Learning on noisy datasets is a challenging problem when pre-trained language models are applied to real-world text classification tasks. In numerous industrial applications, acquiring task-specific datasets with 100{\%} accurate labels is difficult, thus many datasets are accompanied by label noise at different levels. Previous work has shown that existing noise-handling methods could not improve the peak performance of BERT on noisy datasets, and might even deteriorate it. In this paper, we propose SaFER, a robust and efficient fine-tuning framework for BERT-based text classifiers, combating label noises without access to any clean data for training or validation. Utilizing a label-agnostic early-stopping strategy and self-supervised learning, our proposed framework achieves superior performance in terms of both accuracy and speed on multiple text classification benchmarks. The trained model is finally fully deployed in several industrial biomedical literature mining tasks and demonstrates high effectiveness and efficiency.
# Safer: A Robust And Efficient Framework For Fine-Tuning Bert-Based Classifier With Noisy Labels Zhenting Qi∗ 1 Xiaoyu Tan∗ 2† Chao Qu2 Yinghui Xu3 **Yuan Qi**3 1Zhejiang University 2INF Technology (Shanghai) Co., Ltd. 3 Fudan University Zhenting.19@intl.zju.edu.cn, yulin.txy@inftech.ai ## Abstract Learning on noisy datasets is a challenging problem when pre-trained language models are applied to real-world text classification tasks. In numerous industrial applications, acquiring task-specific datasets with 100% accurate labels is difficult, thus many datasets are accompanied by label noise at different levels. Previous work has shown that existing noisehandling methods could not improve the peak performance of BERT on noisy datasets, and might even deteriorate it. In this paper, we propose SaFER1, a robust and efficient finetuning framework for BERT-based text classifiers, combating label noises without access to any clean data for training or validation. Utilizing a label-agnostic early-stopping strategy and self-supervised learning, our proposed framework achieves superior performance in terms of both accuracy and speed on multiple text classification benchmarks. The trained model is finally fully deployed in several industrial biomedical literature mining tasks and demonstrates high effectiveness and efficiency. ## 1 Introduction Large Language Models (LLMs) have dominated Natural Language Processing (NLP) in recent years and achieved state-of-the-art performance in a variety of industrial applications. Among them, the most widely-adapted LLMs are transformer-based models, including BERT, T5, GPT, etc (Devlin et al., 2018; Raffel et al., 2020; Brown et al., 2020). LLMs learn general natural language knowledge from large corpora and the representations for text data can be utilized in various downstream NLP tasks. Such a paradigm is also extensively implemented in both industrial and research domains and achieves considerable performance improvement compared with traditional statistical approaches. ∗Equal Contributions. †Corresponding author. 1Code will be released at GitHub. Text classification is one of the most important tasks in the industrial domain (Sanchez-Pi et al., 2014; Han and Akbari, 2018; Wei et al., 2018; Arslan et al., 2021; Chen et al., 2018; Cheng et al., 2021), including sentence classification, named entity recognition, etc. Typically, these tasks can be accomplished by leveraging embeddings generated by the encoder architecture of LLMs. Unfortunately, the performance is always limited by the data quality in either pre-training or fine-tuning stage. Real-world datasets, especially those collected for industrial applications, usually contain a substantial proportion of mislabeled data (Song et al., 2022). Such label noise can be induced by crowd-sourcing, human mistakes, system errors, or the uncertainty itself in the weakly-supervised labeling methodology. The corrupted labels can dramatically influence the model performance and robustness, which has been validated theoretically and experimentally (Song et al., 2022; Zhu et al., 2022b). Worse still, re-labeling procedures can be cost-intensive and time-consuming due to lack of domain experts. That means in most cases we can only access the noisy validation set and lose the validation with ground truth. 
Previous work addressed the label noise issue by proposing robust loss functions, recovering the noise transition matrix, and incorporating unsupervised learning strategies (Jindal et al., 2019; Yao et al., 2020; Lukasik et al., 2020; Jenni and Favaro, 2018; Wei et al., 2020; Tan et al., 2021). However, the label noise issue of using LLMs in NLP tasks is still an open problem and remains unsolved, especially in text classification tasks. Zhu et al. (2022b) demonstrate that directly incorporating existing noise-handling methods cannot consistently improve, and may even deteriorate, the performance of BERT under noisy labels in text classification. This conclusion is also supported by the results of our industrial implementation and ablation study (Appendix C). By investigating the process of fine-tuning LLMs on noisy sets, we observe that the model experiences a "convex" learning curve under label noise (Appendix C Figure 3): the model gradually increases its accuracy by learning easy samples in the earlier stage, but then continuously loses performance by over-fitting the noisy labels. Hence, can we preserve the knowledge of the first stage and mitigate the over-fitting in the second stage without using any clean data for validation?

Following this intuition, we propose SaFER: a noiSe-resistant Framework for Efficient and Robust BERT fine-tuning, which performs robust LLM fine-tuning on noisy classification NLP tasks without using any clean labels. Generally, this framework is compatible with any LLM that contains an encoder architecture to encode sequences into latent representations. We first fine-tune the model in a typical manner but perform early stopping with a label-agnostic strategy. Then, we leverage contrastive learning with an NLP-specific augmentation strategy and implement structural learning to further combat noisy labels. To evaluate our proposed framework, we perform experiments on text classification tasks using pre-trained BERT models (Devlin et al., 2018). Here, we select the BERT family to represent LLMs due to their widespread recognition in both the industry and research domains. We implement several state-of-the-art robust learning methods against label noise as our baseline methods (Appendix B.1). These methods are designed to mitigate the general classification label noise issue without explicitly considering the usage of LLMs. The experiment results show the consistent and superior performance of our proposed learning framework. Finally, we implement SaFER in two industrial biomedical literature mining tasks under unavoidable human labeling noise and achieve robust practical performance compared with baselines. The main contributions are as follows:

- We propose an efficient and robust learning framework, SaFER, for BERT fine-tuning on datasets with noisy labels without accessing any clean data.
- We empirically show that SaFER achieves superior performance on text classification tasks using BERT.
- We demonstrate the practical feasibility of SaFER on two industrial biomedical literature mining tasks.

## 2 Methods

2.1 Problem Settings

We focus on the classification task on text data. Suppose X ⊂ R^d is the d-dimensional input space and Y = {0, 1}^k is the label space in a one-hot manner. In a typical text classification task, a clean training corpus C = {(x_i, y_i)}_{i=1}^n drawn from the joint distribution X × Y is provided, where x_i denotes a data sample, y_i is the corresponding ground-truth label of x_i, and n is the size of the corpus.
However, in the noisy label setting, a certain proportion of the training data are not correctly labeled. Given a noisy training corpus Ĉ = {(x_i, ŷ_i)}_{i=1}^n drawn from a noisy joint distribution X × Ŷ, with the noise level being ρ = |{(x_i, ŷ_i) | ŷ_i ≠ y_i}|/n, we hope to train a classifier f(·; θ) that gives correct predictions on unseen data. Some noise-handling methods assume that a small clean set is available (Tänzer et al., 2021; Shu et al., 2019). However, such clean data is often not easy to obtain in real-world industrial settings. In our problem, we assume that there is no fully clean data available. In the subsequent sections, by saying "noisy" we mean there is a non-zero probability that such a data item is wrongly labeled.

## 2.2 SaFER

2.2.1 Motivation

We identify two problems caused by noisy labels: 1) early stopping at the wrong training step on noisy validation sets, which makes one miss the best model parameters, and 2) noisy supervision from incorrect labels, which prevents the model from improving its performance or even deteriorates it.

Improper Early Stopping The traditional strategy (Tänzer et al., 2021) relies on a clean validation set to find the point where the model reaches its highest generalization capability. If a model reaches a high performance on the validation set at some training step, and such performance is not exceeded within a certain number of further steps, then the model is early stopped. In our setting, nevertheless, such a validation set is not clean, and the performance on such a set may not be a reliable indicator for early stopping, as shown in Appendix C Figure 4. Moreover, evaluating the model every few steps is very time-consuming, especially when it comes to doing inference with large language models with a huge number of parameters.

![2_image_0.png](2_image_0.png)

Noisy Supervision Under label noise, the optimization with loss function L on a noisy batch Ĉ_B sampled from the noisy dataset Ĉ at time step t is formulated as:

$$\theta_{t+1}=\theta_{t}-\eta\nabla\left(\frac{1}{|\widehat C_{B}|}\sum_{(x,\hat{y})\in\widehat C_{B}}\mathbf{L}(f(x;\theta_{t}),\hat{y})\right),\tag{1}$$

where the noisy label ŷ participates in the loss calculation and can corrupt the model parameters through backward propagation, thus misleading the optimization progress.

## 2.2.2 Label-Independent Early Stopping

To find the point where the generalization reaches its best, we need a reliable signal that indicates when the classifier starts to overfit the noisy labels. Intrinsic Dimensionality (ID) is a measure of the number of variables needed to minimally represent a set of data, and it has been proven to be a good indicator of the generalization ability of DNNs (Amsaleg et al., 2017; Ma et al., 2018b,a; Ansuini et al., 2019; Nakada and Imaizumi, 2020; Birdal et al., 2021). When considering each sample in the dataset, we have the Local Intrinsic Dimensionality (LID), which measures the dimensional complexity of the local subspace in its vicinity. Previous studies on DNN learning dynamics empirically show that the ID curve behaves like a concave shape at the beginning of training, which applies to several types of DNNs. When LID reaches a low point, the DNN can reach high generalization ability before starting to memorize label noise. We show that the BERT encoder also follows such a pattern (Appendix C Figure 2) and propose to utilize this characteristic to find the proper stopping point. Following Ma et al.
(2018b), we estimate the LID score within batches. Consider a BERT-based classifier h : P → R^k, where k is the number of classes. Given a transformed text batch X_B ⊂ X sampled from the training corpus, and a reference point x ∼ P, the estimated LID score of x can be calculated as:

$$\widehat{\mathrm{LID}}(x,X_{B})=-\left(\frac{1}{k}\sum_{i=1}^{k}\log\frac{r_{i}(g(x),g(X_{B}))}{r_{\max}(g(x),g(X_{B}))}\right)^{-1},\tag{2}$$

where g is the output from the second-to-last layer of BERT, r_i(g(x), g(X_B)) is the distance of g(x) to its i-th nearest neighbor in the batch, and r_max(g(x), g(X_B)) is the maximum distance of g(x) to its neighbors in the batch (or the radius of the batch, centered on x). Using the estimated LID score as a stopping indicator, we can quickly warm up the classifier without any noise-handling modules, as shown in stage 1 of Algorithm 1 in Appendix D. As for why we do not apply any noise handling in the first stage, Zhu et al. (2022b) have shown in Figure 1 of their paper that the pure BERT method is the fastest to reach its peak performance compared with other methods that add extra modules to BERT, and such peak performance is already very high, thus providing a good starting point.

## 2.2.3 Noise-Tolerant Supervisor

To further improve the model's performance after warming up, we introduce unsupervised learning to apply label-independent supervision, thus preventing label noise from misleading the optimization. The second stage incorporates a simple multi-layer perceptron (MLP) as a projection head, which is trained along with the classifier and simultaneously imposes constraints on the representation space of the classifier. Given a noisy batch Ĉ_B = {(x_i, ŷ_i)}_{i=1}^b drawn from Ĉ, the BERT encoder f with parameters θ_1 takes the tokenized sentence x_i as input and produces the sentence embedding z_i = f(x_i; θ_1). The classification head g with parameters θ_2 transforms z_i into a k-dimensional vector u_i = g(z_i; θ_2) (k is the number of classes), which is supervised by noisy labels with the cross-entropy loss:

$$\begin{split}\mathbf{L}_{CE}(u_{i},\hat{y_{i}})&=-\sum_{j=1}^{k}\hat{y_{i}}^{(j)}\log(u_{i}^{(j)}),\\ \mathcal{L}_{CE}(\{u_{i}\}_{i=1}^{b},\{\hat{y_{i}}\}_{i=1}^{b})&=\sum_{i=1}^{b}\mathbf{L}_{CE}(u_{i},\hat{y_{i}}).\end{split}\tag{3}$$

On the other hand, we train a projection head h with parameters θ_3 using contrastive learning. Borrowing experience from the unsupervised SimCSE framework (Gao et al., 2021), we forward pass x_i through f and h twice and obtain two representations v_i = h(f(x_i, m; θ_1); θ_3) and v′_i = h(f(x_i, m′; θ_1); θ_3) as a positive pair, where m and m′ are two random dropout masks of BERT, while representations of other sentences are treated as negative samples. The reason why we adopted SimCSE is that it is an extremely simple but efficient way to build positive pairs for sentences, and it has been shown to perform much better for sentence representation than other traditional NLP data augmentations such as crop and word deletion. Using the output representations, we calculate the standard InfoNCE loss (Oord et al., 2018):

$$\begin{split}\mathbf{L}_{con}(v_{i})&=-\log\frac{\exp(S(v_{i},v_{i}^{\prime})/\tau)}{\sum_{j=1}^{b}\exp(S(v_{i},v_{j}^{\prime})/\tau)},\\ \mathcal{L}_{con}(\{v_{i}\}_{i=1}^{b})&=\sum_{i=1}^{b}\mathbf{L}_{con}(v_{i}),\end{split}\tag{4}$$

where S is a measurement of similarity between representation vectors and τ is a temperature hyperparameter.
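To make the two label-agnostic ingredients above more concrete, the following is a minimal PyTorch sketch of the batch-level LID estimate in Eq. (2) and the dropout-based InfoNCE objective in Eq. (4). It assumes Euclidean distances for the nearest-neighbor computation and cosine similarity for S, and the function names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def estimate_lid(features: torch.Tensor, k: int = 20) -> torch.Tensor:
    """Batch-level LID estimate (Eq. 2). `features` are, e.g., second-to-last-layer
    BERT outputs of shape (batch, dim); returns one LID score per reference point."""
    dists = torch.cdist(features, features)            # pairwise Euclidean distances
    knn, _ = dists.topk(k + 1, dim=1, largest=False)   # k nearest neighbors plus self
    knn = knn[:, 1:].clamp_min(1e-12)                  # drop the zero self-distance
    r_max = knn[:, -1:]                                # batch radius: k-th neighbor distance
    return -1.0 / torch.log(knn / r_max).mean(dim=1)   # maximum-likelihood estimator

def info_nce(v: torch.Tensor, v_prime: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """InfoNCE loss (Eq. 4) over dropout-based positive pairs: v and v_prime are two
    projections of the same sentences obtained with different dropout masks."""
    sim = F.cosine_similarity(v.unsqueeze(1), v_prime.unsqueeze(0), dim=-1) / tau
    targets = torch.arange(v.size(0), device=v.device)  # positives lie on the diagonal
    return F.cross_entropy(sim, targets)
```

One possible use, in line with the stopping indicator described above, is to track the batch-mean LID during stage 1 and trigger the switch to stage 2 once the estimate reaches its low point.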
As for why we need such a projection head h, the reason is that it prevents noisy labels from corrupting the classifier's representation space by forcing the classification head g to "agree" with its output. This is realized by applying a structural loss (Tan et al., 2021) to h and g. Minimizing the KL-divergence between the similarities of classifier outputs and those of projector outputs, structural loss applies a structure-preserving constraint on output features of g, keeping its representation space structure similar to that of h: $$\mathcal{L}_{str}(\{u_{i}\}_{i=1}^{b},\{v_{i}\}_{i=1}^{b})=$$ $$\sum_{p\neq q}R(v_{p},v_{q})\log\frac{R(v_{p},v_{q})}{R(u_{p},u_{q})},\tag{5}$$ where R is a similarity metric. Notice that only the classification head is trained with structural loss, while BERT and the projection head are frozen. ## 2.2.4 Two-Stage Fine-Tuning Framework We design a two-stage framework for fine-tuning models. For the first stage, a classifier with pretrained BERT as the backbone is fine-tuned on the noisy set without any noise-handling methods, but monitored by a reliable early stopper. After the early stopping is triggered, we enter the second stage where a projection head is trained along with the classifier using an unsupervised learning method, and it simultaneously applies regularization to the representation space of the classifier. With a strong knowledge base built at the first stage and further boosting at the second stage, the classifier can reach a high generalization capability quickly. The entire SaFER framework is illustrated in Figure 1 and Algorithm 1 in Appendix D. ## 3 Experiment 3.1 Implementation Injected Label Noise Following previous work on modeling noisy datasets (Reed et al., 2014; Van Rooyen et al., 2015), we define two types of synthetic label noise: the single-flip noise $$\mathcal{P}({\hat{y}}=j|y=i)={\begin{cases}1-\rho,\;{\mathrm{for}}\;i=j\\ \rho,\qquad{\mathrm{for~one}}\;i\neq j\\ 0,\qquad{\mathrm{else}}\end{cases}}$$ and the uniform-flip noise $${\mathcal{P}}({\hat{y}}=j|y=i)={\begin{cases}1-\rho,\ {\mathrm{for}}\ i=j\\ {\frac{\rho}{k-1}},\ {\mathrm{else}}\end{cases}}$$ According to statistics shown in the survey done by Song et al. (2022), we define four levels of injected noises: low (ρ = 0.2), medium (ρ = 0.3), high (ρ = 0.4), and extreme (ρ = 0.45). Models We use the pre-trained BERT-base model from Huggingface as the pre-trained BERT backbone, and the BERT fine-tuning strategy follows Devlin et al. (2018). The classifier head is a linear layer with input size being the hidden size of the BERT backbone and output size being the number of classes. The projection head is a two-layer perceptron in which the input size is the hidden size of the BERT backbone, the intermediate size is 512, and the output size is 128 (i.e., the feature dimension of the projection). Baselines We compare SaFER with the following methods (implementations were adapted from code provided by Zhu et al. (2022b)): Without Noise-Handling, Noise Matrix, Noise Matrix with Regularization, Label Smoothing, Robust Loss, CoTeaching, and Co-Learning. Note that all of them use a noisy validation set for early stopping. We refer readers to Appendix B.1 for more details. Environment The model is fine-tuned by single NVIDIA Tesla V100-32G GPU under PyTorch (v1.12.1) framework. We refer the readers to Appendix B.2 for more implementation details. 
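As a concrete illustration of the injected label noise defined above, the sketch below corrupts a label array with either uniform-flip or single-flip noise at level ρ; for example, with k = 4 classes (AG-News) and ρ = 0.3, a clean label keeps its class with probability 0.7 and moves to each other class with probability 0.1 under uniform flipping. The helper names are illustrative, and the single-flip target class (here i+1 mod k) is one arbitrary choice consistent with the definition.

```python
import numpy as np

def inject_uniform_flip(labels: np.ndarray, rho: float, num_classes: int,
                        seed: int = 0) -> np.ndarray:
    """With probability rho, replace label i by one of the other k-1 classes,
    each chosen with probability rho / (k - 1)."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(labels.shape[0]) < rho
    offsets = rng.integers(1, num_classes, size=labels.shape[0])  # values in 1 .. k-1
    noisy[flip] = (labels[flip] + offsets[flip]) % num_classes
    return noisy

def inject_single_flip(labels: np.ndarray, rho: float, num_classes: int,
                       seed: int = 0) -> np.ndarray:
    """With probability rho, replace label i by one fixed other class (i+1 mod k)."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(labels.shape[0]) < rho
    noisy[flip] = (labels[flip] + 1) % num_classes
    return noisy
```

For the binary IMDB labels (k = 2), single flipping simply swaps the two classes for, on average, a ρ fraction of the training and validation examples, matching the setup used in the experiments below.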
## 3.2 Text Classification With Bert IMDB (Maas et al., 2011) is a dataset for binary sentiment classification containing around 50K movie reviews, most commonly used for sentiment analysis, i.e. models should predict "positive" or "negative" for the reviews. We use a set of 25K reviews for training/validation, and 25K for testing. Following Zhu et al. (2022b), we inject single-flip noise into the IMDB dataset. AG-News (Zhang et al., 2015) is a sub-dataset of AG's corpus of more than 1 million news articles gathered from more than 2K news sources, having the 4 largest classes ("World", "Sports", "Business", "Sci/Tech") of AG's Corpus. The AGNews contains 30K training samples and 1,900 test samples for each class. Following Zhu et al. (2022b), we inject uniform-flip noise into the AGNews dataset. Before injecting label noises, we assume that IMDB and AG-News themselves are 100% clean. Their test splits remain the same, while the training/validation splits are modified by the aforementioned injected noises. Table 1 and 2 show the experiment results: SaFER performs the best across all noise levels on AG-News, and also achieves state-of-the-art performance on IMDB, except that at medium noise level, it reaches comparable results with Co-Learning. When the noise level is low or medium, all methods maintain good performance. But when it comes to higher noise, all the baselines showed varying degrees of decline in accuracy, while SaFER still maintains high performance thanks to the feature-dependent information gained from unsupervised learning. Especially, we notice that on extremely noisy IMDB, Co-Teaching stops at the wrong training step where the accuracy is just 52.53% because the validation set is so noisy that the highest validation accuracy does not match the highest test accuracy (above 70%). But SaFER uses label-independent method for early stopping, thus avoiding such a problem. Notably, SaFER is the only method that maintains accuracy above 90% on AG-News across all five levels of noise. Regarding efficiency, SaFER has also shown superior results compared with other baseline methods. Noise Matrix, Label Smoothing, and Robust Loss do not differ much from pure BERT because they introduce only limited extra computations. But Co-Teaching needs to maintain two neural networks, thus being very slow during backward propagation, and Co-Learning trains the BERT backbone twice: once with the classifier and once with the projector, therefore it is also very inefficient. However, SaFER uses pure BERT for the first stage, which largely cut down the training time. Most importantly, the LID-based early stopping strategy does not require inference on a validation set, thus saving much time at each evaluation step. As shown in the table, SaFER only takes around half of the time per training step that is required by Co-Teaching and Co-Learning. ## 3.3 Ablation Study To verify the effectiveness of using two-stage, we compare SaFER with one-stage pure BERT and one-stage BERT with unsupervised learning. The test accuracy v.s. training step on IMDB is shown in Appendix C Figure 3. Note that the two-stage scheme actually uses pure BERT for stage one and next uses BERT with unsupervised learning for stage two. As we can see, pure BERT climbs up very fast at the initial 500 steps, while BERT with unsupervised learning cannot reach the same accuracy until its 1500th step. However, pure BERT's accuracy starts to drop after it reaches maximum performance, while BERT with unsupervised learning continues going up. 
SaFER combines their advantages: the accuracy quickly reaches a high point and keeps climbing at a stable pace, so its curve stays the highest. Early stopping for stage one is used to find the transition point from pure BERT to BERT with unsupervised learning. We also studied whether to apply early stopping to stage two to find the convergence point of the classifier, as shown in Appendix C Table 4. It can be seen that with early stopping at stage 2, SaFER reaches an accuracy of 89.06%, which is much higher than pure BERT's 84.49% and BERT with unsupervised learning's 77.33%, and even better than fine-tuning BERT with unsupervised learning until the end (88.59%).

| Methods | ρ = 0.0 | ρ = 0.2 | ρ = 0.3 | ρ = 0.4 | ρ = 0.45 | time/step |
|---------|---------|---------|---------|---------|----------|-----------|
| Without Noise-Handling | 93.36 | 91.08 | 90.96 | 86.72 | 77.76 | 5.43s |
| Noise Matrix | 93.18 | 91.19 | 90.30 | 87.37 | 81.45 | 5.49s |
| Noise Matrix with Regularization | 93.29 | 91.40 | 90.71 | 88.16 | 79.26 | 5.42s |
| Label Smoothing | 93.34 | 91.49 | 90.10 | 86.42 | 73.72 | 5.48s |
| Robust Loss: MAE | 91.98 | 88.74 | 85.98 | 78.66 | 73.93 | 5.46s |
| Robust Loss: SCE | 93.26 | 88.75 | 85.74 | 83.92 | 76.21 | 5.89s |
| Co-Teaching | 93.42 | 90.98 | 90.96 | 84.84 | 52.53 | 5.94s |
| Co-Learning | 93.56 | 91.83 | 91.46 | 86.76 | 78.37 | 6.15s |
| SaFER | 93.73 | 92.64 | 91.27 | 89.06 | 82.48 | 3.31s |

Table 1: Comparing accuracy (%) with SOTA methods on IMDB. ρ stands for noise level, and time/step is the average time needed for each training step (including the necessary time for validation or LID score calculation).

| Methods | ρ = 0.0 | ρ = 0.2 | ρ = 0.3 | ρ = 0.4 | ρ = 0.45 | time/step |
|---------|---------|---------|---------|---------|----------|-----------|
| Without Noise-Handling | 91.35 | 90.76 | 88.63 | 85.32 | 87.34 | 3.82s |
| Noise Matrix | 92.93 | 89.00 | 88.67 | 85.34 | 83.27 | 3.97s |
| Noise Matrix with Regularization | 90.72 | 89.23 | 88.52 | 85.73 | 84.60 | 3.84s |
| Label Smoothing | 91.35 | 89.97 | 89.72 | 90.12 | 88.53 | 3.90s |
| Robust Loss: MAE | 90.18 | 90.01 | 90.12 | 89.03 | 87.89 | 3.92s |
| Robust Loss: SCE | 92.98 | 90.13 | 88.71 | 89.38 | 88.80 | 4.03s |
| Co-Teaching | 91.25 | 89.89 | 87.84 | 87.02 | 86.37 | 4.40s |
| Co-Learning | 91.82 | 90.78 | 89.82 | 89.86 | 88.30 | 4.52s |
| SaFER | 93.07 | 92.22 | 91.66 | 91.13 | 90.92 | 2.16s |

Table 2: Comparing accuracy (%) with SOTA methods on AG-News.

## 3.4 Biomedical Literature Mining

We further deploy our framework on two industrial biomedical literature mining tasks. These tasks are binary classification tasks used to recognize special biomedical phrases in the literature to assist our medication and biomedical experts in reading patents and literature. The data is acquired from several experts in daily work who have different technology stacks. The data is labeled by the experts themselves or organized from web resources in daily work.
Hence, the data itself is highly corrupted by label noise due to crowd-sourcing and labeling preference. Unfortunately, unifying the labeling standard and relabeling all data is impossible due to the high workload of our experts and the large quantity of data: both tasks share the same data space, which has around 40K samples with an average text length of 60. To evaluate our proposed method, we invite one human expert to examine and relabel part of the dataset, which contains 2K samples, and suppose that this part of the data is clean. We use this part of the data as a test set for model evaluation and find that the label noise level for both tasks is around ρ = 0.3. We shuffle and split the remaining noisy data by 20% and 80% for validation and training, and fine-tune the original BERT model in a typical training manner and with our proposed SaFER framework, separately. Experiment results are listed in Table 3, showing the practical effectiveness of SaFER in industrial settings. We deploy our trained model as a new online service in our company to assist biomedical researchers in literature reading.

| Methods | Task 1 | Task 2 |
|---------|--------|--------|
| BERT w/o noise-handling | 75.24 | 91.02 |
| SaFER | 80.03 | 94.75 |

Table 3: Accuracy (%) for two industrial biomedical literature mining tasks.

## 4 Conclusion

We propose a novel framework, SaFER, to perform robust and efficient BERT fine-tuning in text classification tasks under label noise. This framework is evaluated on both open-source datasets with synthetic label noise and industrial tasks with human label noise, compared with several state-of-the-art noise-handling methods. Experiments show that SaFER not only achieves superior results but also demonstrates significant improvement in efficiency.

## Limitations

The SaFER framework is designed for handling BERT classification label noise without using any clean data. Despite the fact that BERT is one of the most extensively used models in the industrial domain, the influence of label noise on GPT models and prompting should be further studied in light of the recent rapid progress. We believe that our framework is compatible with these models; however, further evaluation is required. Another limitation is the types of label noise. We analyze SaFER using synthetic datasets with uniform-flip and single-flip label noise, which are typical class-level noise in practice. However, in industrial applications, the model may experience instance-level label noise, which is beyond the scope of our investigation. Although SaFER achieves robust results in our biomedical literature mining task under human label noise, we encourage users to examine the label noise type first in their own application.

## Ethics Statement

All experiments can be conducted on a single NVIDIA Tesla V100-32G GPU. The datasets (Maas et al., 2011; Zhang et al., 2015) used to compare SaFER with previous methods are publicly available, and we did not modify any data or labels in these datasets. The dataset used for the industrial biomedical literature mining tasks is protected, and we do not plan to make it public in this work. But the source code and instructions for using our framework will be released along with the paper.

## References

Laurent Amsaleg, James Bailey, Dominique Barbe, Sarah Erfani, Michael E Houle, Vinh Nguyen, and Miloš Radovanović. 2017. The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality.
In *2017 ieee workshop on information forensics and security (wifs)*, pages 1–6. IEEE. Alessio Ansuini, Alessandro Laio, Jakob H Macke, and Davide Zoccolan. 2019. Intrinsic dimension of data representations in deep neural networks. Advances in Neural Information Processing Systems, 32. Yusuf Arslan, Kevin Allix, Lisa Veiber, Cedric Lothritz, Tegawendé F Bissyandé, Jacques Klein, and Anne Goujon. 2021. A comparison of pre-trained language models for multi-class text classification in the financial domain. In Companion Proceedings of the Web Conference 2021, pages 260–268. Alan Joseph Bekker and Jacob Goldberger. 2016. Training deep neural-networks based on unreliable labels. In *2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pages 2682–2686. IEEE. Tolga Birdal, Aaron Lou, Leonidas J Guibas, and Umut Simsekli. 2021. Intrinsic dimension, persistent homology and generalization in neural networks. *Advances in Neural Information Processing Systems*, 34:6776–6789. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Lingzhen Chen, Alessandro Moschitti, Giuseppe Castellucci, Andrea Favalli, and Raniero Romagnoli. 2018. Transfer learning for industrial applications of named entity recognition. In *NL4AI@ AI* IA*, pages 129– 140. Xiang Cheng, Mitchell Bowden, Bhushan Ramesh Bhange, Priyanka Goyal, Thomas Packer, and Faizan Javed. 2021. An end-to-end solution for named entity recognition in ecommerce search. In *Proceedings of* the AAAI Conference on Artificial Intelligence, volume 35, pages 15098–15106. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. *arXiv preprint arXiv:2104.08821*. Aritra Ghosh, Himanshu Kumar, and P Shanti Sastry. 2017. Robust loss functions under label noise for deep neural networks. In *Proceedings of the AAAI* conference on artificial intelligence, volume 31. Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. 2018. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31. Jianglei Han and Mohammad Akbari. 2018. Vertical domain text classification: towards understanding it tickets using deep neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. 2018. Using trusted data to train deep networks on labels corrupted by severe noise. Advances in neural information processing systems, 31. Simon Jenni and Paolo Favaro. 2018. Deep bilevel learning. In Proceedings of the European conference on computer vision (ECCV), pages 618–633. Ishan Jindal, Daniel Pressel, Brian Lester, and Matthew Nokleby. 2019. An effective label noise model for dnn text classification. *arXiv preprint* arXiv:1903.07507. Himanshu Kumar, Naresh Manwani, and PS Sastry. 2020. Robust learning of multi-label classifiers under label noise. In *Proceedings of the 7th ACM IKDD* CoDS and 25th COMAD, pages 90–97. Michal Lukasik, Srinadh Bhojanapalli, Aditya Menon, and Sanjiv Kumar. 2020. 
Does label smoothing mitigate label noise? In *International Conference on* Machine Learning, pages 6448–6458. PMLR. Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E Houle, and James Bailey. 2018a. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613. Xingjun Ma, Yisen Wang, Michael E Houle, Shuo Zhou, Sarah Erfani, Shutao Xia, Sudanthi Wijewickrema, and James Bailey. 2018b. Dimensionalitydriven learning with noisy labels. In International Conference on Machine Learning, pages 3355–3364. PMLR. Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies, pages 142–150. Eran Malach and Shai Shalev-Shwartz. 2017. Decoupling" when to update" from" how to update". *Advances in neural information processing systems*, 30. Milad Moradi and Matthias Samwald. 2021. Evaluating the robustness of neural language models to input perturbations. *arXiv preprint arXiv:2108.12237*. Ryumei Nakada and Masaaki Imaizumi. 2020. Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. *The Journal* of Machine Learning Research, 21(1):7018–7055. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*. Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. 2017. Making deep neural networks robust to label noise: A loss correction approach. In *Proceedings of the IEEE* conference on computer vision and pattern recognition, pages 1944–1952. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *The Journal of Machine Learning Research*, 21(1):5485–5551. Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. 2014. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596. Nayat Sanchez-Pi, Luis Martí, and Ana Cristina Bicharra Garcia. 2014. Text classification techniques in oil industry applications. In International Joint Conference SOCO'13-CISIS'13- ICEUTE'13: Salamanca, Spain, September 11th-13th, 2013 Proceedings, pages 211–220. Springer. Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. 2019. Meta-weightnet: Learning an explicit mapping for sample weighting. *Advances in neural information processing systems*, 32. Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. 2022. Learning from noisy labels with deep neural networks: A survey. *IEEE* Transactions on Neural Networks and Learning Systems. Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. 2014. Training convolutional networks with noisy labels. *arXiv* preprint arXiv:1406.2080. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Cheng Tan, Jun Xia, Lirong Wu, and Stan Z Li. 2021. Co-learning: Learning from noisy labels with selfsupervision. 
In *Proceedings of the 29th ACM International Conference on Multimedia*, pages 1405–1413. Michael Tänzer, Sebastian Ruder, and Marek Rei. 2021. Memorisation versus generalisation in pre-trained language models. *arXiv preprint arXiv:2105.00828*. Brendan Van Rooyen, Aditya Menon, and Robert C Williamson. 2015. Learning with symmetric label noise: The importance of being unhinged. *Advances* in neural information processing systems, 28. Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, and Jingjing Liu. 2020. Infobert: Improving robustness of language models from an information theoretic perspective. arXiv preprint arXiv:2010.02329. Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. 2019. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 322–330. Fusheng Wei, Han Qin, Shi Ye, and Haozhen Zhao. 2018. Empirical study of deep learning for text classification in legal document review. In *2018 IEEE* International Conference on Big Data (Big Data), pages 3317–3320. IEEE. Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. 2020. Combating noisy labels by agreement: A joint training method with co-regularization. In *Proceedings of the IEEE/CVF conference on computer vision* and pattern recognition, pages 13726–13735. Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, and Masashi Sugiyama. 2020. Dual t: Reducing estimation error for transition matrix in label-noise learning. *Advances in neural information processing systems*, 33:7260–7271. Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. 2019. How does disagreement help generalization against label corruption? In *International Conference on Machine* Learning, pages 7164–7173. PMLR. Shengnan Zhang, Yuexian Hou, Benyou Wang, and Dawei Song. 2017. Regularizing neural networks via retaining confident connections. *Entropy*, 19(7):313. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28. Bin Zhu, Zhaoquan Gu, Le Wang, Jinyin Chen, and Qi Xuan. 2022a. Improving robustness of language models from a geometry-aware perspective. *arXiv* preprint arXiv:2204.13309. Dawei Zhu, Michael A Hedderich, Fangzhou Zhai, David Ifeoluwa Adelani, and Dietrich Klakow. 2022b. Is bert robust to label noise? a study on learning with noisy labels in text classification. *arXiv preprint* arXiv:2204.09371. ## A Related Work In this section, we briefly review previous work on the problem of robust learning with label noises and focus on applying such methods to pre-trained language models like BERT. Noise matrix is a transition matrix added to the end of DNNs to model the underlying label transition pattern of the noisy dataset (Sukhbaatar et al., 2014; Bekker and Goldberger, 2016; Patrini et al., 2017; Hendrycks et al., 2018; Yao et al., 2020; Jindal et al., 2019). Patrini et al. (2017) proposed ForwardCorrection, which corrects wrong labels during forward propagation by multiplying the estimated noise transition matrix with the model's outputs. To obtain trustworthy noise matrices, Hendrycks et al. (2018) proposed gold loss correction that corrects loss using available trusted labels and then turns the confusion matrices of trusted labels into accurate transition matrices. Different from them, Jindal et al. 
(2019) trained transition matrices with an l2 regularization which is not necessarily normalized into probability matrices. However, noise matrix methods have shown large estimation errors when only noisy data is available or when the noise level is high, which may not be feasible in real-world settings. Regularization is widely used to prevent the overfitting of DNNs. Label smoothing (Szegedy et al., 2016) is such a method via softening ground truth labels by mixing the one-hot label with a uniform vector. As shown by Lukasik et al. (2020) and Zhang et al. (2017), label smoothing is an effective way to combat label noises. Jenni and Favaro (2018) proposed bilevel learning, which introduces a bilevel optimization using a clean validation dataset to regularize the overfitting of DNNs. However, the extended features introduced by regularization slow down the convergence of training, and the performance gain is very limited unless the models are deeper. Robust Loss methods re-design the loss function to mitigate the negative impacts brought by incorrect labels. Kumar et al. (2020) has mathematically defined the pre-requisite for robust loss on multi-label classification tasks. Ghosh et al. (2017) showed the mean absolute error (MAE) loss satisfies such a condition and helps models achieve better generalization ability than the traditional cross-entropy loss. Wang et al. (2019) introduced symmetric cross entropy (SCE) loss that combines a reverse cross-entropy loss with the standard cross-entropy loss, achieving higher performance than previous methods. However, robust loss methods perform well only in simple cases where data patterns are easy to learn or the number of classes is small. Co-Training is a family of methods that use two DNNs to help combat incorrect labels. Decoupling (Malach and Shalev-Shwartz, 2017) maintains two networks and updates them using instances with different predictions. Similarly, Co-teaching (Han et al., 2018) also trains two networks, but it selects small-loss data to teach the peer network, which is improved by Co-Teaching+ (Yu et al., 2019) through selecting small-loss data from only disagreement data. JoCoR (Wei et al., 2020) maintains two networks too, but it trains them together with a joint loss to maximize their agreement. However, the differences between two networks of the same architecture are very limited, especially during the later training period, so they can provide only slightly different views of the data. To solve such problems, Tan et al. (2021) proposed Co-Learning that introduced self-supervised learning to assist supervised learning of the classifier. However, the extra-introduced optimization largely slows down the convergence of the model. Language models' robustness to label noises has not been as widely studied as Computer Vision models. Several attempts (Moradi and Samwald, 2021; Zhu et al., 2022a; Wang et al., 2020) have been made to improve language models' robustness to input perturbations, but they mainly focused on noisy data instead of noisy labels. Zhu et al. (2022b) showed that for text classification tasks with modern NLP models like BERT, existing noise-handling methods, including some methods mentioned above, do not always improve its performance under noisy labels of different noise rates, and may even deteriorate it. Jindal et al. (2019) proposed a CNN-based architecture that incorporates a non-linear processing layer to model the label noise statistics. 
But this method changes the commonly used NLP architecture, which makes pre-trained language models unusable, so it may not be applicable to many real-world corpora.

## B Experiment Details

## B.1 Baseline Descriptions

We compare SaFER with the following baselines:

- Without Noise-Handling, which does not apply any noise-handling modules to the classifier's training.
- Noise Matrix (Sukhbaatar et al., 2014), which appends a noise matrix after BERT's output to transform the clean label distribution to the noisy one.
- Noise Matrix with Regularization (Jindal et al., 2019), which also appends a noise matrix after BERT's output, but the matrix is trained with l2 regularization.
- Label Smoothing (Szegedy et al., 2016), which mixes each one-hot label with a uniform vector.
- Robust Loss, which leverages a robust loss function (Mean Absolute Error loss (Ghosh et al., 2017)) or designs a new loss function (Symmetric Cross Entropy loss (Wang et al., 2019)).
- Co-Teaching (Han et al., 2018), which trains two networks to select "clean" training subsets for each other.
- Co-Learning (Tan et al., 2021), which trains a projector along with the classifier to apply constraints on the classifier's learning.

The time per training step shown in Table 1 and Table 2 is calculated by averaging the total training duration across all label noise types on each dataset, including the time for model loading, training, and validation.

## B.2 Hyperparameters

We set the following hyperparameters for the SaFER evaluation:

| Field | Value |
|--------------------------------------|---------|
| BERT dropout rate | 0.1 |
| number of training steps (stage 1) | 5000 |
| number of training steps (stage 2) | 5000 |
| training batch size | 32 |
| evaluation batch size | 64 |
| evaluation frequency | 25 |
| feature dimension (projection) | 128 |
| number of batches for LID estimation | 10 |
| initial LID calculation step | 5 |
| LID window size | 5 |
| BERT learning rate | 2e-5 |
| SGD momentum | 0.9 |
| SGD dampening | 0 |
| SGD weight decay | 0.0005 |
| SGD nesterov | True |
| patience for early stopping | 25 |

## C Ablation Study Results

Here, we report the ablation study results discussed in Section 3.3. We compare SaFER with one-stage pure BERT and one-stage BERT with unsupervised learning. The results are shown in Figure 3. We also studied whether to apply early stopping to stage 2 to find the converging point of the classifier; the result is shown in Table 4. Finally, we investigate the two early stopping strategies of fine-tuning the BERT classifier on noisy sets and evaluating on clean sets; the result is shown in Figure 4.

Table 4: Ablation study on the two-stage scheme.

| stage 1 | stage 2 | early stop stage 2? | accuracy (%) |
|-----------|-----------|-----------------------|----------------|

![11_image_1.png](11_image_1.png)

![11_image_0.png](11_image_0.png)

![12_image_0.png](12_image_0.png)

## D Algorithm

We demonstrate the full algorithm of SaFER in Algorithm 1.

## Algorithm 1 SaFER: Two-Stage Finetuning

Input: Noisy training corpus Cb, pre-trained BERT backbone f(·; θ1), batch size b, number of training steps T1, T2 for stages 1 and 2
Output: Trained text classifier f · g

{**STAGE 1**: Fast Warming Up}
1: Initialize classification head g(·; θ2).
2: for each t from 0 to T1 − 1 do
3: Sample a batch from Cb: x ← {x_i}_{i=1}^b, ŷ ← {ŷ_i}_{i=1}^b
4: Obtain predictions: u ← g(f(x; θ1); θ2)
5: Update θ1, θ2 using Eq. 3.
6: Calculate batch LID scores using Eq. 2 and get the average score lid_avg.
7: if lid_avg reaches turning point **then**
8: Save θ1, θ2 and **break**.
9: **end if**
10: **end for**

{**STAGE 2**: Combating Label Noises}
11: Initialize projection head h(·; θ3).
12: for each t from 0 to T2 − 1 do
13: Sample a batch from Cb and get a copy: x ← {x_i}_{i=1}^b, ŷ ← {ŷ_i}_{i=1}^b; x′ ← x
14: Encode sentences with BERT: z ← f(x; θ1), z′ ← f(x′; θ1)
15: Get predictions from classifier g: u ← g(z; θ2)
16: Get projections from projector h: v ← h(z; θ3), v′ ← h(z′; θ3)
17: Update θ2 using Eq. 5.
18: Update θ1, θ2 using Eq. 3.
19: Update θ1, θ3 using Eq. 4.
20: **end for**
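Equations 2–5 referenced in Algorithm 1 are not reproduced in this excerpt. The quantity monitored in stage 1, however, is the batch Local Intrinsic Dimensionality (LID), for which the standard choice in the noisy-label literature is the maximum-likelihood estimator over k-nearest-neighbour distances (Ma et al., 2018b). The minimal Python sketch below assumes Eq. 2 takes that standard form and adds an illustrative turning-point check of the kind the stage-1 early stop relies on; the estimator form, the window-based stopping heuristic, and the 5% tolerance are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def batch_lid(features: np.ndarray, k: int = 20) -> float:
    """Average Local Intrinsic Dimensionality (LID) of a batch of feature
    vectors, using the MLE estimator over k-nearest-neighbour distances
    (assumed form of Eq. 2). `features` has shape (batch_size, dim)."""
    # Pairwise Euclidean distances within the batch.
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    lids = []
    for row in dists:
        # Drop the zero self-distance and keep the k nearest neighbours.
        d = np.sort(row)[1:k + 1]
        d = np.maximum(d, 1e-12)
        # LID_MLE(x) = -[(1/k) * sum_j log(d_j / d_k)]^{-1}
        lids.append(-1.0 / (np.mean(np.log(d / d[-1])) - 1e-12))
    return float(np.mean(lids))

def reached_turning_point(lid_history, window: int = 5, tolerance: float = 1.05) -> bool:
    """Heuristic check for the stage-1 stopping condition: stop warming up
    once the windowed LID average rises clearly above its historical minimum,
    which Ma et al. (2018b) associate with the onset of fitting noisy labels."""
    if len(lid_history) < 2 * window:
        return False
    windowed = [np.mean(lid_history[i:i + window])
                for i in range(len(lid_history) - window + 1)]
    return windowed[-1] > tolerance * min(windowed[:-1])
```

In the configuration of Appendix B.2, LID is estimated over 10 batches and smoothed with a window of 5 before such a check is applied; the 5% tolerance above is an arbitrary value chosen only to make the sketch concrete.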
kim-etal-2023-chemical
Chemical Language Understanding Benchmark
https://aclanthology.org/2023.acl-industry.39
In this paper, we introduce the benchmark datasets named CLUB (Chemical Language Understanding Benchmark) to facilitate NLP research in the chemical industry. CLUB comprises 4 datasets covering text and token classification tasks. As far as we know, it is one of the first chemical language understanding benchmarks that includes tasks for both patents and literature articles and is provided by an industrial organization. All the datasets are internally built by chemists from scratch. Finally, we evaluate the datasets on various language models based on BERT and RoBERTa, and demonstrate that models perform better when the domain of the pretrained model is closer to the chemistry domain. We provide a baseline average score of 0.8054 for our benchmark, and we hope this benchmark will be used by many researchers in both industry and academia.
# Chemical Language Understanding Benchmark Yunsoo Kim Hyuk Ko Jane Lee Hyun Young Heo Jinyoung Yang Sungsoo Lee Kyu-Hwang Lee LG Chem {kys.930303, kohyuk, janelee, hyheo11, yang.jy, ssoolee, amine}@lgchem.com 42 The benchmark datasets such as GLUE and 1 **Abstract** 43 SuperGLUE played a pivotal role in facilitating 2 In this paper, we introduce the benchmark 44 the advancement of NLP using language models 3 datasets named CLUB (Chemical 45 (Wang *et al.*, 2018 and Wang *et al.*, 2019). This 4 Language Understanding Benchmark) to 46 has inspired efforts to create benchmark datasets 5 facilitate NLP research in the chemical 47 in the science domain as well (Yu Gu *et al.*, 2020). 6 industry. We have 4 datasets consisted of 48 However, these attempts are limited within the 7 text and token classification tasks. As far as 49 field of biology and medicine. 8 we have recognized, it is one of the first 50 In chemistry, there are few datasets available, 9 examples of chemical language 51 however, as far as we know there are no 10 understanding benchmark datasets 52 benchmark datasets that include tasks for both 11 consisted of tasks for both patent and 53 literature articles and patents (Mysore *et al.*, 2019, 12 literature articles provided by industrial 54 Friedrich *et al.*, 2020, He *et al.*, 2021). Given the 13 organization. All the datasets are internally 55 predominant reliance on patents in the chemical 14 made by chemists from scratch. Finally, we 56 industry's research, especially in the early stages 15 evaluate the datasets on the various 57 of product development, it is important to have 16 language models based on BERT and 58 datasets with patent documents to enable 17 RoBERTa, and demonstrate the model 59 language models to comprehend the distinctive 18 performs better when the domain of the pre60 patent writing style, thereby performing better on 19 trained models are closer to chemistry 61 tasks with patent documents. 20 domain. We provide baselines for our 62 On the other hand, academic literature often 21 benchmark as 0.7818 in average, and we 22 hope this benchmark is used by many 63 serves as the source of information that leads to 23 researchers in both industry and academia. 64 new ideas for experimentation. Thus, it is critical 24 The CLUB can be downloaded at 65 to build a language model that understands both 25 https://huggingface.co/datasets/bluesky333/che 66 literature articles and patents and benchmark 26 mical_language_understanding_benchmark. 67 datasets with texts from both patents and papers 68 for the evaluation. 27 1 **Introduction** 69 In this paper, we present Chemical Language 70 Understanding Benchmark (CLUB) to facilitate 28 Transformer is the prevalent network architecture 71 NLP research in the chemical industry, especially 29 in natural language processing (NLP) (Vaswani et 72 the language model pre-training. CLUB consists of 30 al., 2017). It uses self-attention to capture each 73 two datasets for patents and two datasets for papers. 31 word's influence on another in a given text. 74 In terms of tasks, it includes two datasets for token 32 Leveraging this architecture, recent advances in 75 classification such as chemical named entity 33 pre-training language models has reached state-of76 recognition, and two datasets for text classification 34 the-art performances on many NLP benchmark 77 such as patent area classification. All these datasets 35 datasets, including results that surpassed human 78 are internally made by chemists. 
We do not rely on 36 performance (Wang *et al.*, 2019). Such 79 any preexisting publicly available datasets or 37 advancements in language models and NLP 80 shared tasks. Finally, we provide the performance 38 technologies can potentially streamline and 81 of various language models including the ones pre39 simplify the labor-intensive work for the literature 82 trained with chemistry literature articles and 40 and patent analysis, which are crucial in the 83 patents as the baselines for our benchmark datasets. 41 research and development domain. | Average | | | | | | |-----------------------------------------------------------------|-----------------------------------------------------------|-------------------------------------------------|---------------------------------------------------------------------|-----|-----| | Sample | | | | | | | Tasks | Class Group | token | | | | | Type | Class name | Definition | Train | Dev | | | (source corpus) | length | | | | | | (Number) | (std) | Household | Patents for products used in household such as PET bottles | 436 | 120 | | PETRO | | | | | | | CHEMICAL | Paragraph | 448.19 | | | | | (2,775) | (403.81) | | | | | | (Patent) | Construct | Patents for products used in | | | | | construction such as PVC pipes | 77 | 25 | | | | | Automobile | Patents for products used in automobile such as Tires | 312 | 89 | | | | HouseConst | Patents for products used in household and construction | 481 | 93 | | | | IndustConst | Patents for products used in industrial and construction | 274 | 62 | | | | Catalyst | Patents for catalyst used for production | 334 | 94 | | | | Text CLS | Process | Patents for production process of the products | 306 | 72 | | | Biodegrad_Poly | biodegradable polymer (plastic material) | 553 | 151 | | | | RHEOLOGY | Sentence | 55.04 | | | | | (Journal) | (2,017) | (16.46) | the crystal structure of polymer | | | | Poly_Struc | which is related with mechanical | 421 | 105 | | | | properties | | | | | | | Biodgrad_Prop | biodegradable property of polymer | 470 | 97 | | | | Mechanical_Prop | mechanical property of polymer | 90 | 31 | | | | rheological property of polymer | | | | | | | Rheological_Prop | which is related with polymer | 78 | 19 | | | | processability | | | | | | | Precatalyst | Pre-catalyst form of metallocene catalyst | 365 | 71 | | | | CATALYST | Sentence | 42.07 | | | | | (Patent) | (4,663) | (14.59) | Include monomers and | | | | comonomers that participate in | | | | | | | Olefin | 947 | 153 | | | | | the synthesis of supported catalyst | | | | | | | Solvent | A solvent that creates a reaction environment | 1,287 | 356 | | | | Additives necessary for the catalyst synthesis reaction | | | | | | | Additive | 402 | 131 | | | | | include scavengers and cocatalysts. | | | | | | | Token CLS | Support | Support material for synthesis | 417 | 83 | | | Lithium compound used for cathode electrode among the | | | | | | | Cathode_Material | 1,411 | 402 | | | | | components of lithium ion battery | | | | | | | BATTERY | Sentence | 40.73 | | | | | (Journal) | (3,750) | (10.79) | Materials coated for the purpose of improving structural stability | | | | Coating_Material | 1,510 | 359 | | | | | and chemical resistance of cathode materials | | | | | | | Method for coating the coating | | | | | | | Coating_Method | material on the surface of the | 409 | 134 | | | | cathode material | | | | | | | Table 1: CLUB datasets for text and token classification (CLS). 
| | | | | | ## 85 2 Tasks 137 2.2 Token Classification 138 Token classification, which includes named entity 86 The CLUB Benchmark is created from scratch to 139 recognition task, identifies tokens belonging to 87 evaluate language models that understand the 140 defined classes. Considering our interests, we 88 fields of chemistry and materials science. The 141 defined the CATALYST class group and the 89 benchmark dataset includes two types of tasks: text 142 BATTERY class group as shown in Table 1. We 90 classification and token classification. To evaluate 143 created the named entity recognition benchmark 91 the representation power of the language model for 144 dataset based on these definitions. The labeling 92 both patents and literature articles, each task 145 was performed by expert researchers with over 93 consisted of a dataset created from the patent text 146 five years of experience in relevant fields. The 94 and a dataset created from the paper text. Various 147 labeling was done in IOB format (inside, outside, 95 topics such as polymers, rheology, catalysts, and 148 beginning). The labeled data was then converted 96 batteries were selected to evaluate different fields 149 into JSON format with "id", "tokens", and "labels" 97 of chemistry and materials science. The detailed 150 as keys. 98 composition of the data set is summarized in Table 151 We preprocess the token classification datasets 99 1. 152 to adjust the sentence length to be less than the 153 maximum sequence length. As for named entity 100 2.1 **Text Classification** 154 recognition, each token has labels, and tokens that 155 come after the maximum sequence length would 101 Text classification task is to assign a sentence or 156 be discarded. Thus, the model would not be able 102 document to a proper class. In this paper, we 157 to learn from those discarded tokens. We 103 present two classification datasets: RHEOLOGY 158 minimized this issue by making the distribution of 104 for sentence classification and 159 the sequence length more like the gaussian 105 PETROCHEMICAL for document classification. 160 distribution (Appendix A). 106 These datasets comprise corpora from both 107 patents and journal articles with a focus on the 161 162 **CATALYST** is a dataset for recognizing 108 topics of polymers, rheology, and overall 163 materials involved in catalyst synthesis reactions 109 petrochemicals. Each dataset is available in JSON 164 in the full text of patents. Pre-catalyst, additive, 110 format with "id", "sentence", and "labels" as keys. 165 olefin, solvent, and supporting material are 111 166 substances that participate in this reaction, and 112 **RHEOLOGY** sentence classification dataset 167 these are defined as classes. "Pre-catalyst" is the 113 contains the five groups that represent the 168 main substance to make the catalyst. "Additives" 114 polymer structures and properties, especially for 169 are added to make the polymer with different 115 biodegradable polymers. It consists of 2,017 170 characteristics. "Olefin" is the monomer that 116 sentences collected from the research paper. Each 171 makes the polymer using the catalyst. "Solvent" is 117 sentence of the RHEOLOGY classification 172 for the polymerization of the monomer to the 118 dataset is annotated by experts manually. The 173 polymer for the catalyst. "Supporting material" is 119 detailed explanation of each group is presented in 174 used to support the catalyst to do the 120 Table 1. 175 polymerization better as well as more stable. 
121 122 **PETROCHEMICAL** dataset categorizes patents 176 177 **BATTERY** is a dataset for recognizing cathode 123 into seven groups within the petrochemical 178 materials from literature articles related to 124 industry. Each group of patents accounts for 179 lithium-ion batteries including all-solid-state 125 important parts of the industry. The petrochemical 180 batteries. There are four key components of a 126 industry uses **catalysts** to make the final polymer 181 battery: cathode material, anode material, 127 products for different applications such as PET 182 separator, and electrolyte. "Cathode material" 128 bottles (**household applications**), rubber 183 refers to the lithium compound used in the 129 (automobile **applications**), and PVC plastics 184 positive electrode of a battery and is the most 130 (construction **applications**). This production is 185 important element in a battery because it has a 131 done on a factory scale, so it has its **production** 186 decisive effect on the energy density, power 132 **process**. The seven groups consist of 5 187 output, and cycle life of the battery. This dataset 133 applications: 1) household, 2) automobile, 3) 188 also has "coating material" and "coating method" 134 construction, 4) household & construction, and 5) 189 classes which are material and method to coat the 135 automobile & construction. The other two groups 190 surface of the cathode material. 136 are catalysts and processes. ## 191 3 Dataset Statistics 238 4 Methods 192 All datasets have been divided into a training set 239 4.1 **Baseline Models** 193 and a development set (also known as the 240 **BERT-Base** We use the BERT-base weights 194 evaluation set), following an 80/20 split ratio. 241 released on Hugging Face model repository 242 (Devlin *et al.*, 2018). Both **cased** and **uncased** 195 3.1 **PETROCHEMICAL dataset** 243 versions of the model are used. We refer to each 196 The PETROCHEMICAL dataset is composed of 244 version as **BERT-cased** and **BERT-uncased** 197 2,775 paragraphs. As the dataset is created with 245 respectively throughout our papers. The model is 198 titles, abstracts, and claims of patents, so it has the 246 pre-trained with a corpus made up of BooksCorpus 199 average paragraph length of 448.19 tokens, which 247 and text parts of English Wikipedia for 1 M steps. 200 is considerably longer than the other three datasets. 248 The corpus is about 16GB. The pre-training batch 201 Also, the standard deviation for the paragraph 249 size is 256 sequences. This model utilizes a 202 length is 403.81 tokens, which is also larger than 250 wordpiece vocabulary. The vocab size is 28,894. 203 the others. For the seven classes of the dataset, the 251 204 respective counts of paragraphs are as follows: 252 **BioBERT**We use **BioBERT-v1.2** weights released 205 "Household" - 556, "Construct" - 102, 253 on Hugging Face model repository (Lee *et al.*, 206 "Automobile" - 401, "HouseConst" - 574, 254 2020). This is a BERT-base-cased model pre207 "IndustConst" - 336, "Catalyst" - 428, and 255 trained with PubMed abstracts from the BERT- 208 "Process" - 378. 256 base-cased initial checkpoints. It was trained for 257 200K steps on PubMed abstracts, 270K steps on 209 3.2 **RHEOLOGY dataset** 258 PubMed Central (PMC) full texts, and another 1 M 210 The RHEOLOGY dataset is made up of 2,017 259 steps on PubMed abstracts. The pre-training corpus 211 sentences with an average sentence length of 260 is about 25GB. The pre-training batch size is 192. 
212 55.03 tokens. The standard deviation of the 261 As a continued pre-trained model, it uses the same 213 sentence length is 16.46 tokens. 704 sentences 262 vocabulary as the **BERT-base-cased** model. 214 were labeled as "Biodegrad_Poly" class and 526 263 215 sentences were labeled as "Poly_Struc". The 264 **SciBERT** We use **sciBERT-scivocab-uncased** 216 "Biodegrad_Prop", "Mechanical_Prop", and 265 released on Hugging Face model repository 217 "Rheological_Prop" classes, which are classes 266 (Beltagy *et al.*, 2019). This is a pre-trained BERT 218 related to material's properties, were labeled with 267 model with 1.14 M Semantic Scholar papers, 219 567, 121, and 97 sentences, respectively. 268 which is comprised of computer science (18%) and 269 biomedical domain (82%). It differs from 220 3.3 **CATALYST dataset** 270 BioBERT as it is pre-trained from scratch. The 271 papers are full texts and resulting in a corpus size 221 The CATALYST dataset consists of 4,663 272 of 20GB. The pre-training batch size and steps are 222 sentences. The average sentence length is 42.07 273 unknown. It has its own wordpiece vocabulary 223 tokens with 14.59 tokens for standard deviation. 274 made from the pre-training corpus. The vocabulary 224 "Solvent" class was labeled the most with 1,643 275 has more science terms. The vocab size is 30,990. 225 times, followed by "Olefin" class which as labeled 226 1,100 times. "Precatalyst", "Addtive", and 276 277 **RoBERTa**We use **RoBERTa-base** model released 227 "Support" were labeled 436, 533, and 500 times, 278 on Hugging Face model repository (Liu *et al.*, 228 respectively. 279 2019). It is an improvement of BERT model with a 229 3.4 **BATTERY dataset** 280 larger pre-training dataset and better optimized 281 hyperparameter settings. The model is pre-trained 230 The BATTERY dataset consists of 3,750 282 with a 160GB corpus made up of BERT pre231 sentences, and the average sentence length is 283 training corpus plus News and Web contents 232 40.73 tokens with 10.79 tokens as standard 284 crawled. It is trained for 1 M steps. The pre-training 233 deviation. The token classification breakdown 285 batch size is 256 sequences. The model uses byte 234 shows that "Cathode_Material" and 286 pair encoding (BPE) vocabulary, which is different 235 "Coating_Material" classes were labeled 1,813 287 from BERT's wordpiece vocabulary. The vocab 236 times and 1,869 times, respectively. Meanwhile, 288 size is 50,000. 237 the "Coating_Method" class was 543 times. | Text classification (Accuracy) | Token classification (F1) | | | | | |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------|---------|---------|--------|--------| | Task | RHEOLOGY | PETRO | | | | | CHEMICAL | CATALYST | BATTERY | Average | | | | BERT-cased | 0.7970 | 0.8099 | 0.6601 | 0.7532 | 0.7550 | | BERT-uncased | 0.7921 | 0.8105 | 0.6944 | 0.7571 | 0.7635 | | RoBERTa | 0.7958 | 0.7990 | 0.6899 | 0.7658 | 0.7626 | | BioBERT | 0.7978 | 0.8086 | 0.7092 | 0.7636 | 0.7698 | | SciBERT | 0.7938 | 0.8045 | 0.7314 | 0.7602 | 0.7724 | | RoBERTa-PM-M3 | 0.7983 | 0.8079 | 0.7194 | 0.7815 | 0.7767 | | RoBERTa-lit | 0.8017 | 0.8126 | 0.7332 | 0.7772 | 0.7811 | | RoBERTa-lit-pat | 0.7968 | 0.8205 | 0.7323 | 0.7777 | 0.7818 | | Table 2: Performance of the model for the benchmark tasks. 
The evaluation for the text classification tasks was done using accuracy and the evaluation of the token classification tasks was done using macro-average | | | | | | 289 **RoBERTa-PM-M3** We use **RoBERTa-base-PM** 290 weights released on Hugging Face model 291 repository (Lewis *et al.*, 2020). It is a RoBERTa292 **base** model pre-trained with a text corpus made of 293 27GB of PubMed abstracts, 60GB of PMC full 294 texts, and 3.3 GB of the Medical Information Mart 295 for Intensive Care (MIMIC-III). The model is 296 trained for 500K steps on the corpus with a batch 297 size of 8,192 sequences. It uses byte-pair encoding 298 vocabulary made from the corpus, so it has a 299 different BPE encoding vocabulary from 300 RoBERTa-base. The vocabulary has more 301 biomedical terms. The vocab size is 50,000. 302 316 corpus for 1 epoch. 317 318 **RoBERTa-lit-pat** We use **RoBERTa-lit** weights 319 as the initial checkpoint to pre-train the model this 320 time with chemistry patents. We collected the 321 patents using USPTO BulkDownload. We filtered 322 the chemical patents using the CPC code. For the 323 filtered ones, abstracts, claims, and embodiment 324 texts were used as the training corpus together 325 with the **RoBERTa-lit**'s corpus. We train the 326 model with the corpus for 1 epoch. 327 328 **RoBERTa-lit** and **RoBERTa-lit-pat** were pre-329 trained with NVIDIA V100 GPU and the 330 hyperparameter setting follows the pre-training 331 setup for **RoBERTa-PM-M3**. We also used 332 mixed precision for training. We used the masked 333 language model objective for the pre-training. 334 335 We expect that by pre-training the models with 336 chemistry data, the models can learn the 337 chemistry domain knowledge better and thus 338 perform better on the CLUB benchmark. 339 303 4.2 **Pre-training** 304 For the chemistry pre-training, we gathered a 305 large amount of chemistry patents and literature 306 articles to train two different versions of models. 340 4.3 **Finetuning Language Models** 307 **RoBERTa-lit** We use **RoBERTa-PM-M3** 308 weights as the initial checkpoint to pre-train the 341 For each dataset, we fine-tuned each models for 309 model with chemistry articles. We collected the 342 10 epochs with a 5e-05 learning rate on a single 310 abstracts of the articles using Open Academic 343 V100 GPU. We used 0.1 warm-up ratio, and 311 Graphs and used the chemistry field of study to 344 cosine with restarts as the learning scheduler type. 312 filter the ones that belong to the chemistry domain 345 The training batch size was 128 and the evaluation 313 (Tang *et al.*, 2008 and Sinha *et al.*, 2015). For the 346 batch size was 128. The maximum input length 314 filtered ones, all the abstracts were used as the 347 was 256. AdamW was used as the optimizer with 315 training corpus. We train the model with the 348 a weight decay of 0.01. We used mixed precision 349 for efficient training. We fine-tuned the model for 350 10 different seed initializations. 351 4.4 **Evaluation** 352 We evaluated all models using the accuracy for 353 text classification tasks and the macro-average F1 354 score for token classification tasks. We chose the 355 accuracy as the evaluation metric for the text 356 classification due to its interpretability in 357 measuring the effectiveness of the models. For 358 token classification tasks, the use of the IOB 359 scheme, which resulted in the "O" label being the 360 dominant class, limited us from using the 361 evaluation metric as text classification tasks. 
To 362 provide a more balanced evaluation, we computed 363 the F1 score of each token class excluding the "O" 364 class, and used the macro-average of these F1 365 scores as the evaluation metric. For both types of 366 tasks, the performance was averaged over ten runs 367 with different seed initializations to reduce 368 variance caused by randomness. ## 419 6 Conclusion 369 5 Results And Discussion 400 that RoBERTa-lit model, solely pre-trained on 401 academic papers, showed the best results in the task 402 with patents. This task involved labeling only the 403 embodiment section of the patent. The terminology 404 used in the embodiment part of the patent is closer 405 to academic language than the language used in 406 patent claims. This could explain why a model 407 trained only on articles could perform better in this 408 task. 409 For the BATTERY task, RoBERTa-PM-M3 410 model had the best performance, closely followed 411 by RoBERTa-lit-pat model. Notably RoBERTa-lit 412 and RoBERTa-lit-pat models still showed good 413 average performance despite only being pre-414 trained for one epoch. It is plausible that the 415 performance of RoBERTa-lit-pat improves further 416 with additional training epochs. Due to our GPU 417 infrastructure limitations, we leave this for future 418 work. 420 Chemical Language Understanding Benchmark 370 The performance of each model on the benchmark 421 (CLUB) is the first benchmark in the chemistry 371 tasks is shown in Table 2. In general, our 422 industry aimed at chemical language model 372 RoBERTa-lit-pat model outperformed the other 423 evaluation with tasks for both patents and journal 373 models on average across the tasks. The result of 424 articles. The introduction of this benchmark is 374 BioBERT models pre-trained with a bio-related 425 expected to catalyze research in natural language 375 corpus was better than that of BERT base models, 426 processing, particularly in information extraction, 376 highlighting the impact of domain specific pre427 within the chemistry domain. 377 training. SciBERT model pre-trained with a broad 428 In the course of establishing baseline 378 scientific literature articles performed well, 429 performance for the CLUB, we tested existing pre379 especially in CATALYST task, though it still had 430 trained models as well as our novel pre-trained 380 a lower performance than RoBERTa models pre431 models. Remarkably, the RoBERTa model pre381 trained with chemistry corpus. RoBERTa-PM-M3 432 trained on cheimcal patents and literature articles, 382 model outperformed other models in the 433 reached the highest average score, 0.7818. This 383 BATTERY task, but its overall performance was 434 performance highlights the advantage of pre384 lower than that of the RoBERTa-lit-pat model. 435 training models with a corpus closely aligned with 385 In the text classification task, RoBERTa-lit 436 the target domain. 386 model was the best model in the RHEOLOGY task 437 Our benchmark provides a powerful tool for 387 and RoBERTa-lit-pat model score the highest in the 438 evaluating language models' learning capacity in 388 PETROCHEMICAL task. This suggests that 439 the chemistry context. In addition, the tasks in our 389 inclusion of patents in pre-training yields better 440 benchmark can be leveraged to accelerate the 390 performance in tasks with patent documents. 
Asthe 441 literature and patent analysis by automatically 391 PETROCHEMICAL dataset includes titles, 442 extracting information such as new chemical 392 abstracts, and representative claims of patents, the 443 molecules and experiment settings. 393 terminology used in the dataset is quite different 444 Thus, these tasks can be the foundation of an 394 from the terminology used in other datasets made 445 information extraction based expert system. This 395 up of literature articles. This is due to the nature of 446 system would generate structured knowledge from 396 patents to protect an invention, leading them to be 447 a large volume of papers and patents and help 397 written in a more general manner to encompass a 448 researchers to conduct their experiments on time 398 broader patent space. 449 without falling behind the research trends. 399 In the CATALYST task, it was very interesting 450 Our benchmark sets the foundation for future 451 advancements in chemical language understanding. 500 Alex Wang, Yada Pruksachatkun, Nikita Nangia, 452 It contributes to the acceleration of scientific 501 Amanpreet Singh, Julian Michael, Felix Hill, Omer 453 discovery in the field by integrating natural 502 Levy, Samuel R. Bowman. 2019. SuperGLUE: A 454 language processing into chemical research and 503 Stickier Benchmark for General-Purpose Language 455 development. 504 Understanding Systems. arXiv preprint 505 arXiv:1905.00537. ## 456 Limitations 457 Because we were doing the manual labeling with 458 experts in the field, we were only limited to two 459 types of tasks: token classification and text 460 classification. We hope to expand the benchmark to 461 include other types of tasks such as summarization, 462 question and answering, and sentence similarity in 463 the future. Sentence similarity for patents is the task 464 we aim to add for the next version because it can 465 be used to find the infringement in patents. 466 While the CLUB provides a robust benchmark for 467 evaluating language models in the context of 468 chemistry, it is not without its limitations. The 469 present version of CLUB only includes two types 470 of tasks: token classification and text classification. 471 This constraint arises primarily from the manual 472 labeling process which involved domain experts. 473 However, we aim to extend the benchmark in the 474 future to include a wider range of tasks such as 475 summarization, question answering, and sentence 476 similarity assessments. We are particularly 477 interested in the sentence similarity task for patents 478 as this could be leveraged for identifying potential 479 patent infringements. ## 480 Acknowledgements 532 understanding. arXiv preprint arXiv:1810.04805. 481 We express our sincere gratitude to the anonymous 533 Jiayuan He, Dat Quoc Nguyen, Saber A. Akhondi, 482 reviewers who contributed their valuable time and 534 Christian Druckenbrodt, Camilo Thorne, Ralph 483 effort to provide insightful and constructive 535 Hoessel, Zubair Afzal, Zenan Zhai, Biaoyan Fang, 484 feedback on this work. Your detailed comments 536 Hiyori Yoshikawa, Ameer Albahem, Lawrence 485 and suggestions have greatly aided us in refining 537 Cavedon, Trevor Cohn, Timothy Baldwin, and 486 our work. We would also like to extend our thanks 538 Karin Verspoor. 2021. Chemu 2020: Natural 539 language processing methods are effective for 487 to everyone involved in the creation and 540 information extraction from chemical patents. 488 development of the labeled datasets. 
This work 541 *Frontiers in Research Metrics and Analytics*, 6, 489 would not have been possible without your 542 654438. 490 dedication and collaborative efforts. Last but not 543 Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, 491 the least, we are grateful for the continuous support 544 and Zhong Su. ArnetMiner: Extraction and Mining 492 and resources provided by our institutions, which 545 of Academic Social Networks. 2008. In *Proceedings* 493 have been fundamental in conducting this research. 546 *of the Fourteenth ACM SIGKDD International* 547 *Conference on Knowledge Discovery and Data* 494 **References** 548 *Mining (SIGKDD'2008)*. pages 990-998. 495 Alex Wang, Amanpreet Singh, Julian Michael, Felix 496 Hill, Omer Levy, Samuel R. Bowman. 2018. GLUE: 497 A Multi-Task Benchmark and Analysis Platform for 498 Natural Language Understanding. arXiv preprint 499 arXiv:1804.07461. 506 Annemarie Friedrich, Heike Adel, Federico Tomazic, 507 Johannes Hingerl, Renou Benteau, Anika 508 Marusczyk, and Lukas Lange. 2020. The SOFC- 509 Exp Corpus and Neural Approaches to Information 510 Extraction in the Materials Science Domain. In 511 *Proceedings of the 58th Annual Meeting of the* 512 *Association for Computational Linguistics*, pages 513 1255–1268, Online. Association for Computational 514 Linguistics. 515 Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, 516 Darrin Eide, Bo-June (Paul) Hsu, and Kuansan 517 Wang. 2015. An Overview of Microsoft Academic 518 Service (MAS) and Applications. In *Proceedings of* 519 *the 24th International Conference on World Wide* 520 *Web (WWW '15 Companion)*. ACM, New York, NY, 521 USA, pages 243-246. 522 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob 523 Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz 524 Kaiser, and Illia Polosukhin. 2017. Attention is all 525 you need. arXiv preprint arXiv:1706.03762. 526 Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. 527 SciBERT: A Pretrained Language Model for 528 Scientific Text. arXiv preprint arXiv:1903.10676. 529 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and 530 Kristina Toutanova. 2018. Bert: Pre-training of deep 531 bidirectional transformers for language 549 Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, 550 Donghyeon Kim, Sunkyu Kim, Chan Ho So and 551 Jaewoo Kan. 2020. BioBERT: a pre-trained 552 biomedical language representation model for 553 biomedical text mining. *Bioinformatics*, Volume 36, 554 Issue 4, February 2020, pages 1234–1240. 555 Patrick Lewis, Myle Ott, Jingfei Du, and Veselin 602 A **Adjust sentence length** 556 Stoyanov. 2020. Pretrained Language Models for 557 Biomedical and Clinical Tasks: Understanding and 603 Figure 1. shows the distribution of sentence lengths 558 Extending the State-of-the-Art. In *Proceedings of the* 604 in the dataset before and after the preprocessing. 559 *3rd Clinical Natural Language Processing* 605 After adjusting the sentence length, the sequence 560 *Workshop*, pages 146–157, Online. Association for 606 length distribution follows more of a Gaussian 561 Computational Linguistics. 607 distribution than before. In the case of CATALYST 562 Sheshera Mysore, Zach Jensen, Edward Kim, Kevin 608 dataset, the number of sentences was reduced from 563 Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey 609 12,368 to 4,663. However, in the case of 564 Flanigan, Andrew McCallum, Elsa Olivetti. 2019. 610 BATTERY dataset, there was no change in the 565 The Materials Science Procedural Text Corpus: 611 number of the sentences. 
We made this 566 Annotating Materials Synthesis Procedures with 612 preprocessing to minimize the number of tokens 567 Shallow Semantic Structures. arXiv preprint 613 that come after the maximum sequence. 568 arXiv:1905.06939. 614 569 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, 570 Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, 571 Luke Zettlemoyer, and Veselin Stoyanov. 2019. 572 RoBERTa: A Robustly Optimized BERT Pretraining 573 Approach. arXiv preprint arXiv:1907.11692. 574 Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto 575 Usuyama, Xiaodong Liu, Tristan Naumann, 576 Jianfeng Gao and Hoifung Poon. 2021. Domain577 Specific Language Model Pretraining for 578 Biomedical Natural Language Processing. In *ACM* 579 *Transactions on Computing for Healthcare*, pages 580 1–23. ![7_image_0.png](7_image_0.png) 616 617 581 582 583 584 585 586 587 588 590 591 592 593 594 595 596 597 598 599 600
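Since CLUB is distributed through the Hugging Face hub (the dataset id is given in the paper), a classification baseline of the kind reported above can be reproduced in a few lines. The sketch below fine-tunes a BERT-style encoder on one of the text-classification subsets with the hyperparameters reported in Section 4.3 (learning rate 5e-5, 10 epochs, batch size 128, maximum length 256, warm-up ratio 0.1, cosine-with-restarts scheduler, AdamW weight decay 0.01). The configuration name "rheology", the "train"/"dev" split names, and the label encoding are assumptions about the hub layout, not facts from the paper, so check the dataset card before running.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# The dataset id comes from the paper; the config and split names are assumptions.
club = load_dataset("bluesky333/chemical_language_understanding_benchmark",
                    "rheology")

# Build an integer label mapping from whatever label values the split carries.
label_names = sorted(set(club["train"]["labels"]))
label2id = {name: i for i, name in enumerate(label_names)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=len(label_names))

def preprocess(batch):
    # Text-classification examples carry "sentence" and "labels" keys (Section 2.1).
    enc = tokenizer(batch["sentence"], truncation=True, max_length=256)
    enc["labels"] = [label2id[l] for l in batch["labels"]]
    return enc

encoded = club.map(preprocess, batched=True)

# Hyperparameters reported in Section 4.3 of the paper.
args = TrainingArguments(output_dir="club_rheology", learning_rate=5e-5,
                         num_train_epochs=10, per_device_train_batch_size=128,
                         per_device_eval_batch_size=128, weight_decay=0.01,
                         warmup_ratio=0.1, lr_scheduler_type="cosine_with_restarts")
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=encoded["train"], eval_dataset=encoded["dev"])
trainer.train()
```

The paper averages such runs over ten random seeds and reports accuracy for text classification and macro-average F1 (excluding the "O" class) for token classification; the sketch covers only a single seed of the text-classification setting.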
park-etal-2023-hypert5
{H}yper{T}5: Towards Compute-Efficient {K}orean Language Modeling
https://aclanthology.org/2023.acl-industry.40
Pretraining and fine-tuning language models have become the standard practice in industrial natural language processing (NLP), but developing and deploying general-purpose language models without the abundant computation or data resources is a real-world issue faced by smaller organizations or communities whose main focus is languages with less accessible resources (e.g., non-English). This paper explores the sequence-to-sequence (seq2seq) language model architecture as a more practical and compute-efficient alternative to the decoder-oriented approach (e.g., GPT-3), accompanied by novel findings in compute-optimality analyses. We successfully trained billion-scale Korean-language seq2seq language models that strongly outperform other competitive models in Korean benchmarks. Moreover, we demonstrate that such language models can be more efficiently utilized by employing a heavy pre-finetuning strategy, by showcasing a case study on dialog-task adaptation. Our case study shows that adopting language models with more readily available domain-specific unlabeled data greatly improves fine-tuning data efficiency in low-resource settings.
# Hypert5: Towards Compute-Efficient Korean Language Modeling Dongju Park∗ Soonwon Ka∗ Kang Min Yoo1∗ **Gichang Lee Jaewook Kang** NAVER Cloud 1Seoul National University {dongju.park,soonwon.ka,kangmin.yoo}@navercorp.com ## Abstract Pretraining and fine-tuning language models have become the standard practice in industrial natural language processing (NLP), but developing and deploying general-purpose language models without the abundant computation or data resources is a real-world issue faced by smaller organizations or communities whose main focus is languages with less accessible resources (e.g., non-English). This paper explores the sequence-to-sequence (seq2seq) language model architecture as a more practical and compute-efficient alternative to the decoder-oriented approach (e.g., GPT-3), accompanied by novel findings in compute-optimality analyses. We successfully trained billion-scale Korean-language seq2seq language models that strongly outperform other competitive models in Korean benchmarks. Moreover, we demonstrate that such language models can be more efficiently utilized by employing a heavy pre-finetuning strategy, by showcasing a case study on dialog-task adaptation. Our case study shows that adopting language models with more readily available domain-specific unlabeled data greatly improves fine-tuning data efficiency in lowresource settings. ## 1 Introduction Pretraining large-scale Transformer-based language models and finetuning them for specific tasks have become the cornerstone of modern NLP pipelines. Among various Transformer-based language model architectures proposed in the field, generative decoder-based architectures, such as the GPT family (Brown et al., 2020), have gained more traction from their impressive ability to scale well into large language models (LLMs) (Kaplan et al., 2020; Chowdhery et al., 2022) and follow highlevel natural language instructions with few or even in the absence of demonstrations (Wei et al., 2022). ∗ Equal contributions. ![0_image_0.png](0_image_0.png) However, acquiring pretrained language models (PLMs) is a data- and compute-intensive process (Patterson et al., 2021), which many organizations cannot afford to pursue. This disparity is exacerbated for the communities of non-English or nonLatin languages (e.g., Korean) that have limited access to resources and share fewer linguistic features with English, making the cross-lingual transfer from the top language more challenging (Scao et al., 2022). As a more cost-efficient alternative to the pure generative architecture, the sequence-to-sequence (seq2seq) Transformer (T5) (Raffel et al., 2022) may offer a reasonable middle ground between the generative LM and the encoder-oriented architecture (e.g., BERT (Devlin et al., 2019)), which are known to lack robustness in generation abilities. Additionally, T5 has been demonstrated to 412 | Data Source | Accessibility | Tokens | |---------------------|----------------------|----------| | Blog | Proprietary | 146.1B | | Online Community | Proprietary | 44.5B | | News | Proprietary | 39.4B | | Crawled Comments | Proprietary | 21.9B | | Korean QA Website | Proprietary | 14.6B | | Modu Datasets | Public | 3.2B | | En. & Jp. Wikipedia | Public | 2.8B | | Others | Public / Proprietary | 27.5B | | Total | 300B | | Table 1: Data sources of the pretraining corpus. be good few-shot learners (Liu et al., 2022a) and task/domain adapters (Aribandi et al., 2022; Gupta et al., 2022). 
This paper aims to provide an industrial perspective on the language model pretraining strategies with small- to medium-scale budgets. We conduct compute-optimality analyses to find the optimal pretraining strategy given our compute budget and argue that the text-to-text Transformer architecture is a superior approach compared to decoder-only models under restricted compute resources (§5.1). Based on this finding, we share our experience with training HyperT5 (§3), the state-of-the-art seq2seq Transformer for the Korean language (Figure 1). Moreover, we showcase how HyperT5 can be further refined to improve data and modeling efficiency in specific domains (i.e. dialogs) using relatively abundant unlabeled resources (§4). ## 2 Related Work PLMs and Efficiency As we gain more understanding of core PLM architectures (Devlin et al., 2019; Brown et al., 2020; Raffel et al., 2022) and their scaling laws (Kaplan et al., 2020), research efforts for improving their training efficiencies have diverged in various directions, including improving the scaling curve (Tay et al., 2022; Chung et al., 2022), maximizing optimality (Hoffmann et al., 2022), and efficient fine-tuning (Lester et al., 2021; Hu et al., 2022). Our work offers a comprehensive overview and case study of optimizing the language modeling efficiency for a less resourceful language. Non-English Language Models Previously, significant works have explored pretraining language models on multiple languages to support lowresource languages and maximize cross-lingual knowledge transfer without explicit supervision (Devlin et al., 2019; Conneau and Lample, 2019; Xue et al., 2021; Scao et al., 2022). Recently, language models that target specific non-English languages started to become more common (Zeng et al., 2021; Kim et al., 2021a; Nagoudi et al., 2022; Fuadi et al., 2023), especially low-resource or nonLatin languages that share fewer commonalities with English. There have been several Korean textto-text Transformers proposed in the past, such as KoBART* and KE-T5 (Kim et al., 2021b), but our work, among other Korean seq2seqs, is the first to systematically achieve powerful billion-scale seq2seq models and conduct extensive analyses in terms of efficiency and performance. ## Dialog-Oriented Language Models Adapting language models for the purpose of building dialog agents has been a long-standing goal in the language modeling community (Zhang et al., 2020; Adiwardana et al., 2020; Roller et al., 2021). However, from the industrial application perspective, building dialog response generators is not the only task that can benefit from the advances in language modeling. In a more recent line of work, several approaches have been proposed to prepare language models for various dialog-related tasks (Mehri et al., 2019; Gu et al., 2021; Chen et al., 2022). In parallel to dialog adaptation, multi-task fine-tuning is another line of work that covers dialog-related tasks, as a subset of dialog-related tasks is included in the multi-task set, and expanding the task set to cover dialog-related tasks is trivial (Sanh et al., 2022; Aribandi et al., 2022). ## 3 Hypert5 This section describes the details of the pretraining corpus, pretraining strategy, and evaluation methods. ## 3.1 Pretraining Corpus Inspired by the pretraining corpus proposed by Kim et al. (2021a), we design our pretraining data to cover a wide range of domains and data distributions to ensure that the model trained on top of the corpus will achieve robustness and generalizability. 
Various sources of written and spoken texts are included in the corpus (Table 1), although online web texts take a large portion of the data (Table 5). While online texts are certainly vulnerable to the compound effect of biases, their collectively massive and unfiltered nature provides a comprehensive impression of the language distribution (Scao et al., 2022). We conducted additional studies to investigate the feasibility of re-adjusting the data composition through sampling (Appendix A.1). However, we found that the benefit was not clear-cut.

*https://github.com/SKT-AI/KoBART

| Model | n_layer | d_model | n_head | d_head | d_ff |
|--------------|----------|----------|---------|---------|-------|
| HyperT5SMALL | 16 | 512 | 6 | 64 | 1024 |
| HyperT5BASE | 24 | 768 | 12 | 64 | 2048 |
| HyperT5LARGE | 48 | 1024 | 16 | 64 | 2816 |
| HyperT51.7B | 48 | 1536 | 24 | 64 | 4096 |
| HyperT53B | 48 | 2048 | 32 | 64 | 5120 |

Table 2: Configurations of the HyperT5 model sizes.

## 3.2 Pretraining Setup

Our research employs the Transformer encoder-decoder architecture, similar to Google's T5 (Raffel et al., 2022). However, we opted for the T5.1.1 structure, a variation of T5, due to its superior performance in our experiments. It is worth noting that our HyperT51.7B model size is not a derivative of Google's T5, but rather an interpolation between the LARGE and 3B models. Detailed information on the configurations of the different model sizes can be found in Table 2.

All models were pre-trained on a total of 300B tokens, using the replace-corrupted-spans method that T5 proposed as one of its unsupervised objectives. Specifically, we set the corruption rate to 15% while maintaining a mean span length of 3. Furthermore, we employed the inverse square root learning rate schedule with 10k warmup steps at a learning rate of 0.01 with the Adafactor optimizer (Shazeer and Stern, 2018). Our pretrained models were trained with a batch size of 1024 and a maximum sequence length of 512 for both the encoder and the decoder. By using distributed data-parallel training (Li et al.), we were able to parallelize the training process across multiple GPUs, effectively reducing the overall training time and enabling us to train larger models with higher performance. Specifically, we used 64 A100 GPUs for the small to large models and 1024 A100 GPUs for the 1.7B and 3B models.

## 3.3 Evaluation Methods

Benchmark The primary objective of the HyperT5 evaluation is to address various natural language processing tasks specific to the Korean language in a reproducible way. To quantify the effectiveness of our model on these tasks, we designed a series of benchmarking experiments that cover a wide range of tasks. The detailed components of our benchmark are described in Appendix A.2. Note that while all of our benchmark datasets are publicly available for reproducibility, some datasets (YNAT, KLUE-NLI, KLUE-STS, KorQuAD) have not released their test sets publicly, hence some of the reported values are based on the development or validation set where the test set is inaccessible.

Baselines We compare not only models with the same architecture as ours but also encoder-only and decoder-only architectures. Models based on the BERT and RoBERTa architectures released by KLUE (Park et al., 2021) are encoder-only models specialized for natural language understanding. On the other hand, HyperCLOVA (Kim et al., 2021a) has a decoder-only structure like GPT.
Note that HyperCLOVA does not provide fine-tuning results, and thus, we compare the ICL and P-tuning (Liu et al., 2022b) results reported for this model. We also compare three models with the same structure as our model. KoBART has the encoder-decoder structure but follows the learning method and details of BART (Lewis et al., 2019). The mT5 (Xue et al., 2021) and KE-T5 (Kim et al., 2021b) models share the exact same structure as our HyperT5 model, with the difference being that mT5 is a multilingual model and KE-T5 is a Korean and English cross-lingual model. ## 3.4 Evaluation Results Main Benchmark Results On our Korean benchmark, HyperT5 achieves state-of-the-art performances across all tasks (Table 3), outperforming other seq2seq architectures by a large margin. Specifically, the smallest version of our model (97M) was able to perform on par with the largest KE-T5 (large) on the average benchmark (87.96 vs 88.61). Compared to large-scale decoder architectures, our largest model (3B) is still able to outperform the 39B-scale HyperCLOVA with p-tuning (93.29 vs 93.00). Although a more comprehensive benchmark is desirable, the preliminary results on NSMC suggest that our approach has an advantage in scaling efficiency for the current compute-budget range (§5.1). Parameter-Efficient Fine-Tuning To understand how our model can be further efficiently finetuned using parameter-efficient fine-tuning (PEFT) techniques, we benchmarked HyperT5 models that | Model | Params. | NSMC | YNAT | KLUE-NLI | KLUE-STS | KorQuAD | Avg. | |-----------------------------------------------------------------------------------------|-----------|---------|---------|------------|------------|---------------|--------| | Metrics | Acc. | F1 | F1 | Pearson | EM / F1 | | | | Encoder-Only Pretrained Language Models KLUE-BERTBASE 110M - | 85.73*† | 81.63*† | 90.85*† | - | - | | | | KLUE-RoBERTaSMALL | 68M | - | 84.98*† | 79.33*† | 91.54*† | - | - | | KLUE-RoBERTaBASE | 125M | - | 85.07*† | 84.84*† | 92.50*† | - | - | | KLUE-RoBERTaLARGE | 355M | 91.44 | 85.69*† | 89.17*† | 93.35*† | - | - | | Decoder-Only Pretrained Language Models HyperCLOVA (ICL) 13B 87.2∗ | - | - | - | - | - | | | | 39B | 88.0∗ | - | - | - | - | - | | | 82B | 88.2∗ | - | - | - | - | - | | | HyperCLOVA (P-Tuning) | 137M | 87.2∗ | - | - | - | - | - | | 13B | 91.7∗ | - | - | - | - | - | | | 39B | 93.0∗ | - | - | - | - | - | | | Encoder-Decoder Pretrained Language Models KoBARTBASE 124M 90.24∗ | - | - | - | - | - | | | | mT5SMALL | 300M | 88.82 | 83.57 | 70.18 | 80.95 | 70.83 / 82.02 | 81.11 | | mT5BASE | 580M | 89.59 | 86.57 | 78.27 | 89.09 | 75.74 / 86.17 | 85.94 | | mT5LARGE | 1.2B | 90.81 | 87.17 | 89.96 | 91.69 | 80.03 / 88.35 | 89.00 | | mT5XL | 3.7B | 90.34 | 86.58 | 87.20 | 90.58 | 78.58 / 87.53 | 88.45 | | KE-T5SMALL | 77M | 89.78 | 86.44 | 74.37 | 87.55 | 80.98 / 89.91 | 85.61 | | KE-T5BASE | 247M | 89.75 | 86.58 | 77.58 | 88.35 | 83.46 / 91.94 | 86.84 | | KE-T5LARGE | 783M | 91.09 | 86.94 | 86.15 | 86.15 | 84.19 / 92.72 | 88.61 | | HyperT5SMALL | 97M | 90.91 | 87.31 | 79.43 | 90.32 | 83.03 / 91.82 | 87.96 | | HyperT5BASE | 277M | 91.82 | 87.83 | 87.48 | 91.87 | 85.97 / 93.98 | 90.60 | | HyperT5LARGE | 822M | 93.02 | 88.31 | 92.39 | 93.09 | 87.98 / 94.95 | 92.35 | | HyperT51.7B | 1.7B | 93.11 | 88.43 | 93.02 | 93.43 | 88.32 / 95.22 | 92.64 | | HyperT53B | 3B | 93.29 | 88.65 | 94.07 | 93.98 | 88.74 / 95.58 | 93.11 | | ∗ Reported by the authors. † Reported on the test set, which is not publicly available. 
| | | | | | | | Table 3: Korean understanding and reasoning benchmark results for Korean language models of various architectures. Our model significantly outperforms all other models, regardless of size and architecture. | Model | Params. | NSMC | YNAT | KLUE-NLI | KLUE-STS | Avg. | |-------------------------------------------|----------------|----------------|----------------|----------------|----------------|----------------| | Metrics | Acc. | F1 | F1 | Pearson | | | | LST (Sung et al., 2022) HyperT5SMALL 1.3M | 88.96 (-2.14%) | 85.33 (-2.27%) | 72.15 (-9.17%) | 86.61 (-4.11%) | 83.26 (-4.29%) | | | HyperT5BASE | 5.1M | 89.80 (-2.20%) | 86.34 (-1.70%) | 78.87 (-9.84%) | 89.09 (-3.03%) | 86.03 (-4.15%) | | HyperT5LARGE | 17.9M | 91.21 (-1.95%) | 88.35 (+0.05%) | 86.61 (-6.26%) | 91.14 (-2.09%) | 89.33 (-2.59%) | | HyperT51.7B | 39.8M | 91.77 (-1.44%) | 88.50 (+0.08%) | 89.18 (-4.13%) | 91.65 (-1.91%) | 90.28 (-1.87%) | | HyperT53B | 69.3M | 92.02 (-1.36%) | 88.10 (-0.62%) | 90.10 (-4.22%) | 92.00 (-2.11%) | 90.56 (-2.10%) | | LoRA (Hu et al., 2022) HyperT5SMALL 0.2M | 88.96 (-2.14%) | 85.29 (-2.31%) | 73.25 (-7.78%) | 87.95 (-2.62%) | 83.86 (-3.60%) | | | HyperT5BASE | 0.5M | 90.60 (-1.33%) | 86.31 (-1.73%) | 84.43 (-3.49%) | 91.02 (-0.93%) | 88.09 (-1.85%) | | HyperT5LARGE | 1.3M | 92.22 (-0.86%) | 88.12 (-0.22%) | 91.46 (-1.01%) | 92.68 (-0.44%) | 91.12 (-0.64%) | | HyperT51.7B | 2M | 92.63 (-0.52%) | 88.58 (+0.17%) | 91.55 (-1.58%) | 92.97 (-0.49%) | 91.43 (-0.61%) | | HyperT53B | 2.7M | 93.19 (-0.11%) | 88.47 (-0.20%) | 93.43 (-0.68%) | 93.44 (-0.57%) | 92.13 (-0.39%) | Table 4: Parameter-efficient fine-tuning (PEFT) benchmarked on HyperT5. The relative performance loss in percentage is shown next to the corresponding results. Overall, a minor performance loss is observed across all tasks and PEFT techniques, despite using a small number of trainable parameters. are fine-tuned using LoRA (Hu et al., 2022) and Ladder-Side Tuning (LST) (Sung et al., 2022), respectively, and compared the performances against the full fine-tuning results in Table 3. Results show that the performance degradation of employing PEFT compared to the full fine-tuning baseline is less than 5% on average, while the ratio of parameters used for training is less than 2.3%. And as the model scales larger, the issue of performance degradation is relatively alleviated, falling to 0.39% for HyperT53B with LoRA. Model scaling and the specific PEFT technique to employ will be the key strategic factors for large-scale deployment. ## 4 Case Study: Efficient Adaptation For Dialog-Oriented Tasks Domain and task-family adaptation can further improve the utilization of PLMs in low-resource settings (Maronikolakis and Schütze, 2021). This section explores the use case for adapting HyperT5 to dialog-related tasks. ## 4.1 Training Setup For dialog adaptation, we propose to heavily train HyperT5 on a 1B-token unlabeled dialog-oriented data, with the *multiple utterance masking* (MUM) objective in the curriculum learning setting. We take the replace corrupted spans method to a more challenging strategy, MUM, to help the model hold a better understanding of dialog structures. During training, multiple utterances are randomly masked per dialog session with a pre-defined corruption rate. We further adopt curriculum learning to gradually raise the training difficulty by increasing the MUM corruption rate. HyperT5 models, from small to large, are trained for 5 epochs with a global batch size of 64 using 2 A100 GPUs. 
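As a rough illustration of the multiple utterance masking objective just described, the sketch below masks a random subset of utterances in one dialog session at a given corruption rate and produces a T5-style sentinel target; the sentinel format, helper name, and example dialog are assumptions, not the exact preprocessing used for DialogHT5.

```python
import random

def multiple_utterance_masking(utterances, corruption_rate, seed=None):
    """Mask roughly `corruption_rate` of the utterances in one dialog session,
    returning (input_text, target_text) in a T5-style sentinel format."""
    rng = random.Random(seed)
    n_mask = max(1, round(len(utterances) * corruption_rate))
    masked_ids = set(rng.sample(range(len(utterances)), n_mask))

    input_parts, target_parts, sentinel = [], [], 0
    for i, utt in enumerate(utterances):
        if i in masked_ids:
            # Replace the whole utterance with a sentinel in the input ...
            input_parts.append(f"<extra_id_{sentinel}>")
            # ... and ask the decoder to reconstruct it in the target.
            target_parts.append(f"<extra_id_{sentinel}> {utt}")
            sentinel += 1
        else:
            input_parts.append(utt)
    return " ".join(input_parts), " ".join(target_parts)

# Example: one dialog session masked at a corruption rate from the curriculum.
dialog = ["Hi, I'd like to book a table.", "Sure, for how many people?",
          "Four people, at 7 pm.", "Done. Anything else?"]
print(multiple_utterance_masking(dialog, corruption_rate=0.25, seed=0))
```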
Like pretraining, the dropout rate is set to 0. MUM corruption rate sweeps sigmoidally from 5% to 40%. Training Data We collect a dialog-oriented training corpus from both open-sourced and proprietary Korean dialog datasets (Appendix B.1). The corpus consists of 3.3M dialog sessions†in various domains (e.g. social chats, customer service, broadcast transcripts, etc.). The resulting corpus provides a wide range of topics and aspects of different dialog-oriented tasks, making it suitable for dialog adaptation. ![4_image_0.png](4_image_0.png) ## 4.2 Evaluation Methods We conducted an extended series of benchmarking experiments for our dialog-adapted models (DialogHT5) in both scarce and full data settings. The benchmark results consist of three generative tasks, i.e., dialog in-filling (DI), dialog response generation (DR), and dialog summarization (DS), and one discriminative task, i.e., dialog classification (DC). We only use the open-sourced Korean dialog-oriented datasets from AI Hub‡. AI Hub task-oriented dialog (ToD) and open-domain dialog (ODD) datasets are used for both DI and DR, while AI Hub broadcasting media transcript summarization dataset (Script-Summ) for DS. Since the AI Hub ToD dataset is annotated with dialog topics, we used the dataset for DC (Appendix B.2). ## 4.3 Dialog Benchmark Results Data Scarce Experiments To investigate the data efficiency of DialogHT5, we first compare their performance against HyperT5 over scarce data settings. We benchmarked DialogHT5 and HyperT5 as sweeping the number of training samples from 0.1k to 5k. Results describe that DialogHT5 mostly achieves a higher score against HyperT5 (more details in Appendix B.3). Especially, DialogHT5 outperforms HyperT5 with the gain of up to 8 R1 score in the dialog in-filling task, as shown in Figure 2. DialogHT5SMALL can save the in-domain data resource approximately ten times to score tie with HyperT5SMALL of 5k training samples (Figure 2a), which highlights the effectiveness of dialog adaptation. Full Data Experiments Full data benchmark results show that DialogHT5 obtains higher performance compared to HyperT5 in all the experiments (Appendix B.4). Generative tasks show a gain of ‡https://aihub.or.kr/ ![5_image_0.png](5_image_0.png) Loss up to 0.6 R1 score whereas the discriminative task shows a gain of 0.7 F1 score. ## 5 Discussions 5.1 Compute-Optimality Analysis To investigate whether the model configurations we experimented with are optimal given our pretraining compute budget and the pretraining tokens, we conducted compute-optimality analyses, similar to the work done by Hoffmann et al. (2022). The loss curves of our pretraining experiments were smoothed and interpolated as shown in Figure 3a. Using the curves, we map out the optimal model sizes for each given compute level. However, due to the very small number of model-size samples, we need to normalize the optimal model-size data points by selecting the mid-point of each optimality segment§, as shown in Figure 3b. After fitting the regression line (r 2 = 0.988), we discover that the size of our largest model lies very close to the predicted optimal model size for our compute budget. 
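The model-size regression just described can be sketched as follows: given (compute, optimal model size) points read off the smoothed loss curves, fit a line in log-log space and extrapolate the compute-optimal size for a target budget. The data points below are placeholders, not the values behind Figure 3.

```python
import numpy as np

# Placeholder (compute budget in FLOPs, optimal parameter count) points, e.g.
# the mid-points of the optimality segments described in Section 5.1.
compute = np.array([1e20, 3e20, 1e21, 3e21, 1e22])
optimal_params = np.array([1e8, 2e8, 5e8, 1.2e9, 3e9])

# Fit log10(N_opt) = a * log10(C) + b and report the goodness of fit.
a, b = np.polyfit(np.log10(compute), np.log10(optimal_params), deg=1)
pred = a * np.log10(compute) + b
resid = np.log10(optimal_params) - pred
r2 = 1 - np.sum(resid ** 2) / np.sum(
    (np.log10(optimal_params) - np.log10(optimal_params).mean()) ** 2)

def predict_optimal_size(c_budget: float) -> float:
    """Extrapolate the compute-optimal model size for a given compute budget."""
    return 10 ** (a * np.log10(c_budget) + b)

print(f"slope={a:.3f}, r^2={r2:.3f}, "
      f"predicted N_opt at 3e22 FLOPs = {predict_optimal_size(3e22):.3e}")
```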
Moreover, the predicted optimal model size (dashed line in the same figure) for the decoderonly architecture is significantly smaller (at 448M), but our benchmark results on NSMC (Table 3) show that small-scale decoder LMs (i.e., HyperCLOVA) falls behind in terms of performance, supporting the notion that seq2seq architectures are more economically viable for small and mediumscale compute budgets¶. ## 5.2 Practical Advantages Of Seq2Seq Apart from the quantitative benefits in performance and efficiency demonstrated throughout the paper, Seq2Seq offers additional practical and real-world benefits. First, the encoder-decoder framework produces a parameter-efficient **text encoder as a byproduct**, which can be utilized for extracting features and encoding purposes (Ni et al., 2022; Liu et al., 2021). Specifically, the encoder module extracted from seq2seq is capable of producing highquality text embeddings superior to ones produced from encoders of similar sizes (Ni et al., 2022). Second, the text-to-text architecture reduces the software complexity and management costs for large-scale deployment, as a result of (1) the unified nature of the input and output format, (2) the separation of the input and output sequences inherently supported by the encoder-decoder architecture, and (3) better parameter-efficiency. This translates to fewer engineering resources to support the same level of deployment scalability. The unified text nature of the data format allows existing deployment infrastructures to be easily expanded to handle new tasks. Also, the inherent distinct twopart architecture enables simpler and more streamlined serving infrastructure. Furthermore, due to the steeper model-scaling curve exhibited by decoder architectures, text-to-text transformers incur fewer operating costs to maintain the same quality of services. ## 6 Conclusion In this paper, we introduced HyperT5 and DialogHT5 as state-of-the-art on Korean language modeling. We also demonstrated the feasibility of performing resource-aware strategization for language models. Through the compute-optimality analyses, we found that the seq2seq architecture may be more cost-efficient than decoders below a certain compute-budget threshold. For future work, we wish to generalize the domain adaptation approach and study the efficacy of multi-task learning (Aribandi et al., 2022) from the industrial perspective. Furthermore, we look forward to conducting comprehensive investigations into cross-architectural optimality. ¶Conversely, this means that decoder-only architectures scale better with larger compute budgets (Figure 3b). ## Ethics Statement The authors are aware that the language models proposed in this paper, either pretrained from our pretraining corpus or heavily fine-tuned using the dialog corpus (Appendix B.1), are all subject to social and unethical biases depending on the way the corpora were prepared. Internally, the authors and the relevant members of the affiliated organization are actively working to make sure that the deployed language models do not generate ethically questionable content that may cause harm or stress to the end user. The specific set of actions that we take include but are not limited to, - Employing automated models to detect unethical content and perform automatic adversarial attacks on the language model before deploying them into services and products. 
- Under safe and strict ethical guidelines, conducting human studies to identify prompts that could potentially cause the language model to generate unethical content. (red-teaming) - Establishing strategies to mitigate or amend ethical issues exhibited by the language models raised from automated and human surveys. ## References Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*. Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2022. Ext5: Towards extreme multi-task scaling for transfer learning. In *International Conference on Learning Representations*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Zhi Chen, Jijia Bao, Lu Chen, Yuncong Liu, Da Ma, Bei Chen, Mengyue Wu, Su Zhu, Jian-Guang Lou, and Kai Yu. 2022. Dialogzoo: Largescale dialog-oriented task learning. *arXiv preprint* arXiv:2205.12662. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Mukhlish Fuadi, Adhi Dharma Wibawa, and Surya Sumpeno. 2023. idt5: Indonesian version of multilingual t5 transformer. arXiv preprint arXiv:2302.00856. Xiaodong Gu, Kang Min Yoo, and Jung-Woo Ha. 2021. Dialogbert: Discourse-aware response generation via learning to recover and rank utterances. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12911–12919. Raghav Gupta, Harrison Lee, Jeffrey Zhao, Yuan Cao, Abhinav Rastogi, and Yonghui Wu. 2022. Show, don't tell: Demonstrations outperform descriptions for schema-guided task-oriented dialogue. 
In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational* Linguistics: Human Language Technologies, pages 4541–4549, Seattle, United States. Association for Computational Linguistics. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. Training compute-optimal large language models. *arXiv* preprint arXiv:2203.15556. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan AllenZhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In *International Conference on* Learning Representations. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411–4421. PMLR. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Jeon Dong Hyeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, Woomyoung Park, and Nako Sung. 2021a. What changes can large-scale language models bring? intensive study on HyperCLOVA: Billions-scale Korean generative pretrained transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3405–3424, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. San Kim, Jin Yea Jang, Minyoung Jung, and Saim Shin. 2021b. A model of cross-lingual knowledgegrounded response generation for open-domain dialogue systems. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 352–365, Punta Cana, Dominican Republic. Association for Computational Linguistics. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, et al. Pytorch distributed: Experiences on accelerating data parallel training. *Proceedings of the VLDB Endowment*, 13(12). Seungyoung Lim, Myungji Kim, and Jooyoul Lee. 2019. Korquad1. 0: Korean qa dataset for machine reading comprehension. *arXiv preprint arXiv:1909.07005*. Frederick Liu, Siamak Shakeri, Hongkun Yu, and Jing Li. 2021. 
Enct5: Fine-tuning t5 encoder for non-autoregressive tasks. *arXiv preprint* arXiv:2110.08426. Haokun Liu, Derek Tam, Muqeeth Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022a. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. In *Advances in Neural Information Processing Systems*. Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2022b. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61–68, Dublin, Ireland. Association for Computational Linguistics. Antonis Maronikolakis and Hinrich Schütze. 2021. Multidomain pretrained language models for green NLP. In *Proceedings of the Second Workshop on Domain* Adaptation for NLP, pages 1–8, Kyiv, Ukraine. Association for Computational Linguistics. Shikib Mehri, Evgeniia Razumovskaia, Tiancheng Zhao, and Maxine Eskenazi. 2019. Pretraining methods for dialog context representation learning. In *Proceedings of the 57th Annual Meeting of the Association for* Computational Linguistics, pages 3836–3845, Florence, Italy. Association for Computational Linguistics. El Moatez Billah Nagoudi, AbdelRahim Elmadany, and Muhammad Abdul-Mageed. 2022. AraT5: Text-totext transformers for Arabic language generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 628–647, Dublin, Ireland. Association for Computational Linguistics. Jianmo Ni, Gustavo Hernandez Abrego, Noah Constant, Ji Ma, Keith Hall, Daniel Cer, and Yinfei Yang. 2022. Sentence-t5: Scalable sentence encoders from pretrained text-to-text models. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1864–1874, Dublin, Ireland. Association for Computational Linguistics. Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Jiyoon Han, Jangwon Park, Chisung Song, Junseong Kim, Yongsook Song, Taehwan Oh, et al. 2021. Klue: Korean language understanding evaluation. arXiv preprint arXiv:2105.09680. David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training. *arXiv* preprint arXiv:2104.10350. Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(1). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Eric Michael Smith, Y-Lan Boureau, and Jason Weston. 2021. Recipes for building an open-domain chatbot. 
In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics. Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman ´ Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of *Proceedings* of Machine Learning Research, pages 4596–4604. PMLR. Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. 2022. LST: Ladder side-tuning for parameter and memory efficient transfer learning. In *Advances in Neural* Information Processing Systems. Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q Tran, David R So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, et al. 2022. Transcending scaling laws with 0.1% extra compute. *arXiv preprint* arXiv:2210.11399. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In *International Conference on Learning Representations*. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Wei Zeng, Xiaozhe Ren, Teng Su, Hui Wang, Yi Liao, Zhiwei Wang, Xin Jiang, ZhenZhang Yang, Kaisheng Wang, Xiaoda Zhang, et al. 2021. Pangu-α: Largescale autoregressive pretrained chinese language models with auto-parallel computation. *arXiv* preprint arXiv:2104.12369. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT : Large-scale generative pre-training for conversational response generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:* System Demonstrations, pages 270–278, Online. 
Association for Computational Linguistics. ## A Supplementary Materials Regarding Hypert5 Pretraining | Data Type | Tokens | Ratio | |-------------|----------|---------| | Web-Crawled | 183.0B | 61.0% | | Books | 61.2B | 20.4% | | News | 55.8B | 16.8% | | Others | 0.3B | 0.1% | | Korean | 238.5B | 79.5% | | English | 61.5B | 20.5% | ## A.1 Experiments On Corpus Composition Re-Sampling Table 5: Data composition of the experimental pretraining corpus by data type and language. This experimental pretraining corpus of HyperT5 was designed to contain a relatively higher proportion of knowledge-intensive data sources such as books and news. Moreover, a higher English proportion was employed to leverage high-resource language and promote the emergence of cross-lingual knowledge transfer. | Model (corpus) | NSMC | YNAT | KLUE-NLI | KLUE-STS | Abs. Summ. | Avg. | |---------------------------------|--------|--------|------------|---------------|-----------------------|--------| | Metrics | Acc. | F1 | F1 | F1 / Pearson | R1 / R2 / RL | | | HyperT5BASE (original corpus) | 91.52 | 87.72 | 85.25 | 84.95 / 93.27 | 52.70 / 24.81 / 49.53 | 71.22 | | HyperT5BASE (re-sampled corpus) | 91.51 | 87.15 | 84.10 | 85.50 / 93.49 | 53.64 / 25.24 / 50.27 | 71.36 | Table 6: Pretraining results on the experimental corpus re-sampled with an emphasis on knowledge-heavy data. Note that the pretraining setup is slightly different from the one described in the main section of the paper, hence the results of the base model may differ. During pretraining, language models consume billions of weakly preprocessed tokens, which may impact the model performance to varying degrees. For example, web-crawled data which take up a large portion of the pretraining corpus is relatively noisy, thus prioritizing certain data sources that are thought to be dense with information may help to improve training efficiency, i.e., the number of tokens needed to converge towards a reasonable level of performance. To investigate our hypothesis, we create an experimental pretraining corpus (Table 5) and made sure that the proportions of data sources that provide "hard" knowledge (e.g., books and news) are significantly higher by under-sampling other data types (Rae et al., 2021). Additionally, we augment the experimental pretraining corpus with English data sources, leveraging knowledge embedded in the world's most resource-accessible language. We theorized that the availability of weak-parallel corpora such as multi-lingual Wikipedia articles acts as mediums for cross-lingual knowledge transfer (Hu et al., 2020). As shown in the results (Table 6), the base version of our model improved in reading comprehension and abstract summarization tasks but slightly suffered in classification and NLI tasks, suggesting that composition of knowledge-oriented data sources in the pretraining corpus may help the language model in tasks related to language generation (answer generation and summary generation) at the risk of slight underfitting in discriminatory power. ## A.2 Benchmark Datasets This section provides more details on the benchmark datasets. - **NSMC**|| is a movie review dataset constructed from NAVER Movie, a Korean movie review website, and consists of 150k training data and 50k test data samples, labeled with positive and negative classes. - **YNAT** is a news-topic classification dataset as a part of the Korean Language Understanding Evaluation (KLUE) (Park et al., 2021) benchmark set. 
The dataset consists of titles for news articles ||https://github.com/e9t/nsmc and the corresponding news topic labels. The dataset has 45.6k training data samples, 91k validation data samples, and 91k test data samples. - **KLUE-NLI** is a natural language inference dataset from the KLUE benchmark set. Similar to MNLI (Williams et al., 2018), each sample in the dataset contains a pair of premise-hypothesis sentences, and the goal is to label the pair with one of "entailment", "neutral", or "contradiction". It comprises 25k training data, 3k validation data, and 3k test data samples. - **KLUE-STS** is designed to evaluate a model's ability to capture the semantic similarity between two sentences. Like YNAT and KLUE-NLI, KLUE-STS is also a part of the KLUE benchmark. The dataset consists of 11.6k training data, 519 validation data, and 1k test data samples. - **KorQuAD** is a Korean question Answering dataset for machine reading comprehension (Lim et al., 2019), similar to SQuAD (Rajpurkar et al., 2016). The dataset consists of 60k question/answer pairs for training, 5.8k for validation, and 3.9k for testing. ## B Supplementary Materials Related To Dialog-Oriented Adaptation This appendix section contains supplementary materials related to the dialog-oriented adaption of HyperT5. ## B.1 Dialog-Oriented Adaptation Corpus Here is the detailed list of data sources for constructing the dialog heavy-finetuning corpus presented (Table 7). The data consist of both open-sourced (Modu, AI Hub) and proprietary dialog corpus. Modu datasets are a collection of various dialog-oriented datasets collected by National Institute of Korean Language (NIKL)**. | Source | Dataset | Dialog Type | Domain | # Dialogs | # Turns | |-------------------------------------|----------------------|-----------------------|--------------------|-------------|-----------| | Modu | TV Series, News | Spoken | Broadcast Contents | 0.1M | 2.2M | | Open-ended dialogs | Spoken | General | 51.1k | 1M | | | SNS dialogs | Written | General | 24.2k | 0.5M | | | Online communications | Written | Online Communications | 98k | 1.7M | | | Korean parliamentary records | Spoken | Politics | 0.3M | 5.5M | | | AI Hub | Customer service QAs | Spoken | Customer Service | 6.7k | 0.1M | | Empathetic dialogs | Spoken | Empathetic dialog | 45.5k | 0.3M | | | Dialog summarization | All | General | 0.3M | 3.5M | | | Open-ended SNS dialogs | Written | Online Communications | 1.8M | 28.6M | | | Shopping, Public sector, Finance QA | Spoken | Customer Service | 0.1M | 1.9M | | | Proprietary | TV Series, News | Spoken | Broadcast Contents | 0.2M | 3.3M | | Shopping QAs | Written | Customer Service | 0.2M | 1M | | | Elderly care dialogs | Written | Empathetic dialog | 40k | 0.4M | | | Character chatbot dialogs | Written | Empathetic dialog | 32.1k | 0.3M | | | Total | 3.3M | 50.2M | | | | Table 7: Full list of data sources and corresponding statistics for dialog-oriented heavy-fine-tuning. ## B.2 Ai Hub Benchmark Datasets This section provides more details on the dialog-oriented benchmark datasets. Note that the benchmark datasets are excluded from the dialog adaptation corpus. - **AI Hub ToD** is a task-oriented dialog (ToD) dataset from AI Hub††, which covers 20 different topics (restaurant booking, online shopping QA, etc.). We preprocess the corpus to build 38.5k of training data and 3.9k of test data. We used the ToD dataset for dialog in-filling and dialog response **https://corpus.korean.go.kr/ ††https://bit.ly/3S9Wxi6 generation. 
Each dialog session produces 3 or 4 training samples by random utterance selection. For the dialog response generation task, the turns before the selected utterance are only used for the dialog context, whereas both turns before and after the selected utterance are given as the context for dialog infilling. We also benchmarked the dialog topic classification using the ToD dataset with topic labels. - **AI Hub ODD** is an open-domain dialog (ODD) dataset from AI Hub‡‡, over 20 different topics (social issues, food, marriage, etc.). We built a dataset with 87.7k training data and 11.0k of test data for the aforementioned tasks in AI Hub ToD. Similarly to AI Hub ToD, each dialog session results in multiple training samples. - **AI Hub Script-Summ** is a broadcasting media transcript summarization dataset from AI Hub. We built a dataset with 84.4k training data and 10k test data for dialog summarization. Finally, we use the ROUGE score for generation task evaluation and Macro F1-Score for classification. ## B.3 Scarce Data Benchmark Results Table 8 illustrates the scarce data benchmark results for our dialog-adapted models against HyperT5 models as baselines. We averaged the experimental results over five different random seeds. All the experiments are under the early stopping option with a patience level of 5. For each experiment, the best checkpoint is determined according to the evaluation metric. We set the learning rate to 5e-4 with linear learning decay. Note that DialogHT5 shows a huge performance leap in DI tasks. This can be explained by the fact that dialog in-filling is essentially a single utterance masking (SUM), hence the MUM objective we used for dialog adaptation is a more challenging version of dialog in-filling. In the extreme data-scarce settings (i.e., the training sample number of 0.1k), both HyperT5 and DialogHT5, regardless of the model size, tend to fail training without hyperparameter tuning on the learning rate. In general, using 5e-4 instead of 5e-5 enables tuning to begin working. ## B.4 Full Data Benchmark Results We further conduct the full data benchmarks. Results show that DialogHT5 models achieve higher scores compared to HyperT5 models in most cases (Table 9). | Model | Params. 
| # Samples | DI | DR | DS | DC | | | |--------------------------------|-----------|-------------|-------|-------|-------------|-------|-------|-------| | Dataset | ToD | ODD | ToD | ODD | Script-Summ | ToD | | | | Metrics | R1 | R1 | R1 | R1 | R1 | F1 | | | | HyperT5SMALL | 97M | 100 | 13.69 | 8.58 | 13.38 | 8.82 | 27.03 | 30.13 | | 500 | 19.68 | 13.83 | 16.15 | 10.52 | 35.35 | 51.86 | | | | 1000 | 22.31 | 13.93 | 17.80 | 9.71 | 33.45 | 56.49 | | | | 5000 | 27.71 | 15.06 | 21.82 | 12.32 | 35.54 | 66.85 | | | | HyperT5BASE | 277M | 100 | 22.26 | 16.24 | 12.72 | 10.72 | 28.86 | - | | 500 | 27.42 | 17.10 | 16.35 | 9.99 | 36.92 | 43.67 | | | | 1000 | 29.05 | 17.43 | 20.84 | 10.68 | 38.07 | 53.51 | | | | 5000 | 31.78 | 18.71 | 24.33 | 12.80 | 41.75 | 63.47 | | | | HyperT5LARGE | 822M | 100 | 26.40 | 19.17 | 10.36 | 10.00 | 25.36 | 52.54 | | 500 | 30.02 | - | 18.14 | 10.67 | 36.04 | - | | | | 1000 | 31.28 | - | 17.79 | 13.05 | 37.79 | - | | | | 5000 | 33.40 | - | 23.26 | 14.38 | 41.16 | 49.64 | | | | Data Adaptation DialogHT5SMALL | 97M | 100 | 19.66 | 16.10 | 14.71 | 11.42 | 18.86 | 32.15 | | 500 | 26.56 | 18.05 | 18.03 | 12.55 | 31.98 | 54.66 | | | | 1000 | 28.22 | 18.25 | 19.92 | 12.82 | 33.10 | 53.65 | | | | 5000 | 30.15 | 18.46 | 22.32 | 12.75 | 34.38 | 65.50 | | | | DialogHT5BASE | 277M | 100 | 28.45 | 20.69 | 15.02 | 11.15 | - | - | | 500 | 31.82 | 21.35 | 21.33 | 13.05 | - | 43.32 | | | | 1000 | 32.38 | 21.51 | 23.14 | 14.23 | - | 56.25 | | | | 5000 | 33.48 | 21.66 | 25.17 | 15.16 | 41.73 | 65.36 | | | | DialogHT5LARGE | 822M | 100 | 29.46 | 21.13 | 13.55 | 4.62 | - | - | | 500 | 32.56 | 18.67 | 22.55 | 14.07 | - | - | | | | 1000 | 33.31 | - | 23.17 | 11.13 | 37.84 | - | | | | 5000 | - | - | 25.49 | 15.20 | 41.28 | 64.31 | | | | Model | Params. | DI | DR | DS | DC | | | |--------------------------------|-----------|------|------|------|-------------|------|------| | Dataset | ToD | ODD | ToD | ODD | Script-Summ | ToD | | | Metrics | R1 | R1 | R1 | R1 | R1 | F1 | | | HyperT5SMALL | 97M | 37.0 | 21.3 | 26.9 | 15.8 | 42.9 | 70.2 | | HyperT5BASE | 277M | 38.8 | 23.1 | 29.7 | 16.2 | 44.6 | 70.4 | | HyperT5LARGE | 822M | 41.5 | 24.5 | 30.2 | 16.5 | - | 71.0 | | Data Adaptation DialogHT5SMALL | 97M | 37.2 | 21.7 | 27.3 | 15.8 | 42.9 | 70.0 | | DialogHT5BASE | 277M | 40.1 | 23.6 | 29.8 | 16.4 | 44.7 | 71.7 | | DialogHT5LARGE | 822M | 41.8 | 25.1 | 30.5 | 16.8 | - | 68.6 | Table 9: Full data benchmark results for dialog adaptation.
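For reference, the dialog benchmarks above score generation with ROUGE-1 and classification with Macro F1 (§B.2); below is a minimal, self-contained sketch of ROUGE-1 F1 based on whitespace unigram overlap, without the stemming or Korean-specific tokenization that a production scorer may apply.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1 as the unigram-overlap F-score (whitespace tokenization only)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the meeting is at seven", "the meeting starts at seven"))  # 0.8
```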
kim-etal-2023-semantic
Semantic Ambiguity Detection in Sentence Classification using Task-Specific Embeddings
https://aclanthology.org/2023.acl-industry.41
Ambiguity is a major obstacle to providing services based on sentence classification. However, because of the structural limitations of the service, there may not be sufficient contextual information to resolve the ambiguity. In this situation, we focus on ambiguity detection so that service design considering ambiguity is possible. We utilize similarity in a semantic space to detect ambiguity in service scenarios and training data. In addition, we apply task-specific embedding to improve performance. Our results demonstrate that ambiguities and resulting labeling errors in training data or scenarios can be detected. Additionally, we confirm that it can be used to debug services.
# Semantic Ambiguity Detection In Sentence Classification Using Task-Specific Embeddings Jong Myoung Kim1,2 Young-Jun Lee2 Sangkeun Jung3 **Ho-Jin Choi**2 1SK-telecom 2School of Computing, KAIST 3The Division of Computer Convergence, Chugnam National University jmkim71@sk.com {yj2961,hojinc}@kaist.ac.kr hugman@cnu.ac.kr ## Abstract ![0_Image_0.Png](0_Image_0.Png) Ambiguity is a major obstacle to providing services based on sentence classification. However, because of the structural limitations of the service, there may not be sufficient contextual information to resolve the ambiguity. In this situation, we focus on *ambiguity detection* so that service design considering ambiguity is possible. We utilize similarity in a semantic space to detect ambiguity in service *scenarios*1 and *training data*. In addition, we apply taskspecific embedding to improve performance. Our results demonstrate that ambiguities and resulting labeling errors in training data or scenarios can be detected. Additionally, we confirm that it can be used to debug services. ## 1 Introduction The ability to accurately access the meanings of sentences is a key component of voice recognitionbased agent services. This task is made difficult by the inherent ambiguity of some sentences, which can refer to different meanings in different contexts. Adot( ) 2is a voice recognition-based service agent, akin to Amazon's Alexa, for Korean users. This paper describes how we dealt with ambiguity, from a perspective of **Adot**'s developer. For example, as illustrated in Figure 1, the Korean word "앞" has two different meanings depending on the context. As a result, when the media content is being played, the command "앞으로 가봐" may have two completely opposite meanings.("move forwards" vs. "move to previous"). In a test conducted within our company, approximately 61.7 percent of respondents interpreted it as "move forwards" and 38.3 percent as "move to previous". Although there is no universally valid interpretation in this scenario, each respondent was certain that their own interpretation was the correct one. (Likert-scale score = 5.2/7). This semantic ambiguity leads to a labeling error of training data and scenarios in specification3. From the perspective of service providers who must provide specific services in response to user input, we contend that this is a challenging problem. In addition, ambiguity is very important in terms of cost. Manual inspection of the training data or specification is required for ambiguity handling, which is expensive in terms of cost and resources. Recently, many researchers have attempted to handle ambiguity with various approaches such as semantic space (Rodd, 2020), entity linking (Yin et al., 2019), and attribute attention (Liu et al., 2019). In addition, the primary purpose of these studies is to resolve ambiguity using context. In spite of this, we do not have sufficient contextual information to apply these disambiguation methods because of the characteristics of a very short input sentence (which has 6.97 Korean letters and 2.05 words on average) and a one-turn based service. Although contextual information required for disambiguation is not available within the scope 3We call the set of scenarios as the specification. 425 of the sentence classifier, we can still exploit nonlinguistic traits, such as user habits and device information, which are accessible from the outside classifier to determine service. 
Therefore, we chose to approach this problem through detection rather than disambiguation. Detecting the presence of ambiguity in scenarios would allow for the consideration of external contextual information when providing the service. This paper proposes a process of detecting ambiguity in scenarios or training data through similarity. To improve ambiguity detection performance, we apply task-specific embedding. We conduct an experiment detecting ambiguity and labeling error from ambiguity in **Adot**'s scenarios and training data. Additionally, we investigate Pearson correlation coefficients between similarity and the degree to which users expect two sentences to belong to the same class (this is referred to as user-aware class relevance). This study's findings affirm that similarity provides a means to identify scenarios and training data with potential ambiguity. In this process, we confirm that user-aware class relevance correlates with the similarity. Furthermore, it was confirmed that the training data that resulted in the misclassification could be specified. ## 2 Related Works Ambiguity is a long-standing problem in natural language processing (NLP) tasks such as word sense disambiguation (Navigli, 2009), entity disambiguation (Barba et al., 2022), and database search result disambiguation (Qian et al., 2021) in the taskoriented dialogue systems. Ezzini et al. (2021) utilized domain-specific data to address the structural ambiguity of sentences. This ambiguity is largely divided into four categories (Berry et al., 2003). - Lexical ambiguity occurs when a word has several meanings - Syntactic ambiguity occurs when a given sequence of words can be given more than one grammatical structure - Semantic ambiguity occurs when a sentence has more than one way of reading it - Pragmatic ambiguity occurs when a sentence has several meanings in the context in which it is uttered. These studies are focused on resolving the ambiguity. However, we did not have contextual information to apply to these disambiguation methods. In NLP, the sentence similarity task, which evaluates the similarity between pairs of sentences using several techniques, has been investigated extensively. Traditionally, edit distance has been used to quantify superficial similarities (Levenshtein et al., 1966). Despite being straightforward, it is noteworthy because of its similarity to manual identification of training data that induce misclassification. Recent studies have measured the similarity between pairs of sentences by projecting them into a meaningful semantic space using various neural networks, such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2018), or using Siamese architectures, such as SentenceBERT (Reimers and Gurevych, 2019). In this paper, inspired by (Ezzini et al., 2021), We utilize a task-specific embedding, extracted from a trained classification model, to identify semantic ambiguities. ## 3 Task Design 3.1 Ambiguity Detection In Scenarios When ambiguity exists in scenarios, the accurate classification result is not aligned with the user's expectations. If the user expects a class (class-B) other than the class (class-A) defined in the specification to be correct for an input a, it implies that classification of the input under class-B is sufficiently valid or that the user has experienced receiving a service corresponding to class-B for at least one input a′similar to a. 
Let's suppose that the command "앞으로 가봐" we looked at in section1 is defined as "move forward" in the specification. A significant proportion, 38.3%, interpret this command as "move to previous", and the similar phrase "앞으로 다시 가봐" is commonly used in Korea with the meaning "move to previous". (The interjected word '다시' usually means 'again'.) From the perspective of the user, we attempted to identify the ambiguity of the scenarios. First, pairs of sentences are collected, and the correlation between similarity and user-aware class relevance is evaluated. If this correlation is sufficiently high, user-aware class relevance could be estimated based on similarity. In a given scenario, if specific training data y that has the highest similarity with a particular scenario sentence x is placed in a different class than the class-X defined for scenario sentence x, it implies that the user may not anticipate class-X for sentence x and ambiguity may exists in the scenario. Subsequently, ambiguity or the mislabeled scenario is manually inspected in the specified scenarios to validate this process. ## 3.2 Ambiguity Detection In Training Data The method of 3.1 could not be applied as it is to detect the ambiguity of the training data. In the process of generating training data, several similar data are generated all at once. Ambiguous training data is rarely observed in isolation; furthermore, the data demonstrate a marked degree of similarity to one another. Finding similar data with different classes for single data did not work well. We noted that the ambiguity in training data manifests as mislabeled data that do not conform to the specification. Models trained using this mislabeled data yield outputs that deviate significantly from the designer's expectation. In this paper, we begin with misclassified inputs and attempt to find the training data that caused the corresponding misclassification and mistraining. Hence, input sentences misclassified by **Adot** are collected. Among all training data with misclassified labels, data exhibiting high similarity with misclassified inputs are assumed to be training data that induce misclassification. These include corresponding data with ambiguity or those mislabeled from ambiguity. We inspect the specified data manually to confirm any ambiguity or labeling errors. Additionally, the labels of the specified data are modified to the labels of the expected classes, and the model is retrained. It is verified whether the classification results of the model are corrected as expected. Via this, we confirm whether the specified data are the cause of the misclassification. ## 4 Methods - Task-Specific Embeddings We note that each node of the trained classification model stores weights adjusted for the task, and the values output by each node in the inference process contain fragmentary information useful for inference (Wang et al., 2020). We create task-specific embedding vectors using the output values of each node in the classification model. Via this process, we expect to be able to create an embedding with better expressive power for ambiguity detection, albeit biased, compared to off-the-shelf embeddings, such as pre-trained LM or Word2Vec. Assuming a classification model fθ with n layers on top of the embedding layer, we create a vector vi representing the i-th layer as follows: (v1 = embedding vector, vn = model output vector.) 
Whole Layer Vector (WLV) WLV consists of the outputs of all nodes from the model's embedding layer up to the layer below the final output layer. WLV contains the largest amount of information and reflects all information flows in the model.

$$\mathrm{WLV} = \mathrm{concat}(v_1, v_2, \ldots, v_{n-2}, v_{n-1})$$

After Representation Layer Vector (ARLV) WLV contains considerable information, but the embedding layer accounts for most of it, which is problematic (for instance, 25,640 nodes out of a total of 40,424 nodes are included in the embedding layer of **Adot**'s classification model). ARLV is defined using the nodes from the layer following the embedding layer up to the layer immediately preceding the output layer, to avoid undue influence of the embedding layer.

$$\mathrm{ARLV} = \mathrm{concat}(v_2, v_3, \ldots, v_{n-2}, v_{n-1})$$

Conclusion Layer Vector (CLV) Although WLV and ARLV contain a significant amount of information, they ignore discrete functions and consequently struggle with the differences such functions introduce. For example, when a layer uses an activation function such as ReLU, the difference in the values fed into the function might be negligible, yet the resulting values after the function might be completely different. CLV is defined using only the layer immediately preceding the output layer of the entire model to account for these characteristics. We anticipate that the model's overall judgment process is structured in this layer.

$$\mathrm{CLV} = v_{n-1}$$

## 5 Experiments

## 5.1 Datasets

## 5.1.1 Scenario Ambiguity Test Set

The following experiment utilizes the task described in Section 3.1 to detect scenario ambiguities from the user's perspective.

Correlation Test Dataset 913 pairs of sentences are collected from the domain classification validation data of the **Adot** service (domain information is used as the label in this experiment). Domains, in the context of **Adot**, are predefined service areas and comprise 30 categories such as general, music, video, weather, and schedule. Sentences are randomly selected from the data of eight domains that frequently result in classification failures. The classes of each pair may be identical or different. We measure the Pearson correlation between similarity and the user-aware class relevance of each sentence pair.

Ambiguity Test Dataset To evaluate ambiguity detection, 29,136 sentences with designer-defined labels are collected. These sentences are scenarios defined for the **Adot** service. Among these scenarios in the specification, those with possible ambiguity are detected using the process described in Section 3.1. Lastly, any actual ambiguity or labeling error in the identified scenarios is manually inspected.

## 5.1.2 Training Data Ambiguity Test Set

The following experiment, based on the task designed in Section 3.2, is performed to detect ambiguity in the training data.

Ambiguity Test Set This test dataset consists of 2,300 sentences incorrectly classified by **Adot**. Starting from the misclassified sentences, training data exhibiting high similarity with these sentences are assumed to be the training data that caused the misclassification. We manually check for ambiguity or labeling errors in the identified sentences.

Side-Effect Test Set Identifying the data that induce misclassification also requires that inputs analyzed successfully before correction remain so after correction. To confirm this, 20,000 sentences classified correctly by **Adot** are collected, and their classification results are verified after the identified data are modified and the model is retrained.
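To make the task-specific embeddings of §4 and the detection rule of §3.1 concrete before turning to the baselines, the sketch below captures a CLV-style vector (the activations of the layer immediately below the output layer) with a forward hook and flags a scenario sentence whose most similar training example carries a different label. The `model`, `penultimate_layer`, and `encode` callable are hypothetical stand-ins, not the actual Adot classifier.

```python
import torch
import torch.nn.functional as F

def clv_embeddings(model, penultimate_layer, encode, sentences):
    """CLV-style embeddings: activations of the layer right below the output layer."""
    captured = []
    hook = penultimate_layer.register_forward_hook(
        lambda module, inputs, output: captured.append(output.detach()))
    with torch.no_grad():
        for s in sentences:
            model(encode(s))          # encode(): hypothetical text -> tensor step
    hook.remove()
    return torch.stack([c.flatten() for c in captured])   # (num_sentences, dim)

def flag_ambiguous_scenarios(scen_vecs, scen_labels, train_vecs, train_labels):
    """Flag scenario i if its most similar training example has a different label."""
    sims = F.cosine_similarity(scen_vecs.unsqueeze(1),    # (S, 1, D)
                               train_vecs.unsqueeze(0),   # (1, T, D)
                               dim=-1)                    # -> (S, T)
    nearest = sims.argmax(dim=1)
    return [i for i, j in enumerate(nearest.tolist())
            if scen_labels[i] != train_labels[j]]
```

The same two functions can be reused for the WLV and ARLV variants by hooking additional layers and concatenating their flattened outputs.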
## 5.2 Baseline - Similarity Methods Edit Distance A method for measuring the superficial similarity. It is very similar to the manual debugging process. Embedding Vector Similarity A method for measuring the semantic similarity. Cosine similarity of embedding vectors for sentences is measured. Two types of representative off-the-shelf embeddings are prepared:Sent2Vec and KoGPT2. Sent2vec, was trained on both Korean Wikipedia and **Adot**'s training data. KoGPT2 is a pre-trained language model developed by SKT-AI, based on GPT-2. These embeddings are also used to comprise an embedding layer for the classification models discussed below. We describe the embedding methods in detail in the Appendix A. ## 5.3 Classification Models Used In The Experiments We employed classification models to generate taskspecific embeddings and assess the detection performance of training data that cause misclassification. Adot **Classifier** This is the model used for domain classification in **Adot**'s service. It is formed by concatenating a convolutional neural network (CNN) that uses a part of speech-tagged morpheme as a token with another CNN that uses character and dictionary information as a token followed by three fully connected layer on top. KoGPT2 Classifier It is a classifier based on pretrained-LM. As in the basic classifiers of preLM packages, three fully connected layers are added on top of KoGPT2. These models are trained using training data for domain classification of **Adot**' service. The data consists of 2.2 M sentences with 30 corresponding classes. We set the batch size to be 256 and finetune the model for 15 epochs using a learning rate of (1e-5). On the development dataset, the **Adot** and KoGPT classification models achieved accuracy rates of 98.4% and 94.2%, respectively. Because presenting a high-performance model is not within the scope of this study, a detailed description of the model is included in the Appendix A.2. ## 5.4 Human Evaluation The task designed in section 3.1 includes the user test. The evaluator rates the degree to which two sentences belong to the same class on a scale of 1–7. An example of the test set given to the evaluator is included in the appendix B.2. For this experiment, two groups of evaluators are recruited. Ordinary Service User (User) This group consists of six ordinary **Adot** users. Each user exhibits above-average linguistic proficiency and possesses a bachelor's degree or higher academic qualifications. Through correlation with this group, we checked whether user-aware class relevant could be estimated with similarity. Expert Evaluator (Expert) This group consists of seven people with experience in generating training data or evaluating quality of service for **Adot**. Each participant exhibits five years of annotation experience on average and demonstrates a thorough understanding of class boundaries. ## 5.5 Evaluation Metrics Experimental performance is evaluated in terms of specific metrics. If the data requires correction but is not ambiguous, it is considered to be a simple mislabeling error. In our experiment, the terms commonly used to define the formulation for evaluation metrics are Nx, s, and d, where Nx, s, and d denote the number of x ∈ {*s, d*}, scenario or training data identified as having an ambiguity, and input data, respectively. 
Ambiguous Data Ratio (ADR) measures the ratio of scenario or training data containing ambiguity in the identified scenarios or training data, Nsa/Ns, where Nsa and Ns denote the number of sa and the number of s, respectively. sa represents the scenario or training data with actual ambiguity among s. The existence of ambiguity was confirmed by a person directly checking if multiple interpretations were possible. Mislabeled Data from the ambiguity Ratio (MDR) measures the ratio of scenario or data containing ambiguity and labeling error in the identified scenario or data, Nsm/Ns. sm denotes scenario or training data with actual ambiguity and labeling error among s. Mislabeled Data (sm) was verified by a person directly checking the need for label correction. Correction Ratio (CR) Measure the classification success rate for retrained model on previously misclassified sentences after modifying data and retraining, Ndf /Ndfs . df represents input data misclassified by **Adot**, while dfs denotes data correctly classified after correction and retraining from within the df . Accuracy preservation Ratio (APR) measures the classification success rate for previously successfully analyzed sentences after modifying the data and retraining, Ndss /Nds . dss denotes input data from the ds that was correctly classified after correction and retraining, while ds represents data successfully classified by **Adot**. ## 5.6 Results Pearson Correlation Analysis As shown in Table 1, we show the result of pearson correlation analysis between user-aware class relevance and the measured similarities. In general, various similarity methods achieve significant positive correlations 4, which suggests that similarity methods can be used as a proxy estimator for user-aware class relevance. This result is regarded as an evidence of our hypothesis described in Section 3. This result reinforces our hypothesis that presenting data with high similarities but differing labels can aid in identifying ambiguous scenarios. Moreover, similarity methods more correlates with users than experts who know the service criteria accurately. Following (Pang et al., 2020), we also measure the mean inter-rater agreement. The study results indicate high agreement and yield two conclusions. Firstly, the methods utilized, particularly Adot-CLV, demonstrate a correlation almost equivalent to the average inter-rater correlation (0.589 vs. 0.633, respectively). Secondly, the manual evaluation explanation can be considered sufficient. Similarity with ordinary users was significantly higher than with experts. To ascertain the underlying cause of this phenomenon, we computed the concordance of each sentence pair at three distinct points, namely the service area, entity properties, and predicate. We investigated the correlation between manual evaluation and concordances. Notably, the expert group demonstrated a markedly high correlation in the service area, whereas the ordinary user group exhibited comparable correlations across all three areas. Further elaboration on this matter is presented in the appendix C, as it is not directly pertinent to the primary focus of our paper, namely ambiguity detection. Specification Ambiguity Detection As shown in Table 1, we presents the ambiguity and labeling errors resulting from ambiguity detected in the scenarios. Since the specification is the gold standard for services, it is mostly comprised of scenarios that have a clear meaning. 
Nonetheless, we can identify scenarios that contain ambiguity or require label modification. Interestingly, the number of data between containing actual ambiguity ((sa) = (Ns × ADR), KoGPT2+ARLV- (sa) = 210, **Adot**+ARLV-(sa) = 152) or requiring modification ((sm) = (Ns × MDR), Sent2Vec- (sm) = 40, **Adot**+ARLV-(sm) = 36) are not different significantly considering the number of scenar- | Task → | Correlation | Ambiguity in specification | Ambiguity in training data | | | | | | | | |------------------------|---------------|------------------------------|------------------------------|------|-------|-------|-------|-------|-------|-------| | Methods ↓ | All ↑ | Expert ↑ | User ↑ | Ns | ADR ↑ | MDR ↑ | ADR↑ | MDR↑ | CR↑ | APR↑ | | Edit Distance | -0.243 | -0.184 | -0.286 | 755 | 0.170 | 0.033 | 0.660 | 0.373 | 0.559 | 0.990 | | Embedding + Sent2Vec | 0.217 | 0.154 | 0.275 | 4448 | 0.037 | 0.005 | 0.612 | 0.315 | 0.496 | 0.992 | | + KoGPT2 | 0.268 | 0.155 | 0.394 | 1531 | 0.223 | 0.024 | 0.611 | 0.318 | 0.490 | 0.990 | | Task-Specific Adot+WLV | 0.237 | 0.170 | 0.294 | 2968 | 0.056 | 0.007 | 0.610 | 0.316 | 0.474 | 0.992 | | Adot+ARLV | 0.565 | 0.468 | 0.604 | 362 | 0.419 | 0.102 | 0.712 | 0.458 | 0.689 | 0.989 | | Adot+CLV | 0.589 | 0.489 | 0.628 | 243 | 0.469 | 0.132 | 0.711 | 0.463 | 0.599 | 0.992 | | KoGPT2+WLV | 0.336 | 0.227 | 0.440 | 1210 | 0.169 | 0.020 | 0.599 | 0.332 | 0.491 | 0.990 | | KoGPT2+ARLV | 0.512 | 0.445 | 0.513 | 755 | 0.278 | 0.039 | 0.688 | 0.439 | 0.601 | 0.989 | | KoGPT2+CLV | 0.512 | 0.445 | 0.513 | 507 | 0.391 | 0.047 | 0.694 | 0.448 | 0.600 | 0.990 | | Inter-Rater | 0.633 | 0.752 | 0.520 | - | - | - | - | - | - | - | ios (29,136).During the scenario ambiguity detection experiment, we found a noticeable contrast in the number of identified scenarios, denoted by Ns. Task-specific embedding tended to have a lower Nsthan off-the-shelf embedding. Furthermore, the upper layer-generated embedding displayed a lower Ns. Ns determined the performance difference in ADR and MDR. Training Data Ambiguity Detection Table 1 shows the performance in training data ambiguity detection. We verify that the ambiguity and labeling error is viewed as the cause of misclassification. In practice, approximately 70% of the specified data contain ambiguity, and approximately 45% require label correction. CR and APR represent the performance of identifying training data that cause misclassification. The results also demonstrate that similarity can be used to identify training data that cause misclassification. Moreover, it is confirmed that ARLV and CLV of task-specific embeddings are more suitable for the detection task. Effect of Task-specific Embedding As reported in Table 1, task-specific embeddings outperform other methods in both ARLV and CLV, while WLV does not achieve significant improvement. Possibly, this is because off-the-shelf embeddings occupy the majority of WLV, as discussed in the definition of ARLV (See in Section 4). To understand why this phenomenon happens, we visualize the representation of training data samples obtained from each embedding method using T-SNE. As illustrated in Figure 2, the task-specific embeddings (i.e., ARLV and CLV) clearly represent the training data samples compared to general embeddings. The task-specific embeddings based on the **Adot** model outperformed those based on KoGPT2. However, we do not consider the transformer-based embedding to be unsuitable for the task. 
The observed difference in performance appears to stem from **Adot**'s specialization in classifying this data, given that **Adot** has undergone extensive optimization as part of its use in the corresponding service. The differences in classification accuracy between the **Adot** and KoGPT2 models, as mentioned in the section 5.3, support this interpretation(98% vs. 94%). Observing Figure 2, one can discern a subtle yet distinct data segregation demonstrated by the **Adot**-based embeddings as compared to those based on KoGPT2. This differential expressive capacity seems to manifest as a performance discrepancy. In the case of general embeddings, KoGPT2 exhibits greater separation than Sent2Vec, which, in practice, is reflected by the overall superior performance of Embedding-KoGPT2 compared to Embedding-Sent2Vec. ## 6 Discussions Powerful Task-Specific Representation This study demonstrates the feasibility of detecting am- ![6_image_0.png](6_image_0.png) biguity through similarity and highlights the potential benefits of utilizing task-specific embedding to enhance performance. The experimental results show a positive correlation between model size and detection performance according to the scaling law. Future research efforts for ambiguity detection will focus on generating task-specific embeddings using larger models for superior results. Multilingual Ambiguity Detection This study was conducted for Korean-based services. Most users of **Adot** are Korean, and the inputs contain many Korean characteristics. And since the commands input to the smart speaker were the data of the study, fairly short and imperative sentences are targeted. Experimentation is required to determine if similar results can be obtained in other languages and other types of sentences. ## 7 Conclusion We introduce two processes to detect training data and scenarios that induce ambiguity. Moreover, task-specific embedding is adopted to improve detection performance. Comprehensive analysis reveals that compared to off-the-shelf embeddings, Sent2Vec and KoGPT2, task-specific embedding is much more suitable for ambiguity detection. Moreover, the results demonstrate the viability of identifying training data that cause misclassification. In the future work, we aim to compare the performance of task-specific embedding using a wider range of models and node selection methods. ## Acknowledgements This research was conducted based on the experience of developing and operating the A.( ) service by SK Telecom in South Korea. This research was supported and funded by the Korean National Police Agency. [Project Name: XR CounterTerrorism Education and Training Test Bed Establishment/Project Number: PR08-04-000-21]. This work was supported by Chungnam National University. ## References Edoardo Barba, Luigi Procopio, and Roberto Navigli. 2022. Extend: extractive entity disambiguation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2478–2488. Daniel M Berry, Erik Kamsties, and Michael M Krieger. 2003. From contract drafting to software specification: Linguistic sources of ambiguity. Univ. of Waterloo, Tech. Rep. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Saad Ezzini, Sallam Abualhaija, Chetan Arora, Mehrdad Sabetzadeh, and Lionel C Briand. 2021. 
Using domain-specific corpora for improved handling of ambiguity in requirements. In *2021 IEEE/ACM 43rd* International Conference on Software Engineering (ICSE), pages 1485–1497. IEEE. Yoon Kim. 2014. Convolutional neural networks for sentence classification. *Proceedings of the 2014 Conference on Empirical Methods in Natural Language* Processing (EMNLP). Vladimir I Levenshtein et al. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In *Soviet physics doklady*, volume 10, pages 707–710. Soviet Union. Yang Liu, Jishun Guo, Deng Cai, and Xiaofei He. 2019. Attribute attention for semantic disambiguation in zero-shot learning. In *Proceedings of the IEEE/CVF* international conference on computer vision, pages 6698–6707. Chris McCormick. 2016. Word2vec tutorial-the skip-gram model. Apr-2016.[Online]. Available: http://mccormickml. com/2016/04/19/word2vectutorial-the-skip-gram-model. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26. Roberto Navigli. 2009. Word sense disambiguation: A survey. *ACM computing surveys (CSUR)*, 41(2):1– 69. Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, Kewei Tu, et al. 2020. Towards holistic and automatic evaluation of open-domain dialogue generation. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference* on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Kun Qian, Ahmad Beirami, Satwik Kottur, Shahin Shayandeh, Paul Crook, Alborz Geramifard, Zhou Yu, and Chinnadhurai Sankar. 2021. Database search results disambiguation for task-oriented dialog systems. *arXiv preprint arXiv:2112.08351*. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. Jennifer M Rodd. 2020. Settling into semantic space: An ambiguity-focused account of word-meaning access. *Perspectives on Psychological Science*, 15(2):411–427. Zijie J Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, and Duen Horng Polo Chau. 2020. Cnn explainer: learning convolutional neural networks with interactive visualization. *IEEE Transactions on Visualization and Computer Graphics*, 27(2):1396–1406. Xiaoyao Yin, Yangchen Huang, Bin Zhou, Aiping Li, Long Lan, and Yan Jia. 2019. Deep entity linking via eliminating semantic ambiguity with bert. IEEE Access, 7:169434–169445. | Adot | KoGPT2 fine tuning | | |-----------------|----------------------|-----------------| | python Version | 3.6.9 | 3.6.10 | | Tensorflow Ver. | 1.15.2 | - | | Pytorch Ver. | - | 1.6.0a0+9907a3e | | epoch | 15 | 15 | | batch size | 256 | 256 | | learning rate | (1e-5) | (1e-5) | Table 2: Development environments ## A Implementation Details A.1 Development Environment We experimented on NVIDIA A100 (40GB). 
The Adot model was implemented and trained within the Keras and Tensorflow environments, while finetuning of KoGPT was conducted in the PyTorch framework. ## A.2 Classification Models The average **Adot** user only uses six Korean characters in an utterance, and there are typically 5 million calls per day. Owing to these characteristics, a lightweight and simple model capable of stable CPU prediction was required. Convolutional Neural network (CNN) based models are currently used in **Adot**. This domain classification model was created using Y. Kim's CNN for sentence classification (Kim, 2014). It has a model concatenated with character-based CNN and CNN models based on part-of-speech (POS) tagged morphemes reflecting the Korean features. The character-based CNN model uses the dictionary-based embedding layer as a single channel to use the content domain's dictionary information. The configuration of the fully connected layer on top of the convolutional layer is also slightly different from the model in our study. Figure 3 shows an outline of **Adot**'s domain classification model. ## A.3 Embeddings We represented a sentence data in the semantic space through two methods. ## A.3.1 Sentence Representation Using Word Embedding Word embedding is a representative method of expressing meaning in a semantic space. We developed two kinds of embeddings, character-level embedding and POS tagged morpheme-level embedding, to express sentences. A sentence is represented by concatenating it through two embeddings. Our embedding developed with skip-gram and negative sampling based on McCormick (2016) tutorial. Table 4 shows the detailed parameters of the embeddings we used. These parameters were determined empirically to maximize the performance of the **Adot**'s domain classifier. These embeddings were trained from Korean Wikipedia sentences. Korean Wikipedia is a set of documents written by many people and reflects the popular usage of the Korean language by Koreans. However, sentences on the Wikipedia are long, while the utterances coming into the **Adot** service are relatively short, colloquial, and that there are many content object names. To reflect this part, the embedding trained from the wiki were tuned with the training data of **Adot**. ## A.3.2 Sentence Representation Using Kogpt2 GPT-2 is a natural language processing model that uses machine learning algorithms to generate input sample text into text with syntactic, grammatical, and informational consistency. KoGPT-2, an open source-based GPT-2 model trained in Korean. Character Byte Pair Encoding (KBPE) was used as tokenizer. Korean Wikipedia, Modu Corpus, and the Blue House National Petition and private data like news were used as training data. When a sentence passes through the KoGPT2, the output values from the last layer were used as sentence representation. ## A.4 Training Data The training data of domain classifier of **Adot**'s was used as the training data of this experiment. Adot encompasses thirty service domains, including 'general', 'video', 'music', 'schedule', and 'weather'. A person directly constructed about 2.2 million training data according to the design of the service. The data is primarily Korean sentences. Some foreign language sentences are also included. The table5 includes an extensive description of the model and training data. The data used in this experiment is from the December 2021 version, and it's important to note that its structure and content differ from the current **Adot** service data. 
## B Testset Details B.1 Designer-Annotator Testset To improve performance, we have collected utterance logs from **Adot** users and used them for accu- ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) racy evaluation. A test set was constructed using the utterances input in April 2022. Consensus Error& Ambiguity testset Among those utterances, utterances that were classified incorrectly were targeted. To prevent data modification that occurred in both directions of the classes, only utterances with the correct class ''general'' were collected. The "general" class is for device manipulation such as volume control in **Adot**. We constructed 2,300 test cases. Because these data are misclassification data collected from the **Adot** classification model, only the **Adot** classification model was applied to this experiment. Side-effect test set This set is a test set to evaluate the side effects caused by label modification. Through modification, even if it is possible to correctly classify misclassified utterances, the classification results of utterances that have been accurately analyzed should not be changed. We collected 20,000 utterances that the current classifier accurately analyzes and expect the data to be analyzed accurately even after modification. ## B.2 Scenario Ambiguity Testset We built two test sets to detect consensus errors between designers and users. | character-level embedding | morpheme-level embedding | | |-----------------------------|--------------------------------------|---------------------| | tokken | Character (Korean, English, numbers) | POS tagged morpheme | | Dictionary size | 1638 | 9391 | | embedding space | 256 | 256 | | skipgram window size | 3 | 3 | | negative sampling rate | 1 | | Table 4: Parameters of Two Kinds of Embeddings There is sentence 1 and sentence 2. Please rate how confident you are that the two sentences (should) belong to the same domain on a scale of 1-7. A domain is a unit for grouping and processing similar services. ![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png) $\mathbf{a}=\mathbf{a}\cdot\mathbf{a}$. ![10_image_2.png](10_image_2.png) Score Criteria 1: Completely different domains (cannot be viewed as the same domain) 7: Exactly the same domain (must be the same domain) | No. | sntence 1 | sentence 2 | 1 | 2 | |-------|-------------------------|--------------------------|-----|-----| | 1 | "Order an apple." | "Order a banana." | o | o | | 2 | "Order an americano." | "check payment method." | o | o | | 3 | "check payment method." | "Add toppings to pizza." | o | o | | o | o | | | | | : | | | | | | 911 | "Make a playlist." | "Add toppings to pizza." | o | o | | 912 | "check payment method." | "Pay for Youtube music." | o | o | | 913 | "Order a banana." | "Make a playlist." | o | o | Each sentence belongs to one of the eight domains below. general / schedule / commerce / food_delivery / video / news / radio | 3 | 4 | 5 | 6 | 7 | |-----|-----|-----|-----|-----| | o | o | o | o | o | | o | o | o | o | o | | o | o | o | o | o | | o | o | o | o | o | | o | o | o | o | o | | o | o | o | o | o | | o | o | o | o | o | ![10_image_3.png](10_image_3.png) is attached. | about 2.2M | | |-------------------------------|-------| | training data size | | | domain number | 30 | | average character size | 16.77 | | average word size | 4.50 | | average morpheme size. | 8.56 | | average named entity numbers | 1.1 | | average domain entity numbers | 1.9 | Table 5: Training Data Information of Adot 's Domain Classifier. 
Dec. 2021 version Correlation testset First, sentence pairs were collected to confirm the correlation between similarity and the degree to which the user thinks the classes are consistent. 913 pairs of sentences were collected from validation data for the Adot service. Among them, test sets were collected from eight domains in which domain classification frequently fails: 'commerce,' 'food,' 'general,' 'news,' 'schedule,' 'video,' 'music,' and 'radio.' Like a domain classifier, domain information was used as a label for classification. Each sentence pair is randomly collected within the verification data of 8 domains and may have the same class or a different class. Figure 4 and 5 shows example of testset. | similarity | scenario | entity | predicate | |--------------|------------|----------|-------------| | Expert | 0.720 | 0.380 | 0.104 | | User | 0.487 | 0.476 | 0.307 | | Adot-CLV | 0.357 | 0.443 | 0.249 | Ambiguity detection testset To test consensus error detection between designers and users, 29,136 sentences in which designers defined labels were collected. These sentences are specifications defined for **Adot** service scenarios and defined by the designer of **Adot** service. For this Spec, the sentences in which Consensus error may occur were indicated through the method mentioned in A. It was confirmed whether there was an actual ambiguity or consensus error for the specified sentences. ## C Expert-Ordinary User Correlation Comparison With ordinary users, similarity was significantly higher than with experts. To determine the reason behind this phenomenon, concordance is estimated at three points for each sentence pair—service area, entity properties, and predicate. In addition, the correlation between manual evaluation and concordances is estimated. The results are presented in Table 6. Although the expert group exhibits a very high correlation in the service area, the ordinary user group exhibits similar correlations in all three areas. Let us consider the pair, "Play Netflix" and "Play Spotify" Ordinary users may believe that the classes are similar because of the similarity of the predicates. In contrast, experts believe that Netflix and Spotify provide different services (video vs. music)—so there is no agreement irrespective of the predicate. The results confirm that the similarity is more similar to the judgment of ordinary users than that of experts. Consequently, similarity is deemed to be useful in detecting in designer-user consensus errors. ![12_image_0.png](12_image_0.png) 점수의 기준은 다음과 같습니다. 
![12_image_1.png](12_image_1.png) ![12_image_2.png](12_image_2.png) 1: 완전히 다른 도메인이다 (같은 도메인으로 볼 수 없다) 7: 완전히 같은 도메인이다 (같은 도메인임에 를림 없다) | 문장 1 | 문장 2 | B | |------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------|------| | 내일이 9월 15일이지 | 21년 1월 23일에 | | | 18번 주문 예약 캔슬해줘 | 5번째 메뉴 열판 스몰 주문해 | | | 볼 5 로 라디오 들어볼래 | 이전 회차 동영상 를자 | | | 중요하다고 했면 왕십리 오늘 일정 좀 실행해봐 | 이번주 화요일에서 금요일까지 설정된 알람 삭제 요청할래 다 | | | 화요일 목요일 오후 10시에 반복 알람해달라고 | 월 새벽 4시에 알람 반복에 이동국 | | | 알림 헬프 | 추천 어플 | | | 다음 회차 를어줘봐 | 알람 소리 를어줘 | | | 왼쪽 목록 좀 봐봐 | 타이머 버른 실행해 | | | 블루베리피자 | 블루베리 담아줘 | | | 금 주 쇼핑하자 | 치킨 슈퍼파파스 시키자 | | | 아래로 두번 이동 | 서치 | | | 노래 중에서 죽겠다 유투브에서 좀 검색해주겠니 | 노래 우율한 편지로 SETTING 부탁한다 | | | 소녀시대 우기 목소리로 알람 설정 해봐봐 | 월 새벽 4시에 알람 반복에 이동국 | | | 목록 전에 것으로 볼거야 | 지금 시간 볼 거야 | | | 베이컨체더치즈 곡물 도우 피자 | 크레딧 카드로 결제 | | | 다크 나이트 재생하렴 | 빠르게 재생하자 | | | 그럼 낮 | 방 | | | 요증 NEWS TOPIC 재생 GO GO | THE LAST WORD 재생해줘 | | | 카롤릭 평화방송 를어줘 | 가를릭 평화방송 를려줘 | | | 옥수수 KIDS 재생 고고 | 옥수수키즈 어떻게 써 | | | 다음 항목으로 이동하기 | 너 사용법률어줘 | | | 내일 여멸시에 생생가스 오야꼬동 세 개배달 시켜줄래 | 이달 마지막 주 금요일에 주문했던 거 배달 현황 확인해 | | | 대각선 오른쪽 아래쪽으로 이동해줘 | 타이머 벨소리모드 버튼 클릭 | | | 뉴스 볼륨 10퍼센트 줄이자 | 티비 볼륨 좀 조용했으면 | | | 또다른 상품 조회 | BQ 치킨 추천 상품 얼마에요 | | | 99 | 93 1 라디오로 깨워해볼래 | | | GUATEMALA CITY 시간 | 엘라스트 원준로 할게 | | | 취침예약해놓은 것 좀 취소시켜줘 | 취침예약 하려면 어떻게 해 | | | 배달 상태 체크하고 싶거든 | 배달 부탁할끼 | | | 2 2 | 다음주 월요일 며칠인지 듣고싶어 | | | 지금 이 채널 좋아요 목록에서 없애주라 | 화을에 사람들이 좋아하는 채널 켜 | | | 한국 총선때 스브스에서 한 뉴스 재생해봐 | 요즘 내가 봤먼 뉴스 를어봐줘요 | | | Figure 5: Human evaluation test set example. The evaluation was conducted in Korean; a more detailed explanation | | | is attached.
rabinovich-etal-2023-reliable
Reliable and Interpretable Drift Detection in Streams of Short Texts
https://aclanthology.org/2023.acl-industry.42
Data drift is the change in model input data that is one of the key factors leading to machine learning models performance degradation over time. Monitoring drift helps detecting these issues and preventing their harmful consequences. Meaningful drift interpretation is a fundamental step towards effective re-training of the model. In this study we propose an end-to-end framework for reliable model-agnostic change-point detection and interpretation in large task-oriented dialog systems, proven effective in multiple customer deployments. We evaluate our approach and demonstrate its benefits with a novel variant of intent classification training dataset, simulating customer requests to a dialog system. We make the data publicly available.
# Reliable And Interpretable Drift Detection In Streams Of Short Texts Ella Rabinovich Matan Vetzler Samuel Ackerman Ateret Anaby-Tavor IBM Research {ella.rabinovich1, matan.vetzler, samuel.ackerman}@ibm.com atereta@il.ibm.com ## Abstract Data drift is the change in model input data that is one of the key factors leading to machine learning models performance degradation over time. Monitoring drift helps detecting these issues and preventing their harmful consequences. Meaningful drift interpretation is a fundamental step towards effective retraining of the model. In this study we propose an end-to-end framework for reliable modelagnostic change-point detection and interpretation in large task-oriented dialog systems, proven effective in multiple customer deployments. We evaluate our approach and demonstrate its benefits with a novel variant of intent classification training dataset, simulating customer requests to a dialog system. We make the data publicly available. ## 1 Introduction Contemporary data centers rely heavily on machine learning services in their deployed systems. These systems are vulnerable to the data drift problem: the phenomenon where the statistical properties of the underlying independent variable change over time. As a concrete example, consider the case where the distribution of data arriving to a supervised classifier gradually diverges from that the model was trained on. Such a phenomenon introduces one of the key challenges in maintaining large models, where drift typically results in performance degradation. Manual inspection of the data is labor-intensive and error-prone, and actual drift might remain unnoticed. Automatic monitoring and detection of divergences in incoming data streams facilitates early risk mitigation introduced by drift. Goal-oriented dialog systems1 have gained much attention in both the academic and industrial communities over the past decade. The core component of a task-oriented dialog system is the NLU module: the user utterance is either transformed into a modeled intent2 with an appropriate flow of subsequent actions, or labeled as unrecognized and stored in the *unhandled pool* of out-of-scope requests. In practice, the NLU module makes use of a supervised text classifier, where data drift is triggered by "production" data (customer queries) that changes away from the distribution the classifier was trained on. Here we address the scenario of data drift detection in the context of large deployments of task-oriented dialog systems, where emergence of novel topics or deviations in the way customer introduce queries is not uncommon. Existing approaches to data drift detection are roughly categorized across two functional dimensions: (1) model-dependent vs. model-agnostic and (2) anomaly detection vs. change point detection. In the context of first dimension, 'model' refers specifically to a predictive model (e.g., classifier) receiving the text stream as inputs. A *modeldependent* method directly considers the underlying model performance to detect drift, e.g., by tracking the posterior probability estimates of a classifier. A *model-agnostic* method uses only the incoming data itself for change detection, e.g., by measuring changes in text representations, whether or not such changes ultimately would affect the performance of a classifier model trained on anchor data. 
Another strength of the model-agnostic approach lies in its direct access to data: once detected, the drift can be explained and interpreted, thereby potentially leading to actionable recommendations. The second dimension distinguishes between anomaly detection that identifies outliers at the single chunk level, and *change-point* detection where a window of recent samples is examined to de1Also referred to as "task-oriented" dialog systems, or "virtual assistants" (VA). 438 tect (with statistical guarantees) a point at time where the underlying data distribution undergoes a change; the latter models are robust to noise and transient changes. We propose a pipeline for modelagnostic change-point detection in task-oriented dialog systems. Figure 1 illustrates the pipeline, which is further described in detail in Section 4. User requests towards a dialog system—natural language utterances at the first point of interactionoften contain personal and sensitive information, and are subject to agreements that prevent providers from sharing this data publicly. Extending a large, diverse and publicly-available intent classification dataset (Larson et al., 2019), we build a corpus that closely resembles a dialog system request stream, further using it for evaluation of the proposed drift detection approach.3 The contribution of this work is, therefore, twofold. First, we propose and evaluate an end-to-end pipeline for model-agnostic change-point detection in task-oriented dialog systems, that has been proven effective in multiple large-scale customer deployments. Second, we create and release a extension of an intent classification training dataset that closely imitates the nature of streaming requests towards a virtual assistant. ## 2 Related Work The main approaches to drift detection in textual streams are model-dependent: they rely on the performance of the underlying classification model, where decreasing classifier confidence (or increasing error rate) are indicative of divergence in statistical properties of the data the classifier was trained on, and the newly arriving texts (Ryu et al., 2012; Sethi and Kantardzic, 2017; Ackerman et al., 2021). Model-agnostic (e.g., classifier-independent) approaches are commonly applied for novelty detection in textual streams that do not necessarily undergo classification, like news or a tweet feed. As an example, Spinosa et al. (2007) and Hayat and Hashemi (2010) use concept-based clusters to represent data distributions of temporal textual chunks, and then detect the hidden topic drifts in terms of the difference between concept-based clusters in two adjoining data chunks. A similar approach for novelty detection in textual data was also applied by Faria et al. (2013) and Li et al. (2017). Both the model-aware and model-agnostic ap-3The dataset is available at https://huggingface.co/ datasets/ibm/clinic150-sur. proaches are commonly used to detect a point of change, regardless of the subsequent trend of the introduced novelty. As such, these methodologies are best associated with "anomaly" or "outlier" detection, while in the case of a task-oriented dialog system we are interested in detecting a systematic, consistent drift trend - a potential trigger for intent classifier retraining. The only study addressing change point detection in textual data that we are aware of is Wang and Goutte (2018), who apply LDA (Blei et al., 2003) for detecting change points in document streams from twitter and news feed. 
While LDA can be used effectively for modeling long documents, it is practically inapplicable for short (often 2–3 word) requests. Table 1 summarizes the landscape of the prior art in the domain of drift detection in textual streams. Our work bridges the gap in the domain of *model-agnostic*, statisticallyrobust *change-point* detection for streams of short texts, while interpreting the detected drift. study OD CPD M-AW M-AG ![1_image_0.png](1_image_0.png) Ryu et al. (2012) ✓ ✓ Sethi and Kantardzic (2017) ✓ ✓ Ackerman et al. (2021) ✓ ✓ Hayat and Hashemi (2010) ✓ ✓ Faria et al. (2013) ✓ ✓ Li et al. (2017) ✓ ✓ Spinosa et al. (2007) ✓ ✓ Wang and Goutte (2018) ✓ ✓ our study (short texts) ✓ ✓ ## 3 Dataset We study the phenomenon of data drift in the context of (natural-language) user requests to a taskoriented dialog system. Novel topics, or deviations from existing topics can emerge as the result of new services introduced by a company, failures in existing service coverage, or external trends and factors. Large or immature deployments face the need to constantly monitor requests poorly-covered by the existing service for identifying points where the distribution of the input data has changed, for effective and efficient model retraining. Publicly-available datasets of real-word user requests in customer deployments are extremely scarce due to company agreements, confidentiality ![2_image_0.png](2_image_0.png) and privacy considerations. While the proposed pipeline has been evaluated on large customer deployments, here we create a novel, carefully curated dataset that reliably imitates the characteristics of user requests, and further conduct drift detection evaluation on the collected data. Intent classifier training examples are inherently designed to reliably represent user requests a VA. Naturalistic user requests, however, typically have several characteristics in which they differ from training examples: (1) the same request semantics can be conveyed in many possible ways, while training examples of the respective intent typically cover the potential diversity only to a partial extent; (2) contrary to intent training examples that only contain unique phrases, actual user requests include many duplicates (multiple customers asking the same question); and (3) customer requests in real-world systems are typically shorter (often significantly so) than classifier training examples. Using CLINC150 (Larson et al., 2019), a large a diverse 150-intent classification dataset, we generate its extended version simulating the nature of customer requests— CLINC150-SUR (simulated user requests)—by addressing the mentioned distinctions, as detailed below. A typical large customer virtual assistant size varies between few dozens to hundreds of intents, often spanning multiple domains. CLINC150 is multi-domain 150intent dataset, which makes it a suitable test-bed for our drift detection experiments. Data Augmentation for Diversity We achieve higher request diversity by applying LAMBADA (Anaby-Tavor et al., 2020), a tool for classifier training set augmentation; LAMBADA generates phrases sharing the same semantic charge as the seed classification examples provided as input. Next, we apply the Parrot paraphrasing framework, generating up to five additional phrasing variants for each data example. While the LAMBADA generates in-class semantic-preserving but lexicallydiverse examples, Parrot adheres to more conservative choices by producing slight variations of its input phrases. 
As an example, considering the CLINC150 "insurance" intent training example "can you tell me the name of my insurance plan?", the request "can you tell me what insurance plan I am signed up for?" was generated by LAMBADA, and "can you tell me what insurance plan I have?" was further added by Parrot. Weighted Upsampling of Duplicates Figure 2 illustrates the differences in request length: the distribution of the relative ratio of user requests of certain length (in tokens) observed in the intent classification data (left) significantly differs from that evident in a real-world large proprietary dataset of streaming customer requests (right). In particular, over 66% of actual user utterances consist of up to 5-token requests, while only 18% of the CLINC150 data exhibit similar length. Short and often non-informative requests challenge tools that rely on textual semantic similarity, hence affecting the process of drift detection. We therefore strive to imitate the naturalistic length-based request distribution in our dataset, by upsampling the augmented data to preserve similar length-based distribution as in Figure 2 (right). As a concrete example, short requests like "insurance" and "my insurance" were upsampled, mirroring their natural frequency in a real-world VA, while only a single instance of "I would like to know all of the covered benefits that are given by my health care plan" remained in the final dataset. We report the CLINC150-SUR ![3_image_0.png](3_image_0.png) statistics in Table 2. The final collection of ∼600K requests is made publicly available. | dataset | mean | total | |--------------------------------------|--------|---------| | original (Larson et al. (2019)) | 150.0 | 22.5K | | + generation (Rahamim et al. (2023)) | 448.4 | 67.3K | | + rephrasing (Parrot) | 1.65K | 250K | | + upsampling | 4K | 600K | Table 2: Statistics of the created dataset CLINC150- SUR. Mean number of requests per intent, and the total amount of requests is reported per each expansion step. ## 4 Interpretable Drift Detection We propose and evaluate a multi-step approach for reliable and interpretable change point detection in textual streams. The end-to-end pipeline of drift detection and interpretation is illustrated in Figure 1; below we describe each step in more detail. ## 4.1 Drift Point Detection Model Training The initial distribution of the dataset was learned by training an autoencoder (AE) on a seed dataset representing data distribution at the beginning of monitoring window.4 An AE is a special type of neural network that is trained to reproduce its input using the encoderdecoder architecture. Given a dense text representation (embedding) e, an AE first encodes the text into a lower-dimensional latent representation, then decodes the latent representation back to the text representation eˆ; it essentially learns to compress the data while minimizing the reconstruction error J (e, eˆ). The network's "success" at reconstructing a new example at inference time reflects the correlation of this instance to the nature of data 4We used the MLPRegressor implementation at sklearn with three hidden layers of 600, 150 and 600. MLPRegressor functions as an autoencoder when provided with identical input and output representations. the model was trained on. 
In the context of text processing, autoencoders have been effectively applied to the task of anomaly and novelty detection (Paula et al., 2016; Zhou and Paffenroth, 2017; Mei et al., 2018).5 Operating at the individual instance level, an autoencoder detects a pool of "outliers" from within a given data. The Universal Sentence Encoder (USE; Cer et al. 2018) was used for encoding requests into dense representations e, due to its runtime efficiency. Another intuitive approach to instance-level outlier detection employs the *perplexity* metric: the extent of surprisal of a pretrained language model by an unseen text. The approach has been studied by Freeman et al. (2021) for detecting anomalies in streams of short texts; we leave the investigation of this alternative approach for future work. Drift Candidates Detection Incoming request data is split into fixed-sized batches (in terms of the number of requests), and the model is applied on every new batch as it arrives, computing reconstruction similarity for each request in the chunk. A request's embedding e reconstruction similarity is computed as the *cosine* similarity of its original representation to the representation of its reconstructed counterpart eˆ, i.e., cosine(e, eˆ). Utterances poorly reconstructed by the anchor model are considered *outliers*: requests where *cosine*(e, eˆ)<γ, for a predefined γ, are stored in the outlier pool O. For data chunk at the time step ti, its similarity sito the anchor dataset is calculated, producing a growing sequence of numerical observations S={s1, s2, . . . , st−1, st}; at each time step t, the sequence is passed to changepoint detection module. Change-Point Detection Drift is indicated by a change-point in the distributions of the observed similarities S. That is, index ts < t is a changepoint, the true starting index of the change, if the distributions of values {s1*, . . . , s*ts−1} (before) and {sts*, . . . , s*t} (after) differ significantly. We apply the change-point model (CPM; Ross and Adams 2012, implemented in R as cpm, Ross 2015) algorithm repeatedly on the past {s1*, . . . , s*t} at each t. That is, at a given t, if the most significant candidate split point is significant enough (the CPM p-value is a fixed < α, say 0.05), we say that td = t 5Our future work includes experimenting with variational autoencoder (VAE), introducing a regularisation term into its loss function for better generalization capabilities. is the detection index; moreover, the most significant split index, tp —where tp < td - is tˆs, that is, the best guess of the actual change point ts, if it actually happened. The CPM is unique in that it correctly maintains the false positive rate at α (e.g., 0.05) even though it is applied repeatedly in sequence, testing each potential change location for each t. That is, if a detection is made at t, the probability the detection was false (there was no change point) is at most α. Furthermore, this method makes no parametric assumptions about the distributions. See Ackerman et al. (2020) for further details and discussion. ## 4.2 Drift Interpretation The predicted change point tp detected over a sequence of similarity observations S is further used as an indicator for the start point of topical novelties; all outlier utterances from the outlier pool O that occurred after the predicted change point (with time indices t ∈ [tp, td]) are utilized for semantic grouping, or clustering. In our scenario, an effective clustering procedure should have several properties. 
First, the number of clusters is unknown, and has to be discovered by the clustering algorithm. Second, the nature of data typically implies several large and coherent clusters, where users repeatedly introduce very similar requests, and a very long tail of unique (often noisy) requests that do not have similar counterparts. We apply the RBC clustering approach used by Rabinovich et al. (2022), that was specifically tailored for the scenario of unhandled requests in task-oriented dialog systems: the procedure does not require a predefined number of clusters, tolerating non-clusterable instances. Figure 3 illustrates a typical outcome of the clustering process; identified clusters—each representing likely instances of the same potentially novel intent—are depicted in color, while non-clusterable instances, constituting approximately half of the instances, appear in grey. ## 5 Experimental Results In this section we describe the experimental setup and results for two major evaluation phases: change-point detection and drift interpretation. ## 5.1 Drift Point Detection Results 5.1.1 Experimental Setup Drift Scenarios Introducing drift into a VA request stream for thorough evaluation is a non-trivial ![4_image_0.png](4_image_0.png) task. A realistic setup would entail simulating one or more novel (unseen) topics that gradually comprise an increasing number of requests over time, as in the case of a new feature introduced by a service provider, being gradually adopted by customers. Another plausible scenario is where novel topics are introduced by service interrupt or unexpected failure; in that case, one may expect a steep increase in atypical requests, followed by nearly plateau distribution over time. We refer to these scenarios as (a) and (b), respectively. Correctly identifying scenarios where no drift was introduced is of considerable importance as well, ensuring the system is not prone to false positives. We cover this scenario by two additional experimental setups where (c) no drift is introduced, and (d) a short-lived anomaly is introduced spanning a small number of consecutive data batches. The various drift scenarios, as reflected in data batch similarities S={s1, s2, . . . , st−1, st} to the anchor model, are depicted in Figure 4. ![4_image_1.png](4_image_1.png) Drift Detection Multiple setup decisions were used for drift detection experiments in this study. A series of 32 temporally-ordered data batches of 5K requests each was generated from shuffled request dataset CLINC150-SUR, where ∼5% randomly selected intents out of 150 were held out for drift injection. Although drift detection is likely to be more robust on large data chunks, we found consistent behavior when working with batches containing as few as a couple of hundreds of requests.6 The first two data chunks where used for training the anchor AE model, and the remaining 30 for simulating a temporal stream of requests.7 Drift was introduced by adding requests from held-out intents either gradually (a) or uniformly (b) starting at the middle of the stream, i.e., at time step t=15. The novel intents causing the drift, as well as the subset of requests spanning the 32 data chunks, were selected at random per every experiment. A single parameter that was tuned for drift detection is the outlier detection threshold γ (see Section 4.1). While typically lying within the [0.75, 1] range, the optimal value of γ varies according to the data nature. 
In multi-domain deployments, higher γ would typically tolerate the inherent diversity of the incoming requests; in contrast, in single-domain deployments, lower γ would impose a stricter threshold for outliers detection. We tune γ to optimize detection accuracy per deployment using a held-out portion of data; for the CLINC150- SUR used in this work, γ was set to 0.775. ## 5.1.2 Experimental Results We compare the performance of the AE-based similarity measurements to various dataset similarity metrics suggested in literature. While computing anchor-batch similarity, metrics operating at the instance level (i.e., aggregating the similarity of individual requests to the anchor model) can be used for seamless generation of the outlier pool, further used for drift interpretation. Metrics that operate at the *batch* level typically make use of measures of central tendency and dispersion, comprising a less natural (albeit adaptable) choice for our scenario. metrics from that work. IRPR (Zhao et al., 2017) is a corpus distance metric based on informationretrieval techniques focused on precision and recall. Medoid (Kaufmann and Rousseeuw, 1987) applies cosine similarity over the arithmetic mean of embeddings of two textual sets. MAUVE (Pillutla et al., 2021) estimates the gap between two texts using KL-divergence over the area under the information divergence frontiers. FID (Heusel et al., 2017) calculates the 2-Wasserstein distance on fitted continuous multivariate Gaussian over two datasets. Drift Detection Metrics In line with previous work on drift detection (Wang and Goutte, 2018; Ackerman et al., 2021) we report multiple results: detection offset denotes the number of steps between the two points (ts, td): where the drift was introduced (ts) and detected (td), i.e., |td−ts|; *detection deviation* measures the difference between the actual drift injection point (ts), and the drift point suggested by the CPM module (tp=tˆs), i.e., |tˆs−ts|; finally, *drift rate at detection* denotes the relative rate of drift requests within the entire amount of requests in the batch corresponding to the detection point td. We report these metrics for both gradual (a) and uniform (b) drift scenarios in Figure 4. An additional measurement of interest is the false negative (FN) rate, the proportion of experiments where drift was not detected over the series of 30 observations, despite drift that was injected. We address the two no-drift scenarios—(c) and (d)—in Figure 4, by reporting the rate of *false* positives (FP): the proportion of experiments where drift was erroneously detected by the CPM module using each one of the similarity metrics. Table 3 reports the results. Using AE for computing datasets similarity performs roughly on par with IRPR, and outperforms other approaches, across the board. Detection offset and deviation are higher in the gradual drift scenario (7.08 and 2.03 vs. 5.82 and 1.07, respectively) reflecting the more challenging setup of a growing drift, compared to the stable plateau drift spread starting a certain point. The relatively low drift rate at detection (1.7%–2.2%) implies that the procedure is sensitive to drift at its early stage. On the other hand, the low rate of false positive and negatives is indicative of the robustness of the detection routine. Finally, in the next paragraph, we show that our approach outperforms the model-dependent approach leveraging a classifier confidence scores. 
| metric | instance | false | | | | | | | | |---------------------------------------|---------------------|---------------------|-----------|-----------|---------------|-----------|-----------|---------------|-------| | detection rate | drift type: gradual | drift type: uniform | | | | | | | | | level | FP | FN | detection | detection | drift rate at | detection | detection | drift rate at | | | offset | deviation | detection | offset | deviation | detection | | | | | | AE (our approach) | ✓ | 0.04 | 0.04 | 7.08 | 2.03 | 0.017 | 5.82 | 1.07 | 0.022 | | IRPR (Zhao et al., 2017) | ✗ | 0.08 | 0.04 | 7.02 | 1.79 | 0.015 | 5.71 | 1.01 | 0.020 | | Medoid (Kaufmann and Rousseeuw, 1987) | ✗ | 0.10 | 0.21 | 9.17 | 5.08 | 0.017 | 7.23 | 1.25 | 0.020 | | MAUVE (Pillutla et al., 2021) | ✗ | 0.15 | 0.11 | 7.02 | 3.05 | 0.023 | 5.91 | 1.09 | 0.021 | | FID (Heusel et al., 2017) | ✗ | 0.09 | 0.13 | 8.22 | 4.16 | 0.026 | 6.28 | 1.01 | 0.024 | ## Model-Dependent Experiments Additional Set of experiments was conducted using intent classifier posterior estimates as indicator for data drift. Identical experimental setup was applied, where 95% of intent training set from CLINC150 (seed data) was used for training SVM classifier with training instances' embeddings. Roughly 5% of intents were randomly selected and held-out as "novel" at each experiment. The pretrained classifier was then used to classify batches of requests from CLINC150-SUR corresponding to the seed data (before injecting drift), and to the seed data extended with drift requests (starting the drift point). Mean classifier confidence was computed for every request batch yielding a series of observations, which was further inut to the change-point detection module (see Section 4.1). Drift was detected in less than 20% of the experiments, compared the 96% with the model-agnostic approach, highlighting the benefits of the direct access to data for drift detection. ## 5.2 Drift Interpretation Results Drift interpretation is a two-step process: first, outlier requests are grouped together based on their semantics, thereby, firming up dense clusters conveying the same intent; second, identified clusters are assigned with names for better consumability. The subset of outlier requests O starting from the predicted drift point tˆs is further used for identifying novel topics, indicated by dense clusters of requests that share similar semantics. The pool of outliers is not limited to novelties but also contains requests pertaining to existing ("known") intents that could not be successfully reconstructed by the anchor model, and requests that pertain to topics that were left out of the VA scope by design. We apply the RBC clustering approach by Rabinovich et al. (2022) with defaults for surfacing topical clusters, and focus on topical coverage (recall) in our evaluation. Each cluster—a group of similar outlier utterances—is assigned an intent label based on the majority of its members, and the coverage is computed as the ratio of detected intents out of injected drift intents for each individual experiment. The mean recall in 100 experiments was 0.709, meaning that on average, 70% of the injected drift intents were identified as such. Inspecting names (automatically) assigned by the algorithm to detected outlier clusters, we can identify a significant degree of overlap between those names and drift intent labels, which were presumably created by human annotators. 
As an example, the "exchange_rate" drift intent was identified as such and assigned the label "exchange rate" by the clustering algorithm; requests from the "order_status" drift intent were named as "order tracking", and "redeem_rewards" was surfaced as "redeem rewards points". ## 6 Conclusions We propose and evaluate a pipeline for modelagnostic change-point detection in the context of drift detection in task-oriented dialog systems NLU module. We demonstrate the benefits of the proposed approach on an expanded version of an intent classification training dataset, that closely imitates the nature of streaming requests towards a taskoriented dialog system - the dataset that we make available to the research community. We demonstrate that AE can be used for effective and efficient change-point detection, performing on par with state-of-the-art dataset similarity metrics, while operating at the instance level. Our future directions include experimenting with the (baseline) metric of language model perplexity as well as variational autoencoders for the task of drift detection in streams of short texts. Extending experimental setup to the multi-lingual setting is another direction we plan to pursue. ## Acknowledgements We are grateful to the three anonymous reviewers for their constructive feedback. We would also like to thank Orna Raz, Eitan Farchi and Haode Qi for their kind help and fruitful discussions. ## References Samuel Ackerman, Parijat Dube, Eitan Farchi, Orna Raz, and Marcel Zalmanovici. 2020. Detection of data drift and outliers affecting machine learning model performance over time. *JSM Proceedings,* Nonparametric Statistics Section, pages 144–160. Samuel Ackerman, Orna Raz, Marcel Zalmanovici, and Aviad Zlotnick. 2021. Automatically detecting data drift in machine learning classifiers. *arXiv* preprint arXiv:2111.05672. Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. Do not have enough data? deep learning to the rescue! In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 7383–7390. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. *Journal of machine Learning research*, 3(Jan):993–1022. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Céspedes, Steve Yuan, Chris Tar, et al. 2018. Universal Sentence Encoder. arXiv preprint arXiv:1803.11175. Elaine R Faria, João Gama, and André CPLF Carvalho. 2013. Novelty detection algorithm for data streams multi-class problems. In *Proceedings of the 28th annual ACM symposium on applied computing*, pages 795–800. Cynthia Freeman, Ian Beaver, and Abdullah Mueen. 2021. Detecting anomalies in sequences of short text using iterative language models. In *The International FLAIRS Conference Proceedings*, volume 34. Morteza Zi Hayat and Mahmoud Reza Hashemi. 2010. A dct based approach for detecting novelty and concept drift in data streams. In *2010 International Conference of Soft Computing and Pattern Recognition*, pages 373–378. IEEE. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information processing systems*, 30. Leonard Kaufmann and Peter Rousseeuw. 1987. Clustering by means of medoids. Data Analysis based on the L1-Norm and Related Methods, pages 405–416. 
George Kour, Samuel Ackerman, Eitan Farchi, Orna Raz, Boaz Carmeli, and Ateret Anaby-Tavor. 2022. Measuring the measuring tools: An automatic evaluation of semantic metrics for text corpora. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Peipei Li, Lu He, Haiyan Wang, Xuegang Hu, Yuhong Zhang, Lei Li, and Xindong Wu. 2017. Learning from short text streams with topic drifts. *IEEE transactions on cybernetics*, 48(9):2697–2711. Mei Mei, Xinyu Guo, Belinda C Williams, Simona Doboli, Jared B Kenworthy, Paul B Paulus, and Ali A Minai. 2018. Using semantic clustering and autoencoders for detecting novelty in corpora of short texts. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Ebberth L Paula, Marcelo Ladeira, Rommel N Carvalho, and Thiago Marzagao. 2016. Deep learning anomaly detection as support fraud investigation in brazilian exports and anti-money laundering. In *15th IEEE international conference on machine* learning and applications (icmla), pages 954–960. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. In *Advances in Neural Information Processing Systems*, volume 34, pages 4816–4828. Ella Rabinovich, Matan Vetzler, David Boaz, Vineet Kumar, Gaurav Pandey, and Ateret Anaby-Tavor. 2022. Gaining insights into unrecognized user utterances in task-oriented dialog systems. In *Conference on Empirical Methods in Natural Language* Processing (EMNLP). Adir Rahamim, Guy Uziel, Esther Goldbraich, and Ateret Anaby-Tavor. 2023. Text augmentation using dataset reconstruction for low-resource classification. In *forthcoming*. Gordon J. Ross. 2015. Parametric and nonparametric sequential change detection in r: The cpm package. Journal of Statistical Software, 66(3). Gordon J. Ross and Niall M. Adams. 2012. Two nonparametric control charts for detecting arbitrary distribution changes. *Journal of Quality Technology*, 44(2):102–116. Joung Woo Ryu, Mehmed M Kantardzic, Myung-Won Kim, and A Ra Khil. 2012. An efficient method of building an ensemble of classifiers in streaming data. In *Big Data Analytics: First International Conference, BDA 2012, New Delhi, India, December 24-26,* 2012. Proceedings 1, pages 122–133. Springer. Tegjyot Singh Sethi and Mehmed Kantardzic. 2017. On the reliable detection of concept drift from streaming unlabeled data. *Expert Systems with Applications*, 82:77–99. Eduardo J Spinosa, André Ponce de Leon F. de Carvalho, and João Gama. 2007. Olindda: A clusterbased approach for detecting novelty and concept drift in data streams. In *Proceedings of ACM symposium on applied computing*, pages 448–452. Yunli Wang and Cyril Goutte. 2018. Real-time change point detection using on-line topic models. In *Proceedings of the 27th international conference on* computational linguistics, pages 2505–2515. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. 
hueser-etal-2023-sharing
Sharing Encoder Representations across Languages, Domains and Tasks in Large-Scale Spoken Language Understanding
https://aclanthology.org/2023.acl-industry.43
Leveraging representations from pre-trained transformer-based encoders achieves state-of-the-art performance on numerous NLP tasks. Larger encoders can improve accuracy for spoken language understanding (SLU) but are challenging to use given the inference latency constraints of online systems (especially on CPU machines). We evaluate using a larger 170M parameter BERT encoder that shares representations across languages, domains and tasks for SLU compared to using smaller 17M parameter BERT encoders with language-, domain- and task-decoupled finetuning. Running inference with a larger shared encoder on GPU is latency neutral and reduces infrastructure cost compared to running inference for decoupled smaller encoders on CPU machines. The larger shared encoder reduces semantic error rates by 4.62% for test sets representing user requests to voice-controlled devices and 5.79% on the tail of the test sets on average across four languages.
# Sharing Encoder Representations Across Languages, Domains And Tasks In Large-Scale Spoken Language Understanding

Jonathan Hueser1,∗, Judith Gaspers1, Thomas Gueudre1, Chandana Satya Prakash1, Jin Cao2, Daniil Sorokin1, Quynh Do1, Nicolas Anastassacos1, Tobias Falke1, Turan Gojayev1, Mariusz Momotko1, Denis Romasanta Rodriguez1, Austin Doolittle1, Kartik Balasubramaniam1, Wael Hamza1, Fabian Triefenbach1, Patrick Lehnen1

1 Amazon Alexa AI 2 Apple

∗Corresponding author: hueserjh@amazon.de

## Abstract

Leveraging representations from pre-trained transformer-based encoders achieves state-of-the-art performance on numerous NLP tasks. Larger encoders can improve accuracy for spoken language understanding (SLU) but are challenging to use given the inference latency constraints of online systems (especially on CPU machines). We evaluate using a larger 170M parameter BERT encoder that shares representations across languages, domains and tasks for SLU compared to using smaller 17M parameter BERT encoders with language-, domain- and task-decoupled finetuning. Running inference with a larger shared encoder on GPU is latency neutral and reduces infrastructure cost compared to running inference for decoupled smaller encoders on CPU machines. The larger shared encoder reduces semantic error rates by 4.62% for test sets representing user requests to voice-controlled devices and 5.79% on the tail of the test sets on average across four languages.

## 1 Introduction

Spoken Language Understanding (SLU) plays an essential role in voice-controlled devices such as Amazon Alexa, Apple Siri and Google Assistant. Two commonly studied SLU subtasks are intent classification (IC) and slot filling (SF). While IC classifies an utterance into a set of pre-defined intents, SF aims to extract relevant slot information. For example, given an utterance "play madonna" IC should determine *PlayMusic* as the intent and SF should detect "madonna" as *Artist*. The two subtasks are often modeled jointly (Do et al., 2020; Chen et al., 2019; Guo et al., 2014). In large-scale SLU systems intents can be split into separate domains, allowing the same intents to exist in different domains and for domain-specific teams to work independently of each other. Consequently, multi-domain SLU models have been developed that additionally address the task of domain classification (DC). For example, relating back to the previous example, DC should detect *Music* as the domain. In this paper, we focus on a multi-domain system with domain-specific IC+SF models due to its advantages for large-scale SLU, such as the support for independent development across domains and the option of updating and deploying only certain domain models instead of the whole system.

Neural network models for IC+SF and DC tasks typically leverage language representations from finetuned language modeling encoders as features. Bidirectional Encoder Representations from Transformers (BERT) pre-trained on large corpora achieve state-of-the-art performance in SLU (Devlin et al., 2019; FitzGerald et al., 2022a; Chen et al., 2019). Larger encoders can give better performance on downstream tasks after being finetuned (Wang et al., 2020). However, increasing the number of encoder parameters for large-scale online SLU systems is challenging due to increased inference latency and infrastructure cost constraints.

In this paper we present a method for bringing the accuracy benefits of larger encoders to the users of large-scale SLU systems. We propose to share representations from a large frozen multilingual encoder as features for all of the DC and domain-decoupled IC+SF task heads across multiple languages. We compare a 170M parameter (not counting embeddings) shared encoder for four languages (German, Italian, Spanish, French) to a set of language-, domain- and task-decoupled 17M parameter encoders. For reference, the common BERT-base and BERT-large models are 87M and 306M parameters not counting the embeddings (110M and 340M parameters with embeddings). As a reference for infrastructure cost we consider inference of the language-, domain- and task-decoupled 17M parameter encoders on CPU machines, as GPU inference does not meet the infrastructure cost constraints of the specific industry SLU system that our baseline is based on. If the 170M parameter shared encoder inference is run on GPU and representations are cached then the inference latency is neutral compared to running the 17M parameter encoder inference on CPU. The infrastructure cost savings from not having to run a 17M parameter encoder inference for every domain and language on CPU pay for the infrastructure cost of the required GPU instances. In fact, the shared encoder architecture obtains infrastructure cost savings in practice.

To summarize, this paper contains the following contributions. We present a novel shared encoder SLU architecture that enables the use of a larger encoder to improve accuracy while staying inference latency and infrastructure cost neutral. We run large-scale experiments with both internal data of an industry SLU system (23 domains in four languages) and the public MASSIVE dataset (18 domains in four languages) (FitzGerald et al., 2022b). A 170M parameter shared encoder compared to 17M parameter decoupled encoders reduces semantic error rates by 4.62% for test sets representing user requests to voice-controlled devices and 5.79% on the tail of the test sets on average across four languages. We introduce a light-weight encoder that is trained together with the language- and domain-decoupled task heads to learn representations for training data unseen by the frozen shared encoder. We conduct empirical analyses that study the shared encoder setup in the context of feature expansion and distribution drift across a real-world SLU system release cycle.

## 2 Related Work

Sharing language encoder representations across tasks via multi-task learning has been a prominent research thread in recent years that has resulted in a vast literature. For example, Cer et al. (2018) evaluate a transformer encoder (Vaswani et al., 2017) as a universal sentence encoder trained across multiple natural language processing tasks without prior self-supervised pre-training. Liu et al. (2019) show that multi-task learning can be leveraged on top of self-supervised pre-training for BERT encoders to improve performance across multiple natural language understanding tasks. The multi-domain, multi-task approach to joint domain classification, intent classification and slot filling that we employ in our shared encoder pre-finetuning was already explored by Hakkani-Tür et al. (2016). Using a "text-to-text" approach, large language models (LLMs) have been demonstrated to be multi-task learners with cross-task generalization (Wang et al., 2022; Chung et al., 2022; Sanh et al., 2021), and frozen LLMs can also be adapted to different tasks via prompt tuning approaches (Lester et al., 2021).
Multilingual encoder representations for spoken language understanding are often motivated by cross-lingual transfer (Do and Gaspers, 2019; Xu et al., 2020). Zhang et al. (2021) evaluate a multi-head decoding architecture with a multilingual encoder and language-specific task heads for intent classification and slot labeling that is similar to our proposed shared encoder architecture.

The impact of scaling the parameter count in large language models and pre-trained models in general has been explored extensively by Kaplan et al. (2020); Brown et al. (2020); Radford et al.; Abnar et al. (2021). Distillation can be used to close some of the performance gap between smaller and larger language encoders while meeting inference latency constraints (Jiao et al., 2019; Wang et al., 2020; Soltan et al., 2021). But, for example, FitzGerald et al. (2022a) show that using larger distilled encoders reduces the semantic error rates for spoken language understanding, which motivates our exploration of a shared encoder architecture to enable encoder scaling.

## 3 Methodology

Our goal is to introduce a larger BERT encoder to increase accuracy for an SLU system with language-, domain- and task-decoupled IC+SF and DC models without regressions in inference latency and infrastructure cost. Section 3.1 provides context on practical challenges that need to be addressed by a real-world SLU system and that we consider for our proposed model architecture. We then describe how encoder representations are shared across languages, domains and tasks and explain our shared encoder model architecture in Section 3.2. In Section 3.3 we describe the impact on training, inference latency and infrastructure cost.

## 3.1 Challenges

In real-world SLU systems the data distribution keeps changing and new features such as intents or slots may be added over time. A common approach for feature expansion is via synthetic datasets which are then combined with the existing training data and subsequently used for model training. To support feature expansion and address distribution drift over time it is critical that representations available to the SLU task heads for DC and IC+SF are distinct enough to be able to learn the new data.

![2_image_0.png](2_image_0.png)

## 3.2 Model Architecture

Decoupled Baseline Our baseline is a multi-domain system with per-language multi-class domain classifiers and per-domain joint IC+SF models. Figure 1a illustrates that decoupled encoders are finetuned per-language, per-domain and per-task. The decoupling of languages and domains comes with the benefit of allowing teams to work on features for different domains and languages in parallel, and we aim to keep this benefit. To meet latency constraints for CPU inference the decoupled baseline uses smaller BERT encoders with 17M parameters (not counting embeddings), 4 layers, 768 units, 1200 hidden units, and 12 attention heads (Soltan et al., 2021). Before finetuning, the 17M parameter BERT encoder is distilled from the 2.3B parameter Stage2 Alexa teacher model BERT encoder using a generic language modeling objective (FitzGerald et al., 2022a).

Shared Encoder As an alternative to smaller decoupled encoders we evaluate sharing encoder representations of a larger encoder across languages, domains and tasks while keeping the benefit of decoupled languages and domains for the task heads (see Figure 1b).
For the shared encoder we use a larger BERT encoder with 170M parameters (not counting embeddings), 16 layers, 1024 units, 3072 hidden units, and 16 attention heads. To stay inference latency neutral compared to the smaller decoupled encoders on CPU machines we perform shared encoder inference on a separate GPU machine. As illustrated in Figure 2 for the example of IC+SF, the shared encoder representations are communicated to CPU machines running decoupled task head inference.

![2_image_1.png](2_image_1.png)

To address distribution drift and feature expansion, the decoupled components are extended with a tiny one-layer BERT encoder that we call light encoder. If training data for a new feature is added for a feature release without training a new shared encoder, then the shared encoder will not be familiar with the new feature and its representations may not be sufficient for task heads to learn the new feature. The light encoder is trained from scratch to learn useful representations for examples that the shared encoder was not trained on. The representation from the light encoder is concatenated to the representation from the shared encoder before being passed to the task heads. The SF head is a sequence labeling model that consumes the token level representations. The IC head is a classification model that consumes the sequence level representation. We use max-pooling to obtain the sequence level representation from the token level representation sequence produced by concatenating the shared and light encoder representations. In Figure 2 the max denotes max-pooling from the stack below that denotes the concatenated representation sequences.

## 3.3 Training And Inference

To finetune the shared encoder for SLU we introduce an additional pre-finetuning step that uses a multi-task DC+IC+SF objective on data from all domains and languages combined. Before pre-finetuning, the larger BERT encoder is distilled from the 2.3B parameter Stage2 Alexa teacher model BERT encoder using a generic language modeling objective (FitzGerald et al., 2022a). During training of the task heads and light encoder the shared encoder stays frozen to enable decoupling. Caching the frozen encoder outputs for training and validation examples can significantly reduce training time (Liu et al., 2021). Training the task heads and light encoder on top of a cached frozen shared encoder is faster than finetuning the decoupled smaller encoders in the baseline. We provide more details about reducing training time through caching in Appendix A.3. During online inference a cache for the encoder representations can also be used to cover a large percentage of the user traffic distribution and reduce inference latency at the corresponding percentile. For the tail of the traffic distribution that is not captured in the cached percentile, GPU inference is used for the shared encoder. In practice, colocated GPU inference with a cache for the shared encoder and CPU inference for the decoupled light encoders and task heads reduces the infrastructure cost compared to the decoupled baseline.

## 4 Results

## 4.1 Experimental Setup

Dataset We report results on both an internal dataset that is representative of user requests to voice-controlled devices and on the public MASSIVE dataset (FitzGerald et al., 2022b). For the internal dataset utterances were de-identified and annotated with intent, slot and domain labels. The data spans four languages and 23 domains per language.
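Before turning to the rest of the experimental setup, the head architecture of Section 3.2 can be made concrete with a minimal PyTorch-style sketch of the representation concatenation and pooling. The class name, the use of a generic Transformer layer in place of the actual one-layer BERT light encoder, the omission of the CRF layer on the SF head and the intent/slot label counts are our own illustrative assumptions; only the dimensionalities (1024-dimensional shared representations, a 320-unit light encoder with 1200 hidden units and 16 heads, 256-dimensional heads) follow the numbers reported in this paper and its appendix.

```python
import torch
import torch.nn as nn

class DecoupledHeadsWithLightEncoder(nn.Module):
    """Sketch: light encoder output concatenated with frozen shared encoder
    representations, max-pooled for IC, kept token-level for SF (CRF omitted)."""
    def __init__(self, shared_dim=1024, light_dim=320, num_intents=60, num_slot_tags=111):
        super().__init__()
        self.light_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=light_dim, nhead=16,
                                       dim_feedforward=1200, batch_first=True),
            num_layers=1)
        joint_dim = shared_dim + light_dim
        self.ic_head = nn.Sequential(nn.Linear(joint_dim, 256), nn.ReLU(),
                                     nn.Linear(256, num_intents))
        self.sf_head = nn.Linear(joint_dim, num_slot_tags)

    def forward(self, shared_reprs, light_token_embeddings):
        # shared_reprs: (batch, seq_len, shared_dim), produced by the frozen
        # 170M parameter shared encoder (possibly read from a cache).
        light_reprs = self.light_encoder(light_token_embeddings)  # (batch, seq_len, light_dim)
        joint = torch.cat([shared_reprs, light_reprs], dim=-1)    # per-token concatenation
        slot_logits = self.sf_head(joint)                         # token-level SF head
        pooled = joint.max(dim=1).values                          # max-pooling over tokens
        intent_logits = self.ic_head(pooled)                      # sequence-level IC head
        return intent_logits, slot_logits

heads = DecoupledHeadsWithLightEncoder()
intent_logits, slot_logits = heads(torch.randn(2, 16, 1024), torch.randn(2, 16, 320))
```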
The number of training, validation and test utterances is on the order of several million, several hundred thousand and at least several thousand, respectively. The *full* test sets are representative of the whole user requests distribution. The *tail* test sets are representative of the tail of the user requests distribution (low frequency utterances). The tail test sets were generated from the full test sets by filtering out utterances with a frequency greater than one. *Feature* test sets are representative of individual new features introduced in a release of the SLU system, where for simplicity each new feature introduces a new intent. The public MASSIVE dataset contains 1M professionally labeled virtual assistant utterances for 51 languages, 18 domains, 60 intents, and 55 slots, out of which we report overall and domain-wise results on four languages.

Metric For evaluation we report the relative Semantic Error Reduction Percentage (SemERR%) compared to the decoupled baseline model, where higher is better. Semantic error rate (SemER) measures intent classification and slot filling jointly and is defined as

$$\mathrm{SemER}={\frac{\#(\mathrm{slot+intent\;errors})}{\#\mathrm{slots\;in\;reference}+1}}\;.\qquad(1)$$

If the domain prediction is incorrect then all slot and intent predictions will be incorrect, except for cases where there is intent or slot overlap between domains. SemERR% is computed as $\mathrm{SemERR\%} = 1 - \mathrm{SemER}_{\mathrm{sha}}/\mathrm{SemER}_{\mathrm{dec}}$, where $\mathrm{SemER}_{\mathrm{dec}}$ is the semantic error rate of the decoupled baseline model and $\mathrm{SemER}_{\mathrm{sha}}$ is the semantic error rate of a shared encoder model.

## 4.2 Main Results

In Table 1 we report the performance of the shared encoder architecture as described in Section 3.2 on the full and tail test sets of the following four languages: German (DE), Italian (IT), Spanish (ES) and French (FR). The shared encoder architecture delivers consistent semantic error rate reductions across all languages. The error rate reductions on the tail test sets are larger than those on the full test sets, which demonstrates the larger encoder's ability to better generalize to harder user requests. In fact, SLU systems can often cover the head of the utterance distribution through deterministic rules, making a performance improvement on the tail test set more relevant for neural network models.

| Test set | DE | IT | ES | FR | Avg. |
|------------|------|------|------|------|--------|
| full | 4.75 | 3.38 | 5.77 | 4.59 | 4.62 |
| tail | 5.98 | 4.85 | 6.43 | 5.92 | 5.79 |

Table 1: SemERR% (↑) of the shared encoder architecture on the internal full and tail test sets for four languages.

In Table 2 we report the absolute semantic error rate performance of the decoupled and shared encoder architectures trained and evaluated on the same four languages in the public MASSIVE dataset. The shared encoder architecture again delivers consistent semantic error rate reductions across all languages.

| Language | dec | sha | SemERR% (↑) |
|------------|-------|-------|---------------|
| DE | 22.85 | 18.00 | 21.24 |
| IT | 23.42 | 20.07 | 14.31 |
| ES | 27.57 | 21.34 | 22.61 |
| FR | 25.33 | 19.40 | 23.43 |

Table 2: SemER for the decoupled (dec) and shared (sha) encoder architectures and the corresponding SemERR% (↑) on the MASSIVE dataset.

## 4.3 Analysis

In this section we conduct ablation studies to better understand the role of the light encoder, task-specific pre-finetuning and shared encoder size. We also explore feature expansion and the impact of encoder age, i.e. the time difference between shared encoder pre-finetuning and task head finetuning, which can cause distribution drift in the training data.
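As a small aside before the ablations, the metrics above can be made concrete with a short helper; the function names and the percentage convention are ours, following Eq. (1) and the SemERR% definition.

```python
# Sketch of the evaluation metrics in Section 4.1 (names are ours, not the paper's).
def semantic_error_rate(num_slot_errors: int, num_intent_errors: int,
                        num_reference_slots: int) -> float:
    # Eq. (1): SemER = #(slot + intent errors) / (#slots in reference + 1)
    return (num_slot_errors + num_intent_errors) / (num_reference_slots + 1)

def semantic_error_reduction_pct(semer_decoupled: float, semer_shared: float) -> float:
    # SemERR% = 1 - SemER_sha / SemER_dec, reported as a percentage
    return 100.0 * (1.0 - semer_shared / semer_decoupled)

# Example with the German MASSIVE numbers from Table 2: yields roughly 21.2.
print(semantic_error_reduction_pct(22.85, 18.00))
```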
The experiments in this section use the German training data for light encoder and task head finetuning, while the encoder pre-finetuning is always multi-lingual.

Ablations In Table 3 we report the performance of the shared encoder with light encoder (w/ LE), without light encoder (w/o LE), without pre-finetuning (w/o PFT) and using a 17M parameter encoder (17M params) instead of the 170M parameter encoder on the German full and tail test sets.

| Model | full | tail |
|------------|--------|--------|
| w/ LE | 4.75 | 5.98 |
| w/o LE | 4.14 | 5.43 |
| w/o PFT | -2.73 | -2.66 |
| 17M params | -2.12 | -2.18 |

Table 3: SemERR% (↑) on the German full and tail test sets for the ablation models.

Removing the light encoder only has a slight impact on performance on the full and tail test sets. For this evaluation the shared encoder has already seen the same training data during pre-finetuning that is used for the light encoder and task head finetuning, so we do not expect the light encoder to produce independent representations. The light encoder is motivated by the distribution drift and feature expansion that we analyze later in this section. Since the light encoder ablation does not control for model capacity, the improved performance from including the light encoder may simply be due to an increased number of parameters in the domain-decoupled model component. Removing the pre-finetuning step means that the encoder representations are not task-specific for SLU, which causes a performance regression on the full and tail test sets. Without pre-finetuning, the shared encoder is only pre-trained and distilled on generic masked language modeling, albeit on both public data and SLU utterances (see FitzGerald et al. (2022a) for details about Stage2 distillation). Using a 17M parameter encoder instead of the 170M parameter encoder means that the encoder does not have enough capacity to learn good representations for all domains in all four languages, which causes a performance regression on the full and tail test sets. The smaller 17M parameter encoder with pre-finetuning step only slightly outperforms the larger 170M parameter encoder without pre-finetuning step. For the 170M parameter encoder without pre-finetuning the performance regression on the tail test set is not as large as on the full test set, while for the 17M parameter encoder the opposite is the case. A possible reason for this difference is that generic representations from the larger encoder without pre-finetuning can help generalization, while the smaller encoder with pre-finetuning has to specialize on the seen SLU training data more and may not generalize as well.

Encoder Age To leverage caching and reduce costs, a frozen shared encoder should ideally remain deployed for several months without any updates. We pre-finetune encoders on training data of different age relative to the training data used for light encoder and task head finetuning to investigate the impact of keeping the same encoder deployed for longer time periods. In Table 4 we report the performance of the shared encoder pre-finetuned on zero (0mo), three (3mo) and six (6mo) months old training data (on the same zero months old test data in all settings).
We report results for shared encoder architectures with (w/ LE) and without (w/o LE) light encoder to see whether the light encoder mitigates the negative effect from distributional mismatches.

| Model | Test set | 0mo | 3mo | 6mo |
|---------|------------|-------|-------|-------|
| w/ LE | full | 4.75 | 4.44 | 4.34 |
| w/ LE | tail | 5.98 | 5.83 | 5.63 |
| w/o LE | full | 4.14 | 4.14 | 3.84 |
| w/o LE | tail | 5.43 | 5.63 | 5.23 |

Table 4: SemERR% (↑) on the German full and tail test sets for shared encoders pre-finetuned on training data of different age.

With the light encoder there is a small but consistent and monotone decline of performance improvement with encoder age for both the full and tail test sets. Without the light encoder the performance decline with encoder age is not as consistent. For example, the three months old encoder is on par with the zero months old encoder on the full test set and even better on the tail test set. Such effects can be caused by seasonal changes in the training data that may not be represented in the test data. The light encoder consistently improves the performance for all encoder ages on both the full and tail test sets and, hence, seems to be able to fill in some of the gaps of the shared encoder.

Feature Expansion For our feature expansion evaluation we consider the case of adding three completely new intents. In Table 5 we report the performance of the shared encoder for the feature expansion in both a worst-case and a best-case scenario in relation to feature data availability during pre-finetuning. The reported scores are the unweighted average SemERR% of three separate feature test sets for the three new feature intents. For the best-case scenario (PFT w/ feat) the encoder pre-finetuning training data is the same as that for the finetuning step and includes the three new feature intents. For the worst-case scenario (PFT w/o feat) the encoder is pre-finetuned on training data excluding all three new feature intents. The new feature intents are added back into the training data for the light encoder and task head finetuning step. We report results for shared encoder architectures with (w/ LE) and without (w/o LE) light encoder to see whether the light encoder mitigates the negative effect of not having seen the new features during encoder pre-finetuning. For the worst-case scenario we also test upsampling (+ upsamp) one of the features by doubling its training data during the finetuning step in order to better enable the light encoder to learn this feature. We upsample data for the feature with the smallest amount of training data for the shared encoder model but not the baseline. For the best-case scenario of a shared encoder with all feature data during pre-finetuning there is a performance improvement on the feature test sets both with and without light encoder, meaning a larger shared encoder can help learn new features better. However, without feature data during pre-finetuning and without light encoder there is a large regression on the feature test sets, meaning that the shared encoder representations are not conducive to learning the new feature quickly in the worst-case scenario. Both the light encoder and upsampling the smallest feature individually help reduce the regression but do not remove it completely. When combining the light encoder with the upsampling, a performance improvement on the new features is obtained even for the worst-case scenario of not having seen the new features during shared encoder pre-finetuning.
| Model | PFT w/o feat | PFT w/ feat | |-----------------|----------------|---------------| | w/ LE | -12.93 | 3.58 | | w/o LE | -66.68 | 1.16 | | w/ LE + upsamp | 5.18 | | | w/o LE + upsamp | -16.40 | | ## 5 Conclusions We present a novel shared encoder architecture that enables the use of larger encoders in a realworld SLU system while staying inference latency and infrastructure cost neutral. By sharing representations from a larger encoder across languages, domains and tasks the semantic error rates of the SLU system can be reduced consistently across languages for test sets representing both the full user request distribution and its tail. Our empirical analyses reveal that a light-weight encoder can be used in combination with the shared encoder architecture to avoid retraining the frozen shared encoder for every new feature release. ## Limitations In this paper we compare a shared encoder architecture for SLU to a baseline architecture that was chosen based on the specific latency and cost constraints of an industry SLU system. Since encoder model sizes were chosen based on specific constraints the results may not be directly comparable to model sizes more commonly used in the literature such as BERT-large and BERT-base. We expect the general benefit and order of magnitude of accuracy improvements shown in our evaluations to transfer to comparable setups with different parameters. The primary focus of this paper is on accuracy improvements and addressing challenges of realworld SLU systems such as distribution drift and feature expansion. We do not elaborate on the details of the computational cost and inference aspects. A detailed analysis of compute cost and benchmarks of CPU and GPU inference would better highlight the infrastructure cost benefits of a shared encoder architecture for SLU. Regarding the multi-lingual aspect of the encoder we only tested a single grouping of similar European languages (German, French, Italian and Spanish). A more extensive analysis of different language groups would demonstrate that similar trade-offs seen in other works on multi-lingual language models also apply for the shared encoder architecture. ## Ethics Statement The shared encoder architecture proposed in this paper significantly reduces compute infrastructure cost of large-scale SLU systems in practice. A large absolute compute infrastructure cost reduction implies a positive environmental impact due to less power consumption. ## References Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, and Hanie Sedghi. 2021. Exploring the limits of large scale pre-training. *arXiv preprint arXiv:2110.02095*. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for english. In *Proceedings of the 2018 conference on empirical methods in natural language processing: system* demonstrations, pages 169–174. Q. Chen, Z. Zhuo, and W. Wang. 2019. Bert for joint intent classification and slot filling. *arXiv:1902.10909*. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. 
arXiv preprint arXiv:2210.11416. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. *Advances in* neural information processing systems, 32. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota. Association for Computational Linguistics. Quynh Do, Judith Gaspers, Tobias Roeding, and Melanie Bradford. 2020. To what degree can language borders be blurred in BERT-based multilingual spoken language understanding? In *Proceedings of* the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online). International Committee on Computational Linguistics. Quynh Ngoc Thi Do and Judith Gaspers. 2019. Crosslingual transfer learning for spoken language understanding. In *ICASSP 2019-2019 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5956–5960. IEEE. Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, et al. 2022a. Alexa teacher model: Pretraining and distilling multibillion-parameter encoders for natural language understanding systems. In *Proceedings of the 28th* ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2893–2902. Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, et al. 2022b. Massive: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages. *arXiv preprint* arXiv:2204.08582. Daniel Guo, Gokhan Tur, Wen-tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classification and slot filling with recursive neural networks. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 554–559. IEEE. Dilek Hakkani-Tür, Gökhan Tür, Asli Celikyilmaz, YunNung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In *Interspeech*, pages 715–719. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. *arXiv preprint arXiv:1909.10351*. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *arXiv* preprint arXiv:2001.08361. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487–4496. Yuhan Liu, Saurabh Agarwal, and Shivaram Venkataraman. 2021. Autofreeze: Automatically freezing model blocks to accelerate fine-tuning. *arXiv* preprint arXiv:2102.01386. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. 
Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. *arXiv preprint arXiv:2110.08207*.

Saleh Soltan, Haidar Khan, and Wael Hamza. 2021. Limitations of knowledge distillation for zero-shot transfer learning. In *Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing*, pages 22–31.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in Neural Information Processing Systems*, 30.

Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. *Advances in Neural Information Processing Systems*, 33:5776–5788.

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, et al. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 5085–5109.

Weijia Xu, Batool Haider, and Saab Mansour. 2020. End-to-end slot alignment and recognition for cross-lingual NLU. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 5052–5063.

Daniel Yue Zhang, Jonathan Hueser, Yao Li, and Sarah Campbell. 2021. Language-agnostic and language-aware multilingual natural language understanding for large-scale intelligent voice assistant application. In *2021 IEEE International Conference on Big Data (Big Data)*, pages 1523–1532. IEEE.

## A Appendix

## A.1 Implementation Details

Model Architecture The light encoders are one-layer BERT encoders with 320 units, 1200 hidden units, and 16 attention heads. The task heads are one-layer feed-forward networks with hidden dimension 256 and dropout 0.3 for DC, two-layer feed-forward networks with hidden dimension 256 and dropout 0.5 for IC, and two-layer feed-forward networks with hidden dimension 256 and dropout 0.2 for SF, and the SF head uses a CRF layer.

Training The shared encoder pre-finetuning uses a multi-task DC, IC and SF training loss with equal weights. Pre-finetuning uses Adam with the Noam learning rate scheduler (Vaswani et al., 2017, Section 5.3), a learning rate multiplier of 0.4 and a mini-batch size of 128. During pre-finetuning the task heads are a two-layer feed-forward network with hidden dimension 256 with dropout 0.3 for DC, 0.5 for IC and 0.2 for SF, and the SF head does not use a CRF layer. To mix the training data from different languages during pre-finetuning we use a temperature-based rebalancing approach with language weights as given in Conneau and Lample (2019, Section 3.1) with $\alpha = 0.5$, i.e. with language weights $q_i = p_i^{\alpha} / \sum_j p_j^{\alpha}$, where $p_i = n_i / \sum_k n_k$ and $n_i$ is the number of utterances of language $i$. The decoupled finetuning trains a single multi-class DC model and uses a per-domain multi-task IC and SF training loss with equal weights for the domain-specific joint intent/slot labelling model. Finetuning uses Adam with the Noam learning rate scheduler (Vaswani et al., 2017, Section 5.3), a learning rate multiplier of 0.1 for DC, 0.5 for IC+SF and a mini-batch size of 256 for the decoupled encoders.
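The temperature-based language rebalancing described above can be written as a small helper; the utterance counts in the example call are invented for illustration.

```python
# Sketch of the language sampling weights q_i = p_i^alpha / sum_j p_j^alpha
# with p_i = n_i / sum_k n_k and alpha = 0.5 (Conneau and Lample, 2019).
def language_sampling_weights(counts: dict, alpha: float = 0.5) -> dict:
    total = sum(counts.values())
    p = {lang: n / total for lang, n in counts.items()}
    z = sum(pi ** alpha for pi in p.values())
    return {lang: (pi ** alpha) / z for lang, pi in p.items()}

# Made-up per-language utterance counts, only to show the rebalancing effect.
print(language_sampling_weights({"de": 4_000_000, "it": 1_000_000,
                                 "es": 2_000_000, "fr": 3_000_000}))
```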
The shared encoder models use a learning rate of 0.5 across tasks and a mini-batch size of 256 for the light encoder and task head training. The baseline model with the decoupled encoder setup uses frozen embeddings and gradual unfreezing to a learning rate multiplier of 1.0 for the encoder weights for DC and a learning rate multiplier of 0.01 for the embeddings and gradual unfreezing to a learning rate multiplier of 0.1 for the encoder weights for IC+SF. The encoder dropout is set to 0.1 for the decoupled baseline. The shared encoder model is trained with frozen encoder and encoder dropout is disabled. ## A.2 Domain-Wise Results In Table 6 we report domain-wise absolute semantic error rate performance of the decoupled and shared encoder architectures trained and evaluated on German (DE), Italian (IT), Spanish (ES) and French (FR) in the public MASSIVE dataset. The shared encoder architecture outperforms the decoupled baseline on 67 out of 72 domain/language pairs. ## A.3 Training Cost Reduction Building an SLU model on top of a frozen shared encoder gives the opportunity to optimize training cost and latency. Given that the shared encoder is frozen and not updated over the course of training and neural network models are trained over a fixed data set for multiple epochs, the shared encoder is redundantly engaged during the forward pass of each epoch. We eliminate latency overhead and improve training time by caching the output of the shared encoder prior to training downstream SLU components like the light encoder and task heads. By storing the encoder representations of the training and validation data sets, we are engaging the shared encoder only once and subsequent training, forward/backward pass and parameter update, is limited to the light encoder and task heads. Figure 3 on the right shows the steps of training SLU models with a shared encoder cache. In the first step, we run inference on the shared encoder and store the encoder representations on disk. In the following step, we train the language- and domain-decoupled task heads with cached encoder representations as inputs. 
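A minimal sketch of this two-step cached training, assuming a generic tensor-valued encoder callable and a generic head module, is shown below; the function names, file path, epoch count and loss are illustrative assumptions, not the authors' implementation.

```python
import torch

@torch.no_grad()
def build_cache(shared_encoder, dataloader, path="shared_reprs.pt"):
    # Step 1: run the frozen shared encoder once and store its outputs on disk.
    # Assumes each call returns a fixed-shape tensor so the outputs can be concatenated.
    shared_encoder.eval()
    reprs = [shared_encoder(batch) for batch in dataloader]
    torch.save(torch.cat(reprs, dim=0), path)

def train_heads_from_cache(heads, labels, path="shared_reprs.pt", epochs=10, lr=1e-3):
    # Step 2: only the light encoder and task heads are updated; the cached
    # representations are plain inputs, so no forward pass through the shared
    # encoder is needed during these epochs.
    reprs = torch.load(path)
    optimizer = torch.optim.Adam(heads.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits = heads(reprs)
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```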
| Domain | DE dec | DE sha | DE ∆ | IT dec | IT sha | IT ∆ | ES dec | ES sha | ES ∆ | FR dec | FR sha | FR ∆ |
|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Alarm | 18.99 | 11.73 | 38.23 | 18.89 | 12.78 | 32.35 | 23.6 | 15.73 | 33.35 | 16 | 17.14 | -7.13 |
| Audio | 43.24 | 17.57 | 59.37 | 35.14 | 14.86 | 57.71 | 22.97 | 17.57 | 23.51 | 33.78 | 18.92 | 43.99 |
| Calendar | 23.08 | 21.57 | 6.54 | 27.94 | 25.71 | 7.98 | 26.84 | 24.62 | 8.27 | 26.79 | 24.87 | 7.17 |
| Cooking | 27.27 | 22.38 | 17.93 | 27.97 | 25.17 | 10.01 | 25.17 | 20.98 | 16.65 | 28.47 | 25.69 | 9.76 |
| Datetime | 18.09 | 17.59 | 2.76 | 22.73 | 18.69 | 17.77 | 24.37 | 14.72 | 39.6 | 20.6 | 13.57 | 34.13 |
| Email | 17.32 | 13.81 | 20.27 | 18.31 | 14.81 | 19.12 | 25.63 | 17.02 | 33.59 | 18.14 | 13.4 | 26.13 |
| General | 18.22 | 14.13 | 22.45 | 18.59 | 15.99 | 13.99 | 20.82 | 15.99 | 23.2 | 18.66 | 13.81 | 25.99 |
| Iot | 17.3 | 14.05 | 18.79 | 20.54 | 17.3 | 15.77 | 18.75 | 16.85 | 10.13 | 16.22 | 15.68 | 3.33 |
| Lists | 19.72 | 15.49 | 21.45 | 23 | 22.07 | 4.04 | 24.06 | 26.42 | -9.81 | 19.91 | 18.01 | 9.54 |
| Music | 29.7 | 14.85 | 50 | 23.76 | 18.81 | 20.83 | 30.69 | 19.8 | 35.48 | 24.75 | 16.83 | 32 |
| News | 24.89 | 22.36 | 10.16 | 29.11 | 26.16 | 10.13 | 35.02 | 27.85 | 20.47 | 29.87 | 24.68 | 17.38 |
| Play | 34.55 | 32.66 | 5.47 | 34.91 | 29.51 | 15.47 | 40 | 38.34 | 4.15 | 35.63 | 35.34 | 0.81 |
| QA | 19.07 | 13.18 | 30.89 | 21.3 | 16.02 | 24.79 | 20.12 | 16.02 | 20.38 | 20.45 | 18.18 | 11.1 |
| Recomm. | 32.82 | 28.21 | 14.05 | 33.33 | 31.28 | 6.15 | 46.88 | 29.69 | 36.67 | 47.18 | 30.26 | 35.86 |
| Social | 17.62 | 11.89 | 32.52 | 11.45 | 11.89 | -3.84 | 18.06 | 13.22 | 26.8 | 17.62 | 11.01 | 37.51 |
| Takeaway | 32.8 | 31.2 | 4.88 | 28.57 | 29.37 | -2.8 | 28.57 | 30.95 | -8.33 | 26.98 | 25.4 | 5.86 |
| Transport | 18.3 | 15.36 | 16.07 | 21.24 | 16.99 | 20.01 | 21.85 | 17.55 | 19.68 | 18.95 | 14.71 | 22.37 |
| Weather | 15.54 | 14.41 | 7.27 | 18.93 | 15.54 | 17.91 | 22.82 | 15.21 | 33.35 | 19.89 | 18.75 | 5.73 |
| Overall | 22.85 | 18 | 21.24 | 23.42 | 20.07 | 14.31 | 27.57 | 21.34 | 22.61 | 25.33 | 19.4 | 23.43 |

Table 6: SemER for the decoupled (dec) and shared (sha) encoder architectures and corresponding SemERR% (↑) denoted by ∆ evaluated on the MASSIVE dataset for four languages.

![9_image_0.png](9_image_0.png)

Figure 3: Steps of training SLU models with a shared encoder cache.
tabatabaei-etal-2023-annotating
Annotating Research Infrastructure in Scientific Papers: An NLP-driven Approach
https://aclanthology.org/2023.acl-industry.44
In this work, we present a natural language processing (NLP) pipeline for the identification, extraction and linking of Research Infrastructure (RI) used in scientific publications. Links between scientific equipment and publications where the equipment was used can support multiple use cases, such as evaluating the impact of RI investment, and supporting Open Science and research reproducibility. These links can also be used to establish a profile of the RI portfolio of each institution and associate each equipment with scientific output. The system we are describing here is already in production, and has been used to address real business use cases, some of which we discuss in this paper. The computational pipeline at the heart of the system comprises both supervised and unsupervised modules to detect the usage of research equipment by processing the full text of the articles. Additionally, we have created a knowledge graph of RI, which is utilized to annotate the articles with metadata. Finally, examples of the business value of the insights made possible by this NLP pipeline are illustrated.
# Annotating Research Infrastructure In Scientific Papers: An NLP-Driven Approach

Seyed Amin Tabatabaei, Georgios Cheirmpos, Marius Doornenbal, Alberto Zigoni, Veronique Moore, and Georgios Tsatsaronis
Elsevier
s.tabatabaei@elsevier.com

## Abstract

In this work, we present a natural language processing (NLP) pipeline for the identification, extraction and linking of Research Infrastructure (RI) used in scientific publications. Links between scientific equipment and publications where the equipment was used can support multiple use cases, such as evaluating the impact of RI investment, and supporting Open Science and research reproducibility. These links can also be used to establish a profile of the RI portfolio of each institution and associate each equipment with scientific output. The system we are describing here is already in production, and has been used to address real business use cases, some of which we discuss in this paper. The computational pipeline at the heart of the system comprises both supervised and unsupervised modules to detect the usage of research equipment by processing the full text of the articles. Additionally, we have created a knowledge graph of RI, which is utilized to annotate the articles with metadata. Finally, examples of the business value of the insights made possible by this NLP pipeline are illustrated.

## 1 Introduction

According to the definition adopted by the European Commission (European Commission et al., 2012), Research Infrastructure (RI) refers to "facilities, resources and related services that are used by the scientific community to conduct top-level research in their respective fields and foster innovation". A similar concept, which is more commonly used in the United States, is *"Research Core"* (Bai and Schonfeld, 2021).1 RI plays a crucial role in conducting high-quality research, with significant financial resources invested every year; for example, European countries have invested over 10 billion EUR every year in the period 2014 − 2020 (European Commission et al., 2019), while UK Research and Innovation (UKRI), the main public research funding agency in the United Kingdom, has announced in its *Corporate Plan* for the years 2022 − 2025 to increase the RI investments by at least £200 million every year, to reach over £1.1 billion in 2024 to 2025 (UKRI, 2022). It is, therefore, extremely important for all stakeholders in the research landscape to assess the impact of such investments. Various frameworks for impact evaluation have been proposed in the past (OECD, 2019; Griniece et al., 2020) and they all include scientific outputs, particularly publications in peer-reviewed journals, as an important facet of impact.

1 We will use *"RI"* throughout the text to refer to both concepts.

There are several challenges in tracking research outputs enabled by RI, such as the lack of a standard approach to recognize contributions of facility managers and staff scientists (Bai and Schonfeld, 2021), or the fact that sometimes it is not even considered appropriate to include them as co-authors (Hockberger et al., 2018). Another important issue is that the contribution of RI to the research project is mostly found in the full text of publications, usually in sections named *"Materials and Methods"*, *"Experimental Setup"*, or similar. This means that abstract and indexing databases such as PubMed, Scopus or *Web of Science*, which don't index the full text of records, are of limited help in this scenario.
Other approaches, such as assigning persistent identifiers to scientific instruments and referencing them in the manuscript (Stocker et al., 2020), while in principle effective for new publications, require widespread adoption among publishers, as well as time and effort to create a database of equipment records. For all these reasons, the identification of links between publications and RI remains largely a manual and inefficient task (Strubczewski, 2019).

In this work we present a solution to the problem of identifying and linking RI in the text of scientific publications, introducing a pipeline designed to connect scientific publications with the RI utilized in the respective works. To the best of our knowledge, this is the first comprehensive solution to tackle this challenge end-to-end, and to be brought into a production environment. The pipeline of the solution utilizes state-of-the-art few-shot learning algorithms to train our machine learning models using a limited labeled dataset. By employing these cutting-edge techniques, we were able to achieve very insightful results for research stakeholders, despite the constraints of a small training set.

The remainder of the paper is organized as follows: the key user problems that this solution is addressing are discussed in Section 2. Section 3 provides an overview of the solution's architecture, followed by a detailed description of each module in Section 4. Section 5 presents evaluation results and analysis. Section 6 highlights the concrete business impact of the proposed system, and in Section 7 we conclude and provide pointers to future work.

## 2 Description Of User Problems

Working with academic institutions and funding agencies with a specific interest in RI, we have identified several use cases where being able to link RI (as an input to research) with research outputs (in particular scientific publications) can provide valuable insights to a broad range of stakeholders such as policy makers, academic leaders, researchers and technical staff, as well as the general public. Here are some representative use cases: (1) supporting decision making processes and investment planning about RI with a quantitative, evidence-based approach that complements qualitative insights based on expert opinion; (2) showcasing RI to attract top talent: institutions can promote themselves as a destination for the best researchers in various fields by showcasing the state-of-the-art instruments available at their research facilities; (3) promoting collaboration at the local, national and regional level, as well as across disciplines and sectors (for example academic-corporate collaborations); (4) supporting Open Science and Big Science, by promoting transparency and accountability, particularly for large RIs; (5) improving the reproducibility of research, by providing useful information about the equipment used in research works.

## 3 Overview Of The Proposed Solution

This section provides an overview of the proposed solution, including the input requirements and desired output of the pipeline. We also discuss the dataset collected for training the NLP components. An overview of the solution's pipeline is illustrated in Figure 1.

![1_image_0.png](1_image_0.png)

The first part of the pipeline (unsupervised process) is composed of fully automated modules, while the latter (supervised process) requires supervision by a Subject Matter Expert (SME) to guarantee high-quality results.
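To summarize how the automated modules fit together before describing them in detail (Sections 3.1–4.6), the following is a schematic Python sketch of the unsupervised part of the pipeline. Every helper function here is a trivial stand-in for the corresponding trained component and is named by us; none of it is actual code from the system.

```python
from typing import List, Tuple

def split_into_sentences(text: str) -> List[str]:
    return [s.strip() for s in text.split(".") if s.strip()]   # stand-in for Stanza (Section 4.1)

def classifies_as_ri_usage(sentence: str) -> bool:
    return "microscope" in sentence.lower()                    # stand-in for the BERT classifier (4.2)

def extract_ri_entities(sentence: str) -> List[str]:
    return ["JEOL JEM-2010"] if "JEM-2010" in sentence else [] # stand-in for the NER model (4.3)

def cluster_mentions(mentions: List[str]) -> List[List[str]]:
    return [mentions] if mentions else []                      # stand-in for the clustering step (4.4)

def process_publication(full_text: str, doi: str) -> List[Tuple[List[str], str]]:
    mentions: List[str] = []
    for sentence in split_into_sentences(full_text):
        if classifies_as_ri_usage(sentence):
            mentions.extend(extract_ri_entities(sentence))
    # Clusters are later reviewed by an SME and linked to the RI knowledge
    # graph (Sections 4.5-4.6), yielding (RI cluster, DOI) pairs.
    return [(cluster, doi) for cluster in cluster_mentions(mentions)]

print(process_publication(
    "Images were acquired with a JEOL JEM-2010 microscope. We thank the reviewers.",
    "10.1000/example"))
```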
## 3.1 Input Specification

The system requires two inputs: (i) the text of academic publications, in which to identify mentions of RIs, and (ii) a knowledge graph of RIs, which is a list of equipment with specified attributes. This list can be customized to a specific research center/university by using their research information management system (e.g., PURE2) or equipment/lab management systems (e.g., ClusterMarket3).

2 https://www.elsevier.com/solutions/pure
3 https://clustermarket.com/

## 3.1.1 Full Text Of Publications

The system can handle input text in several formats, namely plain-text, XML, and PDF. The section tags of the XML files, when available, can also be utilized for selecting specific sections to process. To transform PDF files into plain-text, the *TIKA* Python library was employed (Apache Software Foundation, 2021).

## 3.1.2 RI Knowledge Graph

A *Research Infrastructure Knowledge Graph* (KG) is also required, to formalize the representation of participating research institutions, their facilities, equipment vendors and equipment. The equipment facet is organized in broad categories such as Measuring equipment, hosting a poly-hierarchy of equipment types, such as *Spectrophotometer*, with equipment models as leaf nodes, e.g., NanoDrop ND-1000. Each equipment model is linked to: its equipment type(s), the facility and research institution it is located in, its vendor, the original research institution's local identifier, and its related method (e.g., *Spectrophotometry* in our previous example). The KG has been built iteratively and is updated frequently based on customers' needs. After an initialization based on generic lists of equipment types used by the first participating universities from a pilot that was conducted, each customer gives us their actual list of equipment models and types and we place them accordingly in the KG: for each customer, we expand the equipment types hierarchy, using sub-string matching and transformer-based clustering methods (using *BERT*) to identify where to automatically place the new RI instances.

## 3.2 Output Specification

The final output of the pipeline is a table connecting RIs to relevant publications (e.g., DOIs). It is important to note that the relationship between RIs and publications is *many-to-many*, meaning that a single RI can appear in multiple studies and multiple RIs can be used in one study. The resulting table can serve as the foundation for different dashboards, analyses and decision support systems.

## 3.3 Datasets

The lack of widely available training data for the NLP modules of the pipeline, and the cost of compiling large new data sets for the task, have led us to assemble a small dataset to use in a few-shot learning fashion. We used 103 research publications, with 78 being held for training and 25 for model evaluation. To train and test the sentence classifier, all sentences in these publications containing at least one RI were labeled as positive and the rest as negative. However, this resulted in a heavily biased dataset, with less than 3% of the sentences being labeled as positive. The final dataset comprises more than 14K sentences, two-thirds of which were utilized for training the models, while the remaining third was reserved for evaluation purposes. To train the Named Entity Recognition component, we used 354 sentences with annotations provided at the word level. This dataset includes 494 RIs.
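For illustration, a word-level NER training example in this style might look as follows; the B-EQ/I-EQ/O tag names follow the tag set reported later in Table 3, while the sentence and its labels are invented by us and are not taken from the actual dataset.

```python
# Invented example of a BIO-labeled sentence for the NER training data.
tokens = ["Samples", "were", "imaged", "with", "a", "JEOL", "JEM-2010",
          "transmission", "electron", "microscope", "."]
labels = ["O", "O", "O", "O", "O", "B-EQ", "I-EQ",
          "I-EQ", "I-EQ", "I-EQ", "O"]
assert len(tokens) == len(labels)
```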
Using the tokenization process described in subsection 4.1, each word-token was matched to its corresponding label in BIO format (Ramshaw and Marcus, 1999). ## 4 Modules Of Proposed Solution This section provides the details of each of the components in the pipeline illustrated as blue boxes in Figure 1. ## 4.1 Sentence Splitter The initial step in identifying mentions of RIs is to split the full text of a publication into sentences. We use the *Stanza* Python library (Qi et al., 2020) for sentence splitting, as it has very high reported accuracy, but slow processing time. This choice is crucial for correctly identifying RI mentions, as RI names often contain punctuation that could lead to incorrect results with regular expression methods. Additionally, *Stanza* considers not only punctuation, but also contextual meaning, making it more precise, which comes at the cost of slower processing speed compared to other approaches. ## 4.2 Sentence Classifier The next step is to identify sentences that discuss the usage of RIs using the trained sentence classifier. This step is crucial as not all references to RIs are related to their usage in the current research; for example, authors may compare their work with others'. For the sentence classification objective a *BERT* for sequence classification, namely SciBERT-baseuncased (Beltagy et al., 2019) pretrained model (Devlin et al., 2018; Wolf et al., 2019) was used under a contrastive loss (CL) objective (Gunel et al., 2020). The sentence classifer attempts to differentiate between samples that not only contain an RI, but also express usage of an RI as context. This improves the overall precision of the model as well as providing valid predictions for the NER module. For the loss, it is known that cross entropy by itself is a weak measurement of loss in a few-shot set-up where labeled data is limited (Dodge et al., 2020; Zhang et al., 2020), thus, we used a normalized summation of the *Cross Entropy loss* and *Supervised Contrastive loss*. Contrastive loss is a technique used in few-shot learning to train models by maximizing similarity between the representations of samples from the same class and minimizing similarity between the representations of samples from different classes. The model was trained for 20 epochs with a batch size of 64, on a 70 : 30 split. Learning rate was 1e − 5. Input tokenized vector size per sentence was of maximum length 128. The contrastive loss setting temperature was set to 0.3 and the λ parameter to 0.9. ## 4.3 Ner Once the sentences discussing the usage of RIs in the research have been identified, a NER component is employed to extract of the RI entities within these sentences. For the entity detection objective a *BERT* for token classification, namely bert-baseuncased pretrained model with a similar contrastive loss objective per above, was used. In this case, the contrast is introduced on the word-token level of the sentence. While the data are scientific publications, parts of the name of an RI can also be found in the common language and there is no base rule for referencing it. It is important to identify, based on the context, which token belongs to an RI and group them together. The larger and more diverse training corpus of BERT-base makes it more sensitive to a broader range of linguistic pattern and contexts over SciBERT which is exclusively trained on computer science and biomedical publications. 
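A hedged sketch of how the splitting step (Section 4.1) and this filtering step can be chained is shown below; the Stanza and transformers pipeline calls are standard usage, while the classifier checkpoint path and the positive label name are placeholders of ours, not the authors' artifacts.

```python
import stanza
from transformers import pipeline

stanza.download("en")                                   # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize")  # sentence segmentation
clf = pipeline("text-classification",
               model="path/to/finetuned-scibert")        # hypothetical fine-tuned checkpoint

def ri_usage_sentences(full_text: str):
    sentences = [s.text for s in nlp(full_text).sentences]
    preds = clf(sentences)                               # one {"label", "score"} dict per sentence
    return [s for s, p in zip(sentences, preds)
            if p["label"] == "RI_USAGE"]                 # assumed positive label name
```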
Empirically, BERT-base was a better candidate for fine-tuning on the downstream NER task due to the representation capturing general language knowledge, despite SciBERT outperforming it in various benchmarks. As for the word level tokens, the B token in this case not only assists in not confusing the individual RIs that were found but also visualizes the model's behaviour on the boundaries it identifies between tokens surrounding the RI, which is done with the assistance of contrastive loss. This model was trained for 28 epochs with a batch size of 8. Input tokenized vector size per sentence was of maximum length 128. Learning rate was 5e − 5.The contrastive loss setting temperature was set to 0.5 and the λ parameter to 0.8. ## 4.4 Clustering Of Annotations The application of the sentence classifier and NER modules on the input documents results in a large number of mentions of RIs. Due to the variety by which authors cite or quote the equipment used, some of these mentions may match the official names of RIs in a provided supplied equipment list, while others may not. To accurately match these alternative names to a specific RI, we apply a clustering algorithm to group them together. A three-step divide and conquer strategy has been developed to guarantee the correctness of the clustering process. By doing this, we can make sure that references of RIs with different vendor names or model numbers do not fall into the same cluster. This approach is as follows: 1. Group all mentions based on the vendor name mentioned in them. There is also a separate group for mentions without a vendor name. 2. Group the items within each group from step 1 based on the longest word token that contains at least one digit. This substring usually represents the equipment's model number. 3. Each group from step 2 is clustered using *Kmeans* clustering on the *TF-IDF* representation of the mentions. To find the optimal number of clusters in each group, the *Silhouette* score is maximized. An example of resulted clusters is presented in table 1 ## 4.5 Knowledge Graph Enrichment The equipment list from a given university, mapped to our KG's unique identifiers, is used to identify mentions of these pieces of equipment in research | mentions of RI transmission electron microscope (TEM, JEOL JEM-2010) JEM-2010 microscope (JEOL, Japan) JEOL JEM-2010 electron JEOL JEM-2010 electron microscope High-resolution transmission electron microscopy (HR-TEM) system (JEOL, JEM-2010) JEOL JEM-2010 Selected area electron diffraction (SAED, JEOL. JEM-2010, 200.0 KV) JEOL JEM-2010 transmission electron microscope | Tag | Precision | Recall | F1 | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------|-------------|----------|------| | phrase level. | | | | | Tag Precision Recall F1 O 0.88 0.88 0.88 B-EQ 0.93 0.94 0.93 I-EQ 0.98 0.97 0.98 Macro average **0.93 0.93 0.93** EQ (phrase level) 0.76 0.77 0.77 Table 3: Performance of NER model at tag level and phrase level. Table 1: An example cluster of mentions of RI. Table 2: Performance of the Sentence classifier model. | Accuracy | Precision | Recall | F1 | |------------|-------------|----------|---------| | 0.99227 | 0.86734 | 0.7798 | 0.82125 | articles. 
These mentions are clustered as described in 4.4. Each cluster is carefully reviewed by a Subject Matter Expert and possibly edited before being added to the KG as synonyms for the equipment data point it is mapped to. Our latest KG version contains over 1, 500 pieces of equipment (types and models) and over 2, 500 vendors. ## 4.6 Annotation For extracting the list of RI mentions in a given document set we combine three sources of equipment names into a single vocabulary for text matching: (i) reference vocabulary, cf. 3.1.2; (ii) terms found in the input texts, cf. 4.4; (iii) user-submitted list of equipment of interest. Research institutions or funders have an interest in tracking the usage of equipment that they own or manage and will submit a list of RI that reflects that interest. This list will be matched against the enriched vocabulary, resulting in a final reference list of RI, containing many name variants, and formatted in a way suitable for use in the annotation tool; we employ the annotation tool FPS (Fingerprint Services) that is described by Kohlhof et al. (2014). Applying the RI annotation as a final stage to the process accomplishes several things: (1) it integrates the knowledge accumulated in previous stages; (2) drawing on the FPS capabilities, it allows us to influence recall and precision; (3) it results in a list of consumer-relevant data linked to the right identifiers; (4) applying to specified parts of the full-text documents we can evaluate the quality of the annotation tool for different scenarios, i.e., when applied to "positive" sentences only, when applied to specific text sections only (as explained in 3.1.1 having the input publication in XML format enables us to focus on specific sections, like *Material and Methods*), or when applied to whole text for maximum recall. ## 5 Performance Evaluation 5.1 Inference Time Tested in a sample of 120k full text scientific publications, the total inference time for the complete pipeline, by means of aggregating the inference times for the sub-modules of sentence splitting, sentence classification, NER and clustering of annotations, results to 35.5 hours in a *g4dn.2xlarge* Amazon EC2 instance. The majority of the inference, amounting to 83% is taken up by the first two modules, while NER needed 30 minutes (1.4%) of total elapsed time to complete the processing of all documents. ## 5.2 Precision And Recall Of Modules 5.2.1 Sentence Classifier The combination of a scientific BERT model with the contrastive loss assists the sentence classification model to capture the context of RIs utilization. In production state this model parses millions of sentences averaging similar metrics. In Table 2 we present the overall performance of model, as this was measured on our hold-out test set. ## 5.2.2 Ner Despite the low number of training samples, the NER model with its contrastive nature is able to generalize with very satisfactory performance. The performance in the tags of interest is high enough so that the full extraction of a RIs can be done with a simple post processing of the NER's output. The performance of the NER module in our hold-out test set is shown in Table 3. ![5_image_0.png](5_image_0.png) ## 5.2.3 Discussion On The Overall Performance The performance of the individual models has still room for improvement. In the two previous sections we presented the performance of two of the key components, which is the sentence classifier and the NER. 
The majority of the issues we observed in a manual error analysis results from the poor generalization of the models in capturing all possible ways of how a RI is reported and discussed in a scientific paper. The sentence classifier component's nature is to lighten the processing load on the NER component. i.e., instead of processing all sentences of a scientific publication for RI entities, to only focus on the ones that the sentence classifier believes they discuss the usage of RI. This additional step also introduces some errors; however, even with an imperfect sentence classifier the NER is still able to distinguish the proper mentions of RI as seen by the high scores in Table 3. Taking into account the phrase level score, it should be highlighted that the NER task is more difficult compared to the conventional NER tasks with common entities like PER/ORG/LOC. In an industrial setting, we have found that the aforedescribed performance is already sufficient to address business use cases and generate very meaningful insights for the RI stakeholders, examples of which we share in the next section. 6 Business Impact We have completed several projects with institutions active in the Science, Technology, Engineering and Mathematics (*STEM*) domains. We focused primarily on the first use case of those listed in section 2: we helped institutions evaluate the impact of their RI investments by providing quantitative evidence based on a scientometric analysis of publications enabled by institutional RI. Those insights include: (1) the contribution of RI to the scientific output in a certain topic; (2) the scientific impact of publications enabled by RI, compared to the institutional average; (3) the scientific impact of a specific facility or lab inside the institution; (4) the impact on innovation that is enabled by a certain technology available at the institution, and, (5) the role of institutional RI on collaborations with corporate entities. Charts in Fig. 2, which are taken from a report that was done for one of the pilot institutions, illustrate how these insights can be derived using our system. The evaluation of scientific impact is routinely done by analysing citation networks, and several metrics have been developed for this purpose (Waltman, 2016). The chart in Fig. 2a compares for a specific facility the citation impact of publications from the entire institution, with the subset of publications linked to RI and with a subset of publications linked to equipment. The X axis measures the citation impact of the journals hosting the publications; the Y axis reports the direct citation impact of the publications. Both metrics are sizeindependent and normalized. Figure 2b shows that RI is a net contributor to the scientific impact of the institution, as captured by both metrics, as well as how research enabled by equipment from a specific facility has a much higher ratio of patent citations than the institutional average. ## 7 Conclusions And Future Work In this paper we have presented a novel system that can detect, extract and link the Research Infrastructure (RI) used and mentioned in scientific publications. The system comprises several advanced NLP components that can annotate and classify sentences, as well as detect RI entities and link them to a knowledge graph (KG) that has been created for the purpose of this business application. 
We have discussed the performance of the key individual components of our system, the use cases that the proposed solution can address, and we have demonstrated the insights and knowledge that any research facility or institution can obtain around the impact and Return of Investment of their equipment in research conducted by its personnel. Our future work will focus on expanding and releasing the KG in public, as well as optimizing the paralelization and scaling of the existing pipeline. ## References Apache Software Foundation. 2021. Apache tika. https://tika.apache.org/. [Software]. Yuzhou Bai and Roger Schonfeld. 2021. What is a research core? a primer on a critical component of the research enterprise. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. *arXiv* preprint arXiv:2002.06305. European Commission, Directorate-General for Research, and Innovation. 2012. *Developing worldclass research infrastructures for the European Research Area (ERA) : report of the ERA Expert Group*. Publications Office. European Commission, Directorate-General for Research, and Innovation. 2019. *Research infrastructures make science happen*. Publications Office. Elena Griniece, Jelena Angelis, Alasdair Reid, Silvia Vignetti, Jessica Catalano, Ana Helman, Matias Barberis Rami, and Henning Kroll. 2020. Guidebook for Socio-Economic Impact Assessment of Research Infrastructures. Beliz Gunel, Jingfei Du, Alexis Conneau, and Ves Stoyanov. 2020. Supervised contrastive learning for pretrained language model fine-tuning. *arXiv preprint* arXiv:2011.01403. Philip Hockberger, Jeffrey Weiss, Aaron Rosen, and Andrew Ott. 2018. Building a sustainable portfolio of core facilities: a case study. Journal of Biomolecular Techniques : JBT, 29:79–92. I. Kohlhof, B. Kozlov, and M. Doornenbal. 2014. Activating qualified thesaurus terms for automatic indexing with taxonomy-based wsd. Computational Linguistics in the Netherlands Journal, 4:17–28. OECD. 2019. Reference framework for assessing the scientific and socio-economic impact of research infrastructures. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. *Natural language processing using very large corpora*, pages 157–176. Markus Stocker, Louise Darroch, Rolf Krahl, Ted Habermann, Anusuriya Devaraju, Ulrich Schwardmann, Claudio D'Onofrio, and Ingemar Häggström. 2020. Persistent identification of instruments. *Data Science* Journal, 19. Noelle Strubczewski. 2019. Shared resource facility market analysis. UKRI. 2022. Ukri corporate plan 2022 to 2025. Ludo Waltman. 2016. A review of the literature on citation impact indicators. *Journal of Informetrics*, 10(2):365–391. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample bert fine-tuning. *arXiv preprint arXiv:2006.05987*.
zhang-etal-2023-event
Event-Centric Query Expansion in Web Search
https://aclanthology.org/2023.acl-industry.45
In search engines, query expansion (QE) is a crucial technique to improve search experience. Previous studies often rely on long-term search log mining, which leads to slow updates and is sub-optimal for time-sensitive news searches. In this work, we present Event-Centric Query Expansion (EQE), the QE system used in a famous Chinese search engine. EQE utilizes a novel event retrieval framework that consists of four stages, i.e., event collection, event reformulation, semantic retrieval and online ranking, which can select the best expansion from a significant amount of potential events rapidly and accurately. Specifically, we first collect and filter news headlines from websites. Then we propose a generation model that incorporates contrastive learning and prompt-tuning techniques to reformulate these headlines to concise candidates. Additionally, we fine-tune a dual-tower semantic model to serve as an encoder for event retrieval and explore a two-stage contrastive training approach to enhance the accuracy of event retrieval. Finally, we rank the retrieved events and select the optimal one as QE, which is then used to improve the retrieval of event-related documents. Through offline analysis and online A/B testing, we observed that the EQE system has significantly improved many indicators compared to the baseline. The system has been deployed in a real production environment and serves hundreds of millions of users.
# Event-Centric Query Expansion In Web Search Yanan Zhang1∗ Weijie Cui2∗ Yangfan Zhang1 **Xiaoling Bai**1† Zhe Zhang2 Jin Ma2 Xiang Chen1 **Tianhua Zhou**1 1Tencent Inc. 2University of Science and Technology of China {yananzhang, devinbai}@tencent.com, can@mail.ustc.edu.cn ## Abstract In search engines, query expansion (QE) is a crucial technique to improve search experience. Previous studies often rely on longterm search log mining, which leads to slow updates and is sub-optimal for time-sensitive news searches. In this work, we present EventCentric Query Expansion (EQE), a novel QE system that addresses these issues by mining the best expansion from a significant amount of potential events rapidly and accurately. This system consists of four stages, i.e., event collection, event reformulation, *semantic retrieval* and *online ranking*. Specifically, we first collect and filter news headlines from websites. Then we propose a generation model that incorporates contrastive learning and prompt-tuning techniques to reformulate these headlines to concise candidates. Additionally, we fine-tune a dual-tower semantic model to function as an encoder for event retrieval and explore a two-stage contrastive training approach to enhance the accuracy of event retrieval. Finally, we rank the retrieved events and select the optimal one as QE, which is then used to improve the retrieval of event-related documents. Through offline analysis and online A/B testing, we observe that the EQE system significantly improves many metrics compared to the baseline. The system has been deployed in Tencent QQ Browser Search and served hundreds of millions of users. The dataset and baseline codes are available at https: //open-event-hub.github.io/eqe. ## 1 Introduction People are always eager to obtain details and updates on current hot events through search engines. To efficiently return dozens of relevant documents from billions of candidates, most search engines use a "retrieval-rank-rerank-mixed rank"architecture, as illustrated in Figure 1. ∗ Equal Contributions † Corresponding author ![0_image_0.png](0_image_0.png) Figure 1: Overview of the query expansion process and search system in Tencent QQ Browser Search. Queries, particularly those relying on keywords, present a tough challenge for query intent understanding due to their brevity, absence of world knowledge, and lack of grammatical structure (Broder et al., 2007). When a significant event takes place, the search intent of users subsequently can rapidly and drastically shift. For example, during the Green-Poole conflict, a user searching for "green" may be seeking information about the color, while many others desire news about NBA player Draymond Green. While methods based on search log mining (Jansen et al., 2007; Zamora et al., 2014; Caruccio et al., 2015) are still commonly used for query intent understanding, they are limited by their reliance on the accumulation of posterior data and struggle with timely and accurately processing the intent for recent events, making it difficult to retrieve and rank event-related documents. Recent approaches (Zhang et al., 2020a; Nogueira et al., 2019; Sun et al., 2022) suggest using additional context from query-associate documents or entities to improve the performance of query understanding. However, they still face challenges in real-time search scenarios. To tackle this challenge, we present EQE, a real-time query expansion system specifically designed to efficiently capture query intent for ongoing events. 
As depicted in Figure 1, EQE extends the original query with the most fulfilling event, selected from a large pool of candidate events. By performing the same retrieval step with both the original and expanded queries, more results related to the event will be returned. This bypass architec464 ![1_image_0.png](1_image_0.png) ture effectively ensures system flexibility. When there are sufficient machine resources, the number of bypasses can be increased, and multiple query expansions can be used to improve the performance of document retrieval. Our EQE system employs a four-stage structure, as illustrated in Figure 2, consisting of event collection, event reformulation, semantic retrieval, and online ranking. Events are collected from news headlines as they are typically more concise and event-centric, compared to body texts which are lengthier and contain extraneous information. To guarantee the accuracy of the collected events, we employ a combination of rule-based coarse filtering and language model-based fine filtering (§ 2.1). The collected headlines, as described in Appendix A, may contain noise, irregular grammar, and lack of world knowledge, making them unsuitable for query expansions. To solve these problems in such scenarios, we reformulate them using a generation model, which is more effective than extractive models (§ 2.2). Our method employs keyword-based prompt learning to make generated content more controllable and applies contrastive learning on the encoder to counteract embedding degradation (Gao et al., 2019). After this step, we obtain a high-quality candidate set of event-centric QE. For a given query, to further narrow down the QE candidate set, we utilize a supervised SimCSE model (Gao et al., 2021) to retrieve relevant QEs. SimCSE effectively improves the accuracy of retrieval by addressing the issue of representation space degradation. Moreover, inspired by (Gillick et al., 2019), we employ a two-stage training approach that incorporates informative hard negative samples for each query, resulting in a further improvement in representation quality (§ 2.3). Finally, we design an online ranking module to select the best QE. Features of query-side, event-side, and interactive are considered comprehensively (§ 2.4). As far as we know, EQE is the first query expansion solution developed explicitly for real-time event intent. The efficiency of EQE is verified through offline analysis and online A/B testing. The main contributions of this work are summarized as follows: - We propose a real-time and efficient query expansion system for timely search scenarios. The system comprises four stages: event collection, event reformulation, semantic retrieval, and online ranking. - In the event reformulation stage, we introduce an effective generation model that leverages prompt learning and contrastive learning techniques to produce a high-quality candidate set of QE. - In the semantic retrieval stage, we employ a two-stage contrastive learning approach to improve the accuracy of semantic retrieval. - Offline analysis and online A/B testing on Tencent QQ Browser Search demonstrate the effectiveness of our proposed EQE framework. ## 2 Method In this section, we describe our proposed framework of EQE shown in Figure 2. We first introduce the scheme for event collection in industrial scenarios. We then elaborate on event reformulation and semantic retrieval, describing how we use contrastive learning and prompt learning to improve model performance. 
Finally, we discuss online ranking, revealing how to select the optimal expansion. ## 2.1 Event Collection The essential phase in the "Event Collection" process is to identify events from the vast amount of newly uploaded content on the Internet. We filter events from the headlines using a two-step method that includes a rule-based coarse filter and a semantically-driven fine filter. Coarse Filter. After using basic feature filters such as publication time, page type, site type, etc., we gather approximately 50 million news article headlines over the duration of recent six months. As described in Appendix E, the headlines generated by these heuristic rules include irregular syntax, missing event components, numerous events, etc., posing a barrier to subsequent event reformulation. So further we use the LTP toolkit (Che et al., 2021) to extract event triggers from headlines and drop headlines missing event elements or with multiple events (the number of triggers more than 2). Fine Filter. The rule-based coarse filter is based on pre-defined patterns and has limited recognition abilities. To address these issues and further improve the accuracy of event collection, we train an event detection model based on RoBERTa (Liu et al., 2019). We employ six experts to annotate around 200,000 samples, which are utilized to train the model. Subsequently, we use this model to infer event probabilities for the coarsely filtered headlines and filter them using a predefined threshold, achieving a 95% accuracy rate in detecting event-related headlines. ## 2.2 Event Reformulation This step aims to make events qualified for query expansion by reformulating them using a generation model, addressing issues such as noise, irregular grammar, and low-frequency words. As illustrated in Figure 3, we introduce two significant improvements to the encoder-decoder architecturebased model. Firstly, we enhance the controllability of the generation process using prompt learning. Secondly, we optimize the representation quality of headlines using contrastive learning. By simultaneously optimizing these two tasks, we can effectively refine events for query expansion. Prompt Guidance. To ensure important information is not overlooked, we leverage prompt learning technology when training a generation model. Unlike prior work, we propose adaptive keyword templates to provide guidance during sentence generation. Firstly, we use the KeyBERT model (Grootendorst, 2020) to extract the most essential nouns from the sentence. We then insert the extracted keyword into a fixed template to form a keyword template, denoted as T. Finally, we concatenate the headline H, the keyword template T, and the target qualified event E with special ![2_image_0.png](2_image_0.png) tokens as "[CLS]H [SEP][CLS]T E[SEP]" 1. In this setup, the front segment "[CLS]H[SEP]" and the latter segment "[CLS]T E[SEP]" serve as the inputs for the encoder and decoder, respectively. It is worth noting that E is omitted during the inference phase. Contrastive Learning. Previous studies show that natural language generation tasks suffer from representation space degradation problems, which can be alleviated by contrastive learning (Gao et al., 2019). In our model, the embedding corresponding to the [CLS] token of the encoder is regarded as the headline representation and contrastive learning is performed based on it. 
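Before detailing that procedure, the prompt-guided input construction described under Prompt Guidance can be sketched as follows; this is an illustration only — the KeyBERT call, the English template wording (the real system operates on Chinese headlines), and the literal special tokens are our assumptions rather than the production code.

```python
from keybert import KeyBERT

kw_model = KeyBERT()  # default English backbone; shown for illustration only


def build_model_inputs(headline: str, target_event: str = "") -> tuple:
    """Build the encoder input "[CLS]H[SEP]" and decoder input "[CLS]T E[SEP]"."""
    kws = kw_model.extract_keywords(headline, top_n=1)     # [(keyword, score)] or []
    keyword = kws[0][0] if kws else ""
    template = f"The key entity is {keyword}."             # placeholder wording for T
    # In practice the tokenizer inserts the special tokens; they are written out
    # here only to mirror the concatenation scheme described above.
    encoder_input = f"[CLS]{headline}[SEP]"
    decoder_input = f"[CLS]{template}{target_event}[SEP]"  # E is omitted at inference
    return encoder_input, decoder_input
```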
Specifically, for each headline, we perform a position swap of its two terms to obtain a positive example headline and then use contrastive learning to pull the representations of positive headline pairs closer and push away the representations of negative headline pairs (i.e., in-batch negative samples). The following is a description of the contrastive learning loss calculation process within a batch. (a) In a batch of size 2N, the training data consists of 2N pairs denoted by {(H1, E1), (H + 1 , E1), · · · ,(HN , EN ),(H + N , EN )}, where < Hi, H+ i > denote a pair of similar headlines, both of which can be paired with the event phrase Ei. Contrastive learning aims to pull semantically close neighbors (i.e. (Hi, H+ i )) together and pushing apart nonneighbors (i.e. (Hi, H+ j ), i, j ∈ {0, 1*, ..., N*} 1For the BART model, the start token ID for the decoder is [CLS], while for the mT5 model, it is [PAD]. and j ̸= i). (b) The above 2N samples are passed through the encoder to obtain 2N embeddings that are denoted as (e1, e2*, .., e*N ; e + 1 , e+ 2 , ..., e+N ). (c) The 2N embeddings are used to compute the contrastive learning loss, which is included as part of the loss for this mini-batch. Let τ denote the temperature hyper-parameter and sim(e1, e2) denote the cosine similarity. Then the contrastive learning loss, denoted as Lcl is: $$L_{c l}=-\sum_{i=1}^{N}\log\frac{e^{\sin(e_{i},e_{i}^{+})/\tau}}{\sum_{j=1}^{N}e^{\sin(e_{i},e_{j}^{+})/\tau}}$$ The training data for the Event Reformulation stage consists of pairs <headline, *event*> that are sampled from user-click logs. Specifically, we employ two methods to construct the target event: regarding the user query as the target event and extracting the event from the headline using the LTP toolkit. To ensure that the constructed events are qualified as the target for the generation model, we further filter out pairs that do not meet our standards for loyalty, integrity, and cleanness through expert annotation. ## 2.3 Semantic Retrieval In this step, we use a dual-tower semantic model based on contrastive learning to retrieve highly satisfying events for a given query. The work of Xiong et al. (2021) points out that the dominance of uninformative negative samples leads to a bottleneck in the recall system, therefore, we employ a novel two-stage training paradigm. For a given query, the positive samples are events obtained from the reformulated events of its clicked headlines. Then we use features, such as Jaccard Distance (Jaccard, 1901), BERTScore (Zhang et al., 2020b), etc., to only keep relevant pairs as training samples. In the first stage, we use these positive samples with shuffled negative ones to finetune a naive dual-tower retrieval model and obtain the encoder. The weights of the model are initialized using the parameters of RoBERTa. After finetuning, we build an event vector library with more than 4 million entries using this encoder. For a given query vector, based on Faiss (Johnson et al., 2019), the top-K events are recalled according to the cosine similarity. Events located at the upper and lower thresholds of the threshold are regarded as hard neg samples, where the bounds are pre-defined by distribution statistics. In the second stage, we replace the randomly shuffled negative events with hard negative events and retrain the model initialized with the encoder obtained from the first stage. This results in the final retrieval model. 
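To make the second training stage concrete, the following is a rough Faiss-based sketch of how hard negatives might be mined from the event index built after the first stage; the fixed score band and the function name are our assumptions — the paper derives the bounds from distribution statistics rather than fixed constants.

```python
import faiss


def mine_hard_negatives(query_vecs, event_vecs, k=100, lower=0.55, upper=0.75):
    """Return, per query, indices of events whose cosine score falls in a mid band.

    query_vecs and event_vecs are float32 arrays of embeddings from the
    first-stage encoder; lower/upper are illustrative placeholders.
    """
    faiss.normalize_L2(event_vecs)               # cosine similarity via inner product
    faiss.normalize_L2(query_vecs)
    index = faiss.IndexFlatIP(event_vecs.shape[1])
    index.add(event_vecs)
    scores, ids = index.search(query_vecs, k)    # both of shape (num_queries, k)
    hard_negs = []
    for row_scores, row_ids in zip(scores, ids):
        band = (row_scores >= lower) & (row_scores <= upper)
        hard_negs.append(row_ids[band].tolist())
    return hard_negs
```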
## 2.4 Online Ranking Actually, selecting the optimal expansion requires considering multiple factors, such as relevance, event popularity, and timeliness. Therefore, we use the classic light-weight sorting model GBDT (Friedman, 2001), which is compatible with the interpretation of online features. We incorporate three types of features to build the model: query-side, event-side, and interaction. Queryside features encompass query domain classification, entity recognition, word segmentation, and word weighting, among others, generated by existing online operators. Event-side features involve event found time and event popularity (the size of the cluster to which an event belongs). The interaction features include semantic similarity, BM25 (Robertson and Zaragoza, 2009), and entity matching between the query and event. We describe the method of collecting training samples. For each query, we input the events obtained from the previous stage into the online search engine to obtain the search results pages. Then, we select the page that best satisfies the event intent of the query through expert annotation, and its corresponding event is labeled as a positive sample for the query, while the other events are labeled as negative samples. We obtain 50,000 samples, which are used to train the GBDT model for inferring the best query expansion. ## 3 System Architecture In this section, we describe our baseline and EQE architecture in detail. ## 3.1 Baseline Approach We first take a glance at our QE baseline, which is a query graph analysis framework. We devise a Query-Document click graph G based on the click propagation algorithm (Jiang et al., 2016). In order to prioritize time-sensitive queries, we limit our analysis to click-pairs from news websites that occurred within a 3-day window. To mitigate the risk of irrelevant results, we integrate BM25 score to the adjacency matrix of the graph, denoted as C where each entry Ci,j is the weight of the edge between query qi and document dj , specifically formulated as: $$C_{i,j}=\begin{cases}\alpha\cdot\text{BM25}(q_{i},d_{j})+1,&\text{with edge}\\ 0,&\text{no edge}\end{cases}\tag{1}$$ where α (set to 0.2) is a smoothing coefficient. Representations of queries and documents are iteratively updated according to Eq. (2) and Eq. (3), respectively. $$Q_{i}^{(n)}=\frac{1}{\left\|\sum_{j}^{|Doc|}C_{i,j}\cdot D_{j}^{(n-1)}\right\|_{2}}\sum_{j=1}^{|Doc|}C_{i,j}\cdot D_{j}^{(n-1)}\tag{2}$$ $$D_{j}^{(n)}=\frac{1}{\left\|\sum_{i=1}^{|Query|}C_{i,j}\cdot Q_{i}^{(n)}\right\|_{2}}\sum_{i=1}^{|Query|}C_{i,j}\cdot Q_{i}^{(n)}\tag{3}$$ where $Q_{i}^{(n)}$ and $D_{i}^{(n)}$ are the representations of $n$ and $n$. qi and di at the n-th iteration respectively. After the n-th iteration, we perform clustering on Q (n) i to obtain the query clusters. For each cluster, we select the most frequent query as the expansion of other queries. ## 3.2 Eqe System Figure 2 illustrates the EQE system, which can be divided into two parts: offline and online. The offline system sequentially processes streaming data from the internet, performing event collection, event reformulation, and Faiss indexing for fast response. These steps can be processed in parallel. In the online part, when a query arrives, a GBDT ranking model selects the top-1 candidate as the query expansion based on rich features. Furthermore, a rapidly updated caching system stores pairs of <query, top-1 *expansion*> to intercept requests to meet the time-consuming demands. 
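The serving-side control flow of this caching layer, which the next paragraph walks through in prose, can be summarized by the sketch below; the in-memory dictionary, the retrieve_and_rank callable, and the thread-based refresh are hypothetical stand-ins for the production components.

```python
import threading

cache = {}  # hypothetical in-memory stand-in for the production cache service


def expand_query(query: str, retrieve_and_rank) -> str:
    """Cache-first lookup; on a miss, compute the expansion and store it."""
    if query in cache:
        # Refresh asynchronously so the hit path stays fast.
        threading.Thread(target=_refresh, args=(query, retrieve_and_rank), daemon=True).start()
        return cache[query]
    expansion = retrieve_and_rank(query)   # semantic retrieval + GBDT ranking
    cache[query] = expansion
    return expansion


def _refresh(query, retrieve_and_rank):
    cache[query] = retrieve_and_rank(query)
```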
Upon receiving a new query request, the system first looks for a pre-prepared QE in the cache. If none is found, the system invokes the retrieval and ranking modules to obtain the QE. This QE is then returned to the main search system, while the <query, *expansion*> pair is written into the cache for any subsequent identical query requests. If a match is found, the cached result is immediately returned to the main search system, and the caching system concurrently undergoes an asynchronous update in preparation for future requests. Because this pipeline executes asynchronously, it does not increase the response delay of the mainstream search process. Therefore, the response time for popular queries is mainly consumed by querying the caching system; only the retrieval and ranking stages executed for infrequent queries increase the response time of the search engine. Finally, EQE covers nearly 50% of online traffic; the other half, such as queries requiring explicit knowledge, is already addressed by other intent understanding modules and is therefore not considered within this scope. Online data indicate that the query expansion system raises search latency by only 10 ms, evidencing the efficacy of the proposed module.

## 4 Experiments

We conduct a series of comprehensive evaluations, both in offline and online environments, incorporating quantitative and qualitative aspects, to demonstrate the advantages of EQE.

## 4.1 End-To-End Evaluation

Firstly, we present the results of deploying the EQE system online, taking into account both offline and online metrics.

Offline Evaluation. We measure the offline performance of EQE using the *Recall*@K metric. As illustrated in Eq. (4), given a query Q, the documents clicked by users are denoted as T = {t1, . . . , tN}, which are regarded as the target. The top-K document set recalled by the QE module is denoted as I = {i1, . . . , iK}. *Recall*@K is defined as:

$$\mathrm{Recall@}K=\frac{\sum_{j=1}^{K}\mathbb{1}\left(i_{j}\in T\right)}{|T|}\tag{4}$$

We first collect user click logs over a certain period of time, where documents are retrieved by the original queries without the influence of QE. Specifically, in our scenario, we collect news, videos, and user-generated content (UGC). Meanwhile, we record documents retrieved by the expansions produced by both EQE and the baseline approach. After 7 days of accumulation, a total of 850,000 valid online requests are collected. As shown in Table 1, when evaluating *Recall*@K at different thresholds, EQE significantly surpasses the online baseline.

| Methods | Recall@100 | Recall@150 | Recall@200 |
|-----------|--------------|--------------|--------------|
| Baseline | 0.41 | 0.47 | 0.58 |
| EQE | 0.58 | 0.65 | 0.74 |
| Improve | + 41.46% | + 38.29% | + 27.58% |

Table 1: Offline Recall@K of EQE compared with the online baseline.

| | ∆GSB | CTR | PCTR | UCTR |
|--------|--------|--------|--------|--------|
| EQE | +12.5% | +6.64% | +6.23% | +5.03% |

Table 2: Online A/B test of the deployed EQE.

Online Evaluation. We construct a 30-day A/B experiment with 1% of online traffic to gather feedback from millions of users and study the online performance of EQE compared to the strong baseline described in Section 3.1. QEs derived from both frameworks are utilized in downstream tasks (document retrieval and sorting).
For online evaluation, we are mainly concerned about the business metrics that impact user experience, such as ∆GSB (Zou et al., 2021), CTR (Rangadurai et al., 2022), PCTR and UCTR (Qin et al., 2022). As shown in Table 2, EQE outperforms the baseline and gains improvements of 6.44%, 6.23% and 5.03% on CTR, PCTR and UCTR, respectively, indicating its SOTA performance. ## 4.2 Performance Of Event Collection We choose the intersection of "Hot Search List" from various platforms as our evaluation set. This decision serves two purposes: firstly, it can eliminate unfair comparisons due to platform-specific content biases; secondly, these events are highly representative in the search domain, as users consistently demonstrate in them and desire to retrieve relevant information swiftly. We employ two annotation experts to assist in the evaluation process, which involved: 1) Collecting these events from different platforms (such as Baidu and Weibo) to find the earliest time they appeared respectively. Admittedly, since we cannot accurately determine the initial creation time of these events on other platforms, we resort to the first appearance time on the "Hot Search List" as an approximation; 2) Identifying the time when the first publish emerged on the Internet; 3) Recording events discovery coverage rate at several time points. Figure 4 illustrates the average coverage of Baidu, Weibo, and EQE at different time points after "Hot Search List" events occurred. The coverage of each system is recorded at time points of 1, 2, 5, 10, 15, and 20 minutes. As shown, at the 5-minute time point, EQE discovers more than 10% of the events in advance compared to the other systems. ![5_image_0.png](5_image_0.png) Models Rouge-L BLEU BERTScore BART (vanilla) 0.8391 0.7692 0.9266 BART + CL 0.8406 0.7724 0.9278 BART + PG 0.8458 0.7777 0.9294 BART + CL + PG **0.8480 0.7822 0.9312** mT5 (vanilla) 0.8453 0.7781 0.9297 mT5 + CL 0.8489 0.7833 0.9315 mT5 + PG 0.8511 0.7857 0.9322 mT5 + CL + PG **0.8533 0.7897 0.9336** ## 4.3 Performance Of Event Reformulation We evaluate the performance of the proposed generation model by computing automated metrics. We disclose an annotated test dataset, which is called Title2EventPhrase. Production procedures and analysis of Title2EventPhrase are introduced in Appendix B and C. We conduct ablation experiments to measure the effect of the prompt and contrastive learning modules. Furthermore, to verify the universality of these two modules, we utilize BART (Lewis et al., 2020) and mT5 (Xue et al., 2021) as the backbone networks, respectively. 2 Experiments are measured with the ROUGE (Lin, 2004), BLEU (Papineni et al., 2002) and BERTScore metrics. As shown in Table 3, our proposed components significantly improved the generation performance. ## 4.4 Performance Of Semantic Retrieval In this section, we evaluate the effectiveness of our proposed semantic retrieval model against the 2We use a popular version of the Chinese BART model available at https://huggingface.co/fnlp/bart-large-chinese, and the base version of mT5 available at https://huggingface. co/google/mt5-base. It is worth noting that the number of transformer layers in both models is consistent. | Models | Recall@10 | MRR@10 | AUC | |-------------------|-------------|----------|-------| | RoBERTa (vanilla) | 0.74 | 0.43 | 0.80 | | RoBERTa + CL | 0.75 | 0.45 | 0.82 | | RoBERTa + CL + 2T | 0.80 | 0.51 | 0.87 | Table 4: Evaluation of semantic models. 
CL and 2T are abbreviations for contrastive learning and two-stage training with hard neg samples, respectively. baseline by employing three definitive performance metrics: standard Recall@K (Jegou et al., 2010), MRR@K (Craswell, 2009) and AUC (Fawcett, 2006). We construct a testing dataset with a similar method to obtain <query, *event*> pairs as the training dataset mentioned in Section 2.3, with relevance labels annotated by experts. We sample them at different time periods to ensure training and testing datasets have the same distribution but are non-overlapping. Table 4 shows the advantages of our training scheme over other baseline models. In addition, we also visualize the results of a two-dimensional t-SNE (Van der Maaten and Hinton, 2008) graph on the embedding of 100,000 queries, further demonstrating the effectiveness of our proposed method in addressing the problem of representation degradation. For more details, please refer to Appendix D and Figure 7. ## 5 Related Work Query understanding (QU) is a fundamental task of information retrieval (IR), which aims to help reformulate query, predict query intent, and ultimately improve the document relevance modeling (Zhang et al., 2020a). As an essential method for QU, query expansion (QE) involves the addition of relevant terms or specific information to a query to clarify intention and enhance retrieval performance (Rosin et al., 2021). In recent years, most QE methods have been based on word embedding techniques (Srinivasan et al., 2022; Padaki et al., 2020; Azad and Deepak, 2019; Kuzi et al., 2016; Zamani and Croft, 2016), which select semantically related terms as expansions. Usually, word embeddings are learned in two ways, one is based on the semantic vector of terms and the other is based on retrieval relevance (Diaz et al., 2016; Zamani and Croft, 2017). Meanwhile, external data sources, such as Wikipedia and WordNet, have also been utilized for QE (Azad and Deepak, 2019). However, these conventional QE methods mainly rely on mining search logs or pre-built expansion libraries, which leads to slow update rates in time-sensitive scenarios. On the other hand, new occurring events in real time can meet the timeliness requirements well, and mining QE from them is a promising research direction. Recently Deng et al. (2022) construct a large-scale dataset aiming at extracting event arguments, like subject, *predicate* and *object*, from news headlines. However, structured outputs from extractive models (Lu et al., 2022; Gao et al., 2022) might not be fully utilized by the retrieval and ranking modules. We thus turn to generative models for event reformation. Normalized events serve as crucial signals for timesensitive query expansion, which makes the largest contribution to our work. ## 6 Conclusion This paper presents our solution for large-scale event-centric query expansion, called EQE, which is able to efficiently capture query intent for ongoing events and help our search engine to retrieve more event-related results. Advanced techniques are involved in each stage of EQE to improve performance. Offline experiments and online A/B tests verify the effectiveness of EQE. We have deployed the system in Tencent QQ Browser Search to serve hundreds of millions of users. Meanwhile, we share the design and deployment scheme. ## References Hiteshwar Kumar Azad and Akshay Deepak. 2019. A new approach for query expansion using wikipedia and wordnet. *Information sciences*, 492:147–163. 
Andrei Z Broder, Marcus Fontoura, Evgeniy Gabrilovich, Amruta Joshi, Vanja Josifovski, and Tong Zhang. 2007. Robust classification of rare queries using web knowledge. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 231–238. Loredana Caruccio, Vincenzo Deufemia, and Giuseppe Polese. 2015. Understanding user intent on the web through interaction mining. *Journal of Visual Languages & Computing*, 31:230–236. Wanxiang Che, Yunlong Feng, Libo Qin, and Ting Liu. 2021. N-LTP: An open-source neural language technology platform for Chinese. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 42–49, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Nick Craswell. 2009. *Mean Reciprocal Rank*, pages 1703–1703. Springer US, Boston, MA. Haolin Deng, Yanan Zhang, Yangfan Zhang, Wangyang Ying, Changlong Yu, Jun Gao, Wei Wang, Xiaoling Bai, Nan Yang, Jin Ma, Xiang Chen, and Tianhua Zhou. 2022. Title2Event: Benchmarking open event extraction with a large-scale Chinese title dataset. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pages 6511–6524, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Fernando Diaz, Bhaskar Mitra, and Nick Craswell. 2016. Query expansion with locally-trained word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 367–377, Berlin, Germany. Association for Computational Linguistics. Tom Fawcett. 2006. An introduction to roc analysis. Pattern Recognition Letters, 27(8):861–874. ROC Analysis in Pattern Recognition. Jerome H Friedman. 2001. Greedy function approximation: a gradient boosting machine. In *Annals of* statistics, pages 1189–1232. JSTOR. Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and TieYan Liu. 2019. Representation degeneration problem in training natural language generation models. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Jun Gao, Changlong Yu, Wei Wang, Huan Zhao, and Ruifeng Xu. 2022. Mask-then-fill: A flexible and effective data augmentation framework for event extraction. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, pages 4537–4544, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Daniel Gillick, Sayali Kulkarni, Larry Lansing, Alessandro Presta, Jason Baldridge, Eugene Ie, and Diego Garcia-Olano. 2019. Learning dense representations for entity retrieval. In *Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)*, pages 528–537, Hong Kong, China. Association for Computational Linguistics. Maarten Grootendorst. 2020. Keybert: Minimal keyword extraction with bert. Paul Jaccard. 1901. Étude comparative de la distribution florale dans une portion des alpes et des jura. *Bulletin del la Société Vaudoise des Sciences Naturelles*, 37:547–579. Bernard J. Jansen, Danielle L. Booth, and Amanda Spink. 2007. Determining the user intent of web search engine queries. 
In *Proceedings of the 16th* International Conference on World Wide Web, WWW 2007, Banff, Alberta, Canada, May 8-12, 2007, pages 1149–1150. ACM. Herve Jegou, Matthijs Douze, and Cordelia Schmid. 2010. Product quantization for nearest neighbor search. IEEE transactions on pattern analysis and machine intelligence, 33(1):117–128. Shan Jiang, Yuening Hu, Changsung Kang, Tim Daly Jr., Dawei Yin, Yi Chang, and ChengXiang Zhai. 2016. Learning query and document relevance from a web-scale click graph. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, SIGIR 2016, Pisa, Italy, July 17-21, 2016, pages 185–194. ACM. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. *IEEE* Transactions on Big Data, 7(3):535–547. Saar Kuzi, Anna Shtok, and Oren Kurland. 2016. Query expansion using word embeddings. In *Proceedings* of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, pages 1929–1932. ACM. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Michael Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *Cornell University - arXiv*. Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5755–5772, Dublin, Ireland. Association for Computational Linguistics. Rodrigo Nogueira, Jimmy Lin, and AI Epistemic. 2019. From doc2query to doctttttquery. *Online preprint*, 6. Ramith Padaki, Zhuyun Dai, and Jamie Callan. 2020. Rethinking query expansion for bert reranking. In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part II 42, pages 297–304. Springer. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Yuqi Qin, Pengfei Wang, Biyu Ma, and Zhe Zhang. 2022. A multi-interest evolution story: Applying psychology in query-based recommendation for inferring customer intention. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 1655–1665. Kaushik Rangadurai, Yiqun Liu, Siddarth Malreddy, Xiaoyi Liu, Piyush Maheshwari, Vishwanath Sangale, and Fedor Borisyuk. 2022. Nxtpost: User to post recommendations in facebook groups. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 3792–3800. 
Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. *Foundations and Trends in Information Retrieval*, 3:333–389. Guy D Rosin, Ido Guy, and Kira Radinsky. 2021. Eventdriven query expansion. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 391–399. Krishna Srinivasan, Karthik Raman, Anupam Samanta, Lingrui Liao, Luca Bertelli, and Mike Bendersky. 2022. Quill: Query intent with large language models using retrieval augmentation and multi-stage distillation. *ArXiv preprint*, abs/2210.15718. Zhongkai Sun, Sixing Lu, Chengyuan Ma, Xiaohu Liu, and Edward Guo. 2022. Query expansion and entity weighting for query reformulation retrieval in voice assistant systems. In *WSDM 2022*. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. *Journal of machine* learning research, 9(11). Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In *9th International Conference on Learning* Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Hamed Zamani and W Bruce Croft. 2016. Embeddingbased query language models. In *Proceedings of the* 2016 ACM international conference on the theory of information retrieval, pages 147–156. Hamed Zamani and W. Bruce Croft. 2017. Relevancebased word embedding. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, Shinjuku, Tokyo, Japan, August 7-11, 2017, pages 505–514. ACM. Juan Zamora, Marcelo Mendoza, and Héctor Allende. 2014. Query intent detection based on query log mining. *J. Web Eng.*, 13(1&2):24–52. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, and Xueqi Cheng. 2020a. Query understanding via intent description generation. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 1823–1832. ACM. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Lixin Zou, Shengqiang Zhang, Hengyi Cai, Dehong Ma, Suqi Cheng, Shuaiqiang Wang, Daiting Shi, Zhicong Cheng, and Dawei Yin. 2021. Pre-trained language model based ranking in baidu search. In *Proceedings* of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 4014–4022. ## A Characteristics Of News Headlines News headlines, designed to catch the reader's attention and highlight the editor's perspective, may contain redundant information beyond the event itself. In addition, they often use irregular grammar for the sake of memorability, and some words require extensive knowledge to comprehend. We provided several examples in Figure 5, which clearly demonstrate the necessity of using generation models to reformulate them before using them as query expansion. 
## B Title2Eventphrase Construction Data Collection. We collect a broad range of Chinese web pages from January to March 2022, using web crawler logs from Tencent QQ Browser Search, and choose a reliable business tool to identify web pages that mention events (primarily from news websites). Following this, we extract the titles of the chosen web pages and automatically label them with our predefined topics. Any titles that contain toxic content are removed. To ensure a diverse range of events, we conduct data sampling every ten days during the crawling period. This reduces the frequency of events that belong to the most commonly occurring topics, resulting in a more balanced distribution of topics. In total, we collected over 100,000 instances. Annotation Standard. We summarize some essential parts of our annotation criteria. Our goal is to obtain clear and concise event descriptions from the titles and to extract the most chief (core) events from titles that contain multiple events. To achieve this, we have outlined some important specifications below: 1) Our definition of an event is a real-world behavior or change in state. It is worth noting that statements like policy notices or subjective opinions are not considered events. Furthermore, if a title is unclear or contains multiple unrelated events (e.g. news roundup), it should be flagged as "invalid" by annotators. 2) We have specified some rules to clarify the labeling of event phrases: a) definite (noninterrogative), fluent (good readability) description of the event in the title; b) consistent with the fact described in the title; c) if there is a progressive relationship between multiple events in the title, they need to be included to ensure the integrity of the information; d) quantifiers, gerunds, and complements need to be removed if they do not affect the understanding of the event, otherwise should be retained. Crowdsourced Annotation. We cooperate with crowdsourcing companies to hire human annotators. After multi-rounds of training in three weeks, 27 annotators are selected. We pay them ¥0.3 per instance. Meanwhile, four experts participated in two rounds of annotation checking for quality control. For each instance, a human annotator is asked to write the core event phase independently. To reduce the annotation difficulty, we provide a reference output along with the raw title. In the beginning, the reference output is mined from the query in click-log. After 10,000 labeled instances are collected, we train a better BART model for the rest of the annotation process. Also, we allow the annotators to refer to search engines to acquire domain knowledge. The crowd-sourced annotation is conducted in batches with two rounds of quality checking before being integrated into the final version of our dataset. Two rounds Checking. Each time the crowdsourced annotation of a batch is completed, it is sent to four experts to check whether they meet the requirements of our annotation standard. Instances that do not pass the quality check will be sent back for revision, attached with the specific reasons for rejection. This process repeats until the acceptance rate reaches 90%. Then, the current batch is sent to the authors for dual-check. The authors will randomly check 30% of the instances and send unqualified instances back to the experts along with the reasons for rejection. Slight adjustments on annotation standards also take place in this phase. This process repeats until the acceptance rate reaches 95%. 
## C Title2Eventphrase Analysis Overview. Table 5 shows the overview of the Title2EventPhrase dataset, including data size, topic numbers, and the average length of titles and events. | Attribute | Value | |--------------------|---------| | Data Size | 41351 | | Number of Topics | 25 | | Avg. Len. of Title | 25.85 | | Avg. Len. of Event | 16.68 | Table 5: The overall statistics of Title2EventPhrase. ![10_image_0.png](10_image_0.png) From ¥29,900, Chery QQ Icecream Officially Launched.) | Challenges | Examples | Description | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|---------------| | 冰雪之上的新活力!长春冰雪新天地跨年演唱会拉开 帷幕 (New vitality above ice and snow! Changchun Ice and Snow Xintiandi New Year's Eve Concert kicks off.) The first sentence provides a commentary on the event, aimed at capturing the reader's interest, but it does not directly convey the essential details of the event. 对标宏光 MINI EV!售价2.99万起,奇瑞QQ冰淇淋 正式上市 (Benchmarking Against Wuling Hongguang Mini EV! Priced From ¥29,900, Chery QQ Icecream Officially Launched.) The subject of the predicate "对标 (Benchmarking Against)", is omitted since it appears at the next sentence. "打鸟(Score Birdie)" is a term in golf but can be literally interpreted as "hit bird" in Chinese. Domain knowledge is needed (Poulter Rushes to Score Birdie, Finishing the Match on 17.) to properly identify the predicate. | | | | Filter Type | Examples | Description | |--------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------| | Non-event Headline | iphone13为啥好,请看介绍 (Why is iPhone 13 good? Please see the introduction.) | This sentence does not contain event-related information. | | Missing Event | 最新消息,已全部删除 (Latest news, all deleted.) | This sentence lacks the subject of the event. | | Components | 早报|起亚高管怒斥比亚迪;居民存款破纪 录;驾校教练撞脸杨洋走红... (Early News: Kia Exec Heavily Criticizes BYD; Recordbreaking Bank Savings from Residents; Driving School Coach Goes Viral After Resembling Yang Yang...) | This sentence contains three different events, all of which pose a great challenge to downstream tasks. | | Numerous Events Irregular Syntax | 黎明觉醒突然 (Dawn awakens suddenly) | This sentence is incomplete, lacking a verb and an object. | | 孩子近视危害大,如何才能有效预防? | The sentence is a question and does not contain any | | | Interrogative Sentence (Near-sightedness in children is a major hazard. How can it objective. be effectively prevented?) | | | Topic Distribution. Table 6 lists 25 topics with their respective numbers and proportions. 
The distribution of topics in the dataset is clearly long-tailed. Society is the most frequent topic, with a proportion exceeding 31%, while 11 topics each account for less than 1%.

Challenge Distribution. In this section, we analyze the scale of the observed challenges described in Figure 5. We randomly select 1,000 headlines and manually annotate which challenge type each belongs to. The annotation result shows that 27% of the sampled headlines have redundant expressions, 12% of them suffer from irregular grammar issues, and 11% of them require domain knowledge for sentence understanding.

## D Event Representation Analysis

As mentioned in Section 2.3, our baseline model suffers from the issue of representation space degradation, leading to poor generation and retrieval performance. In Figure 7, we present 2-dimensional t-SNE visualizations of the representation space obtained from queries without and with contrastive learning. As demonstrated in the left part of the figure, without contrastive learning, the model encodes queries into a smaller space with more collapses. As a comparison, the addition of contrastive learning expands the embedding space with better alignment and uniformity.

| Topic | Count | Proportion |
|-----------------------|---------|--------------|
| 社会 (Society) | 12985 | 31.40% |
| 财经 (Finance) | 5877 | 14.21% |
| 体育 (Sports) | 4504 | 10.89% |
| 时事 (Current Events) | 4078 | 9.86% |
| 科技 (Technology) | 2698 | 6.52% |
| 娱乐 (Entertainment) | 1903 | 4.60% |
| 教育 (Education) | 1415 | 3.42% |
| 天气 (Weather) | 1307 | 3.16% |
| 汽车 (Cars) | 1255 | 3.03% |
| 军事 (Military) | 738 | 1.78% |
| 房产 (Real Estate) | 614 | 1.48% |
| 旅游 (Travel) | 597 | 1.44% |
| 三农 (Agriculture) | 546 | 1.32% |
| 文化 (Culture) | 435 | 1.05% |
| 游戏 (Games) | 365 | 0.88% |
| 综艺 (Variety Shows) | 363 | 0.88% |
| 电影 (Movies) | 324 | 0.78% |
| 健康 (Health) | 314 | 0.76% |
| 电视剧 (TV Series) | 210 | 0.51% |
| 历史 (History) | 202 | 0.49% |
| 科学 (Science) | 150 | 0.36% |
| 音乐 (Music) | 150 | 0.36% |
| 生活 (Life) | 116 | 0.28% |
| 美食 (Food) | 103 | 0.25% |
| 情感 (Sentiment) | 102 | 0.25% |
| Total | 41351 | 100% |

Table 6: The topics in Title2EventPhrase with their numbers and proportions of instances.

## E Event Filter Rules

In this stage, we introduce the filtering criteria for headlines in the coarse filter phase. Five types of headlines are excluded, namely: non-event headlines, headlines missing important components of the event, headlines containing multiple events, irregular syntax, and interrogative sentences. We provide examples of each of these five types in Figure 6.
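The filter itself is rule-based; as an illustration only, the sketch below shows how a coarse filter of this kind could be wired up in Python. The regular expressions and thresholds are stand-in assumptions for exposition, not the exact rules used in our pipeline.

```python
import re

# Illustrative coarse filter for the five excluded headline types; the patterns
# and thresholds below are assumptions, not the production rules.
INTERROGATIVE = re.compile(r"[?？]\s*$|如何|为啥|为什么|怎么")
MULTI_EVENT_SEP = re.compile(r"[;；|｜]")

def filter_reason(title: str):
    """Return the name of the matched filter type, or None if the title is kept."""
    if INTERROGATIVE.search(title):
        return "interrogative_sentence"
    if len(MULTI_EVENT_SEP.findall(title)) >= 2:   # e.g. news roundups
        return "numerous_events"
    if len(title) < 6:                             # too short to be a complete event
        return "irregular_syntax"
    # Non-event headlines and missing event components are harder to detect with
    # surface patterns and would need further rules here.
    return None

print(filter_reason("孩子近视危害大,如何才能有效预防?"))  # interrogative_sentence
```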
gong-etal-2023-transferable
Transferable and Efficient: Unifying Dynamic Multi-Domain Product Categorization
https://aclanthology.org/2023.acl-industry.46
As e-commerce platforms develop different business lines, a special but challenging product categorization scenario emerges, where there are multiple domain-specific category taxonomies and each of them evolves dynamically over time. In order to unify the categorization process and ensure efficiency, we propose a two-stage taxonomy-agnostic framework that relies solely on calculating the semantic relatedness between product titles and category names in the vector space. To further enhance domain transferability and better exploit cross-domain data, we design two plug-in modules: a heuristic mapping scorer and a pretrained contrastive ranking module with the help of meta concepts, which represent keyword knowledge shared across domains. Comprehensive offline experiments show that our method outperforms strong baselines on three dynamic multi-domain product categorization (DMPC) tasks, and online experiments reconfirm its efficacy with a 5% increase on seasonal purchase revenue. Related datasets will be released.
# Transferable And Efficient: Unifying Dynamic Multi-Domain Product Categorization Shansan Gong∗1, Zelin Zhou∗2, Shuo Wang3**, Fengjiao Chen**4, Xiujie Song5, Xuezhi Cao6, Yunsen Xian7, **Kenny Q. Zhu**†8 1,2,5,8Shanghai Jiao Tong University, Shanghai, China 3,4,6,7Meituan Inc. China 1gongshansan@sjtu.edu.cn, 2ze-lin@sjtu.edu.cn, 3wangshuo81@meituan.com, 4chenfengjiao02@meituan.com, 5xiujiesong@sjtu.edu.cn, 6caoxuezhi@meituan.com, 7xianyunsen@meituan.com, 8kzhu@cs.sjtu.edu.cn ## Abstract As e-commerce platforms develop different business lines, a special but challenging product categorization scenario emerges, where there are multiple domain-specific category taxonomies and each of them evolves dynamically over time. In order to unify the categorization process and ensure efficiency, we propose a two-stage taxonomy-agnostic framework that relies solely on calculating the semantic relatedness between product titles and category names in the vector space. To further enhance domain transferability and better exploit cross-domain data, we design two plugin modules: a heuristic mapping scorer and a pretrained contrastive ranking module with the help of "meta concepts", which represent keyword knowledge shared across domains. Comprehensive offline experiments show that our method outperforms strong baselines on three dynamic multi-domain product categorization (DMPC) tasks, and online experiments reconfirm its efficacy with a 5% increase on seasonal purchase revenue. Related datasets are released1. ## 1 Introduction Product categorization (Ding et al., 2002) is a specialized text classification task that classifies product titles or descriptions into a pre-defined taxonomy of categories. As businesses expand, major e-commerce platforms (*e.g.,* Amazon and Alibaba) are encountering increasingly complex scenarios, where there are multiple domain-specific category taxonomies and each of them evolves dynamically ∗ Equal contribution. † Corresponding author. 1Datasets associated with this paper are released at https: //github.com/ze-lin/TaLR. over time. We define it as Dynamic Multi-Domain Product C*ategorization* (DMPC), which simultaneously considers the following **multi-domain** taxonomies and **taxonomy evolving** challenges. In real-world businesses, e-commerce platforms usually maintain **multiple** business lines with relatively independent **taxonomies**. These business lines are catering for different customer demands or specific domain applications, for example, one provides express delivery while another specializes in low-price bargains. Multiple business domains correspond to different category taxonomy structures, with various depths and distinct literal expressions of category names. Conventional industry approaches train separate classifiers on each domain, which under-utilize the cross-domain data and their shared knowledge while raising the expenses of maintenance. Meanwhile, with the expansion and reorganization of businesses, each category **taxonomy** keeps **evolving** as well, where old categories might be deleted or integrated and new categories are possibly added. Conventional multi-class classifiers need to be re-trained every time taxonomy changes, which disrupts the operation and further diminishes the maintenance efficiency. To mitigate **taxonomy evolving** issues, intuitively, we reformulate the canonical text classification problem as a text relevance matching problem. 
Moreover, to ensure both accuracy and online efficiency, we propose a two-stage *Taxonomy-agnostic* Label Retrieval (TaLR) framework (see Figure 1) capturing semantic similarity between a product title and its corresponding category names in the vector space, where candidate categories are first retrieved and then reranked for the final prediction. This reformulation is especially beneficial for evolving and newly added (zero-shot) categories as textual semantics are incorporated. To leverage cross-domain data in **multi-domain** taxonomies challenge, we devise two plug-in modules in both stages to enhance TaLR's domain transferability. These modules are centralized with "meta concepts" that appear in the product titles, which represent fine-grained keyword knowledge shared across domains (Appendix B). As is shown in Figure 1, in the *retrieval* stage, besides the dense retrieval based on vector similarity (dense scorer), the statistical co-occurrence probability between meta concepts and category labels are exploited as well (mapping scorer) . In the *reranking* stage, meta concepts are incorporated with category labels as supervision signals for the contrastive pretraining of the scoring model (matching scorer). While the mapping scorer complements the superficial semantic dense retrieval with cross-domain commonsense knowledge, contrastive pretraining directly optimizes the vector space improving interdomain alignment and uniformity. Details are given in Section 2.2 and Section 2.4. In summary, our contributions are: (1) For the first time, we address the DMPC problem and release the corresponding multi-domain datasets in Chinese. (2) We propose a unified TaLR framework equipped with two well-designed plug-in modules empowered with meta concepts, which is robust and efficient against the two challenges in DMPC problem. (3) Offline experiments on our annotated real-world DMPC datasets show TaLR's ability to effectively transfer knowledge across domains and generalize to new domains. The unified TaLR outperforms three separately-trained SOTA classifiers by 1.65% on overall accuracy and maintains satisfactory accuracy in taxonomy evolving conditions. Online experiments reaffirm its efficacy with a 5% increase in seasonal purchase revenue. ## 2 Proposed Framework ∀i ∈ [1, n] domains, given a taxonomy Gi with depth of di and m leaf nodes, the path from root to leaf node forms the text which is regarded as hierarchical category label y (j) i(j ∈ [1, m]). For an input product title Xi along with its meta concept labels {λk}, our task is to output the correct category label it belongs to. Note that only one leaf category will be the correct answer. Detailed task formulation refers to Appendix A. Our TaLR framework is structured into two stages: *Retrieval* and *Reranking*, as illustrated in Figure 1. We will zoom into each component of this framework. ## 2.1 Dense Scorer We first train a dual-encoder to represent both categories and product titles in the vector space. Negative sampling In the original text classification problem, each product title Xi has exactly one positive category label yi. However in our reformulation, text relevance matching models need negative category labels during training, otherwise they would not succesfully converge. For each (Xi, yi) pair, we prepare to construct the training examples S from multiple taxonomies by sampling (N − 1) negative categories. Instead of randomly chosen, "hard" negative examples are more informative for better convergence. 
Inspired by the teacher-student paradigm (Hinton et al., 2015), we adopt a teacher classifier-based sampling strategy to sample strong negative categories for dual-encoder learning. For each training dataset Si of taxonomy Gi, we split it in a k-fold manner, then take turns to train k BERT classifiers, each on $\frac{k-1}{k}$ of the data, with the remaining $\frac{1}{k}$ as the development set. The m-class classifiers are optimized with the typical m-class cross-entropy loss. Each of the k classifiers then infers the (N − 1) most probable but incorrect category labels on its corresponding development set, and these results, together with the ground-truth positive labels, constitute the point-wise training set for the following dual-encoder training.

Dual-encoder training We adopt a siamese network architecture (Reimers and Gurevych, 2019) where the encoder extracts fixed-sized embeddings of product titles Xi and category names yˆi, denoted as ux and vy respectively. To better align the embeddings ux and vy, we use Circle Loss (Sun et al., 2020), which allows each similarity score to optimize at its own pace. We simplify it as:

$${\mathcal{L}}=\log\left(1+\sum_{S}e^{\alpha(\cos({\bf u}_{x}^{+},{\bf v}_{y}^{+})-\cos({\bf u}_{x}^{-},{\bf v}_{y}^{-}))}\right),\qquad(1)$$

where α is a hyper-parameter, and +, − denote the positive and negative samples in S respectively. We also compare this loss function with other alternatives in Appendix D.1.

Candidates retrieval Given an incoming product title embedding, we can quickly derive relevant category label embeddings with a one-vs-all similarity measurement such as cosine similarity, implemented with Approximate Nearest Neighbor (ANN) techniques for time efficiency. Based on this, we can readily collect the top-k candidate list Cvec.

## 2.2 Mapping Scorer

The dense scorer usually prioritizes semantic relatedness of literal expressions, neglecting the commonsense co-occurrence probability that lies within cross-domain training data. For example, "Sunrise Roses 500g" is often recognized as [Flower] by semantic matching algorithms; however, it is actually a variety of [Grape]. Therefore we introduce a mapping scorer in the *Retrieval* stage capturing such commonsense knowledge to complement the dense-retrieved candidates.

Mapping algorithm The shared meta concept set M is constructed by hybrid NER-related techniques. Details are in Appendix B. We can regard a "meta concept" as a kind of keyword knowledge because meta concepts usually contain very concrete and accurate information. In our released datasets, one product title X is tagged with one or more meta concepts Λ = {λ1, λ2, ..., λk} from M. For example, "Haagen-Dazs Red Wine Flavor Ice Cream" is tagged with ⟨RedWine⟩, ⟨Icecream⟩, ⟨HaagenDazs⟩ as meta concepts. Given a product title X and a category label yˆ, our heuristic strategy establishes the X → yˆ mapping as the conditional co-occurrence probability P(ˆy|X). First, we model this conditional probability for each category yˆ as:

$$P(\hat{y}|X)=P(\hat{y}|\lambda_{1},\lambda_{2},...,\lambda_{k})=\max_{1\leq i\leq k}P(\hat{y}|\lambda_{i}).\tag{2}$$

Here we aggregate $P(\hat{y}|\lambda_{1},\lambda_{2},...,\lambda_{k})$ with the maximum value among the multiple λi referring to the same yˆ. Each P(ˆy|λi) is collected from the training data distribution:

$$P({\hat{y}}|\lambda_{i})={\frac{P({\hat{y}},\lambda_{i})}{P(\lambda_{i})}}={\frac{\nu({\hat{y}},\lambda_{i})}{\nu(\lambda_{i})}},\qquad(3)$$

where ν denotes the frequency in training data.
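As a concrete illustration of Eqs. (2)–(3), the following minimal Python sketch collects the co-occurrence counts ν from training data and scores a category for a new title by the maximum conditional probability over its tagged meta concepts; the data structures and names are illustrative rather than the production implementation.

```python
from collections import Counter, defaultdict

# Count co-occurrences nu(y, lambda) and nu(lambda) from the training data.
concept_count = Counter()               # nu(lambda)
joint_count = defaultdict(Counter)      # nu(y, lambda), indexed by lambda

def fit(training_data):
    """training_data: iterable of (meta concept set, category label) pairs."""
    for concepts, category in training_data:
        for lam in concepts:
            concept_count[lam] += 1
            joint_count[lam][category] += 1

def mapping_score(concepts, category):
    """P(y | X) = max_i P(y | lambda_i), as in Eqs. (2)-(3)."""
    probs = [joint_count[lam][category] / concept_count[lam]
             for lam in concepts if concept_count[lam] > 0]
    return max(probs, default=0.0)

fit([({"towel gourd", "soy bean"}, "Vegetables mixture")])
print(mapping_score({"towel gourd"}, "Vegetables mixture"))  # 1.0 on this toy data
```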
Then, we collect candidate list C*rule* by empirically setting a threshold of P(ˆy|X) > 0.5 to ensure both retrieval quantity and quality. Candidates merging When retrieved candidates from the dense scorer and mapping scorer are prepared, we need to combine the two lists of candidates. Our concept-first strategy prioritizes candidates from C*rule*. It puts at most 10 top candidates (usually less than 10) from C*rule* into C*union*, then keeps filling it with top candidates from Cvec until its size reaches 10. ## 2.3 Matching Scorer To further measure the relatedness of product titles and category names with mutual interactions, we train a matching scorer in *Reranking* stage. During training, given a product title X and its retrieved candidates Cunion = {c1, c2*, ...c*l}, we concatenate tokenized sequences of X and each of these ci ∈ C*union* with a [SEP] token as the input to BERT-based model. The ground truth label is 1 if ci is the correct candidate otherwise 0. Optimization is followed with binary cross-entropy loss. During inference, the model gives similarity scores for each (X, ci) pair, and the candidate with the highest similarity score would be our predicted category. ## 2.4 Contrastive Pretraining For **multi-domain taxonomies**, category classes vary from one taxonomy to another. Despite the assorted expressions of category classes among different domain taxonomies, we find their finegrained concepts of products seldom shift. While previous retrieval stage pursues the recall of candidates and focuses less on class discrimination, the cross-encoder in *Reranking* stage possibly suffers from indistinguishable categories. Inspired by the supervised derivative of contrastive learning (Wang et al., 2021), we restrict the formation of positive pairs ensuring they not only share the same category class with X but also have at least one meta concept in common with X, otherwise they would be considered negative. This setting is tailored for the **multi-domain taxonomies** challenge pursuing cross-domain alignment and uniformity, where inter-concept semantics are tied closer and intraconcept ones are further distinguished. Given a product title X with label y and tagged meta concept set Λ, we encode X as vector u and group encoded product titles as positive vector samples {v y,Λ1 1, v y,Λ2 2*, ...,* v y,ΛD D }, which are labeled with the same y and share an overlapped concept set Λd with Λ. We use BERT as the encoder backbone and tune its parameters with group contrast loss: $$\begin{split}\mathcal{L}^{G C}&=-\frac{1}{D}\sum_{d=1}^{D}\log\frac{\exp(\mathbf{u}\cdot\mathbf{v}_{d}^{y,\Lambda_{d}}/\tau)}{P o s+N e g}.\\ &P o s=\sum_{d=1}^{D}\exp(\mathbf{u}\cdot\mathbf{v}_{d}^{y,\Lambda_{d}}/\tau),\\ &N e g=\sum_{y^{\prime},\Lambda^{\prime}}^{D}\exp(\mathbf{u}\cdot\mathbf{v}^{y^{\prime},\Lambda^{\prime}}/\tau),\end{split}\tag{4}$$ where y′,Λ′ denotes samples with either different label y′ with y or non-overlapping meta concept set Λ′ with Λ. The BERT model after contrastive pretraining can be used in matching scorer during Reranking stage in Section 2.3. ## 3 Dynamic Multi-Domain Datasets 3.1 Static Multi-Domain Datasets We select 3 business lines from our e-commerce platform: QuickDelivery (QD, targeting fast delivery), BargainHunters (BH, targeting low price), FreshGrocery (FG, targeting fresh vegetables). 
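To make the group contrast objective in Eq. (4) concrete, the following PyTorch sketch computes the loss for a single anchor title against an encoded batch; the positive rule (same label and at least one shared meta concept) follows Section 2.4, while the tensor shapes, names, and toy inputs are illustrative rather than our actual training code.

```python
import torch

def group_contrast_loss(u, batch_vecs, batch_labels, batch_concepts,
                        label, concepts, tau=0.1):
    """Eq. (4) for a single anchor vector u: positives share the category label
    AND at least one meta concept with the anchor; the rest are negatives."""
    sims = torch.exp(batch_vecs @ u / tau)                 # exp(u . v / tau)
    is_pos = torch.tensor([(l == label) and bool(set(c) & set(concepts))
                           for l, c in zip(batch_labels, batch_concepts)])
    pos, neg = sims[is_pos], sims[~is_pos]
    denom = pos.sum() + neg.sum()
    return -torch.log(pos / denom).mean()

# Toy usage with random 4-dimensional encodings standing in for BERT outputs.
u = torch.randn(4)
batch = torch.randn(3, 4)
loss = group_contrast_loss(
    u, batch,
    batch_labels=["Lettuce", "Lettuce", "Cola"],
    batch_concepts=[{"lettuce"}, {"cabbage"}, {"cola"}],
    label="Lettuce", concepts={"lettuce"},
)
print(loss)
```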
These data instances are collected from the realworld business, where the product titles are mostly assigned by sellers from the platform and the category labels stem from three pre-defined business taxonomies. We recruit experienced annotators to manually classify the products Xiinto assorted categories yi, with 1% sampling to guarantee annotation accuracy. Data groups with over 95% accuracy in quality checking are used in our final datasets. Meanwhile, Xiis tagged with concepts {λk} following the Appendix B. | Dataset | # training | # test | # classes | depth | |---------------------------------------------|--------------|----------|-------------|---------| | QD | 99k | 11k | 1987 | 3 | | BH | 31k | 5k | 2632 | 4 | | FG | 28k | 3k | 1065 | 4 | | 1 # classes: the total distinct leaf nodes. | | | | | Table 1: Statistics for multi-domain datasets Statistics of three datasets are listed in Table 1. Each sample in the three datasets has exactly one ground truth category. Varied class numbers and hierarchy depths of different taxonomies pose bigger challenges for multi-domain knowledge sharing. ## 3.2 Dynamic Test Set To verify the generalizability of TaLR on zeroshot scenarios, we further construct two **taxonomy evolving** derivatives of the QD test set. (ii) QD-*integrate*: During a production business adjustment, 127 classes in the original taxonomy are integrated or replaced by similar categories, which affects 1371 samples in the original test set to form this subset. (i) QD-*divide*: 22 category nodes from the original QD taxonomy are divided into two or more nodes. 495 samples in the original test set suffer from this evolution. ## 3.3 Meta Concept Set Beyond the category labels, each product title is associated with a list of meta concepts from a set M including over 30k entities covering the most fine-grained concepts in product titles. The tagging step X → {λ1, λ2*, ...λ*k} is accomplished by an industrial Label Tagging System that exploits heterogeneous approaches. Details are in Appendix B. ## 4 Experiments In this section, we discuss experimental results under static multi-domain settings and dynamic (taxonomy evolving & new taxonomy) conditions. A brief comparison of time efficiency between TaLR and simple *Reranking* is also included. ## 4.1 Baselines We implement several baseline methods based on single-domain, multi-domain, and dynamic scenarios. To ensure fair comparisons, we also experiment concatenating product titles with meta concept text as input for some competitive baselines. Note that all the strong baselines are practicable in our online production environment, and those with unbearable space or time complexity are not considered. Works holding different assumptions (e.g. necessitate multi-label or not support Chinese) with us are not considered either. Finally, we deploy and benchmark the following common baselines: Flat Classifier **TF-IDF&LR** represents product titles with TF-IDF weighted dense vectors, and executes classification with Logistic Regression. FastText (Bojanowski et al., 2017) is a common baseline adopted in online product categorization challenges. **BERT** classifier is used as the strong baseline in both single-domain and multi-domain (trained with multi-task learning) settings. Hierarchical Classifier **HMCN** (Wehrmann et al., 2018) and **HiMatch** (Chen et al., 2021) leverage hierarchical information from taxonomy to guide the classification process, and we use BERT as a text encoder in both approaches. 
**XRLinear** and **XR-Transformer** are two derivatives of PECOS (Yu et al., 2022) framework for extreme classification, which achieve competitive performance in most open product categorization datasets. ## 4.2 Experimental Setup We mix up training data from three datasets to train the unified TaLR. We use **accuracy** score as the evaluation metric to meet real-world business demands. Accuracy mathematically equals to **MicroF1** score in a single-label multi-class classification problem. More details can be found in Appendix C. ## 4.3 Overall Results The overall accuracy score is shown in Table 2. Since traditional single-domain approaches cannot tackle **multi-domain taxonomies**, we train **separate** models on each business respectively. Among methods targeting one static taxonomy, hierarchical classifiers generally perform better than flat classifiers with the aid of taxonomy structure information. However, because these methods can only handle one static taxonomy, they not only suffer from efforts to maintain different models for | Methods | Overall | QD | BH | FG | |------------------------------------------------|-----------|--------|--------|--------| | Separate models | | | | | | TF-IDF&LR | 69.51 | 69.93 | 68.23 | 69.95 | | FastText | 74.62 | 74.01 | 71.68 | 80.82 | | BERT | 83.49 | 84.82 | 79.93∗ | 84.23 | | BERT+♠ | 83.01 | 86.45 | 79.02 | 75.32 | | HMCN-F-BERT | 82.14 | 83.72 | 77.09 | 84.25 | | HiMatch-BERT | 84.08 | 86.12 | 77.38 | 84.19 | | HiMatch-BERT+♠ | 83.75 | 87.26∗ | 77.26 | 78.53 | | XR-Linear | 76.57 | 75.27 | 77.91 | 78.95 | | XR-Transformer | 84.58∗ | 79.74 | 79.23 | 84.58∗ | | XR-Transformer+♠ | 81.45 | 85.34 | 74.59 | 78.53 | | (a): TaLR | 85.90 | 87.88 | 81.92 | 85.09 | | Unified model | | | | | | BERT Multi-task | 68.00 | 80.27 | 50.28 | 44.29 | | BERT Multi-task+♠ | 67.79 | 81.37 | 49.77 | 39.83 | | (b): TaLR | 86.23 | 88.16 | 82.48 | 85.25 | | TaLR ablation test | | | | | | (c): (b) (-) CL | 85.26 | 86.83 | 81.75 | 85.13 | | (d): (b) (-) MS | 84.63 | 86.59 | 80.13 | 84.71 | | (e): (b) (-) CL&MS | 82.82 | 83.85 | 79.15 | 84.71 | | (f): (b) (-) CL&MS +♠ | 84.38 | 87.43 | 80.64 | 79.77 | | ♠ concatenate concept text after product title | | | | | each domain but also fail to leverage multi-domain data. While the multi-task BERT is able to train and infer on three domains within one model, it performs even worse than TF-IDF&LR on BH and FG. One possible reason is that the multi-task approach relies heavily on the weighting of losses, and if the task-specific training data distribution varies significantly, one task might dominate the joint distribution and constrain the optimization of other tasks. Simply concatenating meta concepts to titles does not always take effect, and this is expected since concatenated tokens implicitly contribute to the joint representation of one sentence (e.g. self-attention in transformer), which proves to be inferior to our explicit usage of statistical mapping and contrastive grouping. For our proposed framework TaLR, variant (a) already outperforms other baselines in separate model training paradigm, while TaLR (b) further achieves even higher accuracy when jointly trained on the mixed multi-domain data where the multitask BERT fails, verifying TaLR's efficacy on multi-domain taxonomies. We assume that the measurement of semantic relatedness is transferable on either business domain, and their shared knowledge could be integrated via contrastive pretraining as well. 
Therefore, the unified training helps improving the performance on each respective domain instead of conflicting each other as BERT multi-task does. From the ablation tests, we can observe the effectiveness of the two plug-in modules in our TaLR framework from row (c) and (d), and the contribution of these two modules are orthogonal. Removing the mapping scorer in (d) drops the overall accuracy most, while removing contrastive pretraining in (c) results in its inferior performance than (a) as well. This indicates both modules are indispensable for the enhancement of exploiting multi-domain data. From (e)→(f), concatenating meta concepts somehow improves the overall performance, but (f) still loses to (b). This reaffirms our above assumption that our usage of meta concepts is superior to simple concatenation. To further analyze the effects of the two plug-in modules, we conduct Case Study in Appendix D.2. ## 4.4 Time Consumption To meet online deployment requirement, the inference time consumption (seconds cost for each instance) needs to be considered. We compare TaLR with the vanilla model (single BERT cross-encoder) on the three datasets in Figure 2. On the one hand, the inference speed of TaLR is much faster (4 times faster for FG and 10 times faster for BH) than vanilla model owing to the *Retrieval* stage. On the other hand, the time consumption per item of TaLR increases almost linearly along with the number of classes, while for vanilla model the overhead grows more sharply, revealing the time efficiency of TaLR when the class number scales up. ![5_image_0.png](5_image_0.png) ## 4.5 Dynamic Test Set Experiment In order to evaluate the ability of our framework on **taxonomy evolving** challenge, we use TaLR trained on the original multi-domain datasets to directly infer on two dynamic test sets. The vanilla BERT without any finetuning is a naive baseline BERT-matching. The BERT fine-tuned with fewshot new data (1%) is a strong baseline **BERTfew-shot**. Here "before" denotes the subset from the original test set and "after" denotes the subset with the same product titles but evolved categories. From the listed accuracy "before" and "after" taxonomy evolving in Table 3, we can conclude that TaLR sustains satisfactory accuracy compared with its strong counterpart trained with 1% extra data. Table 3: The accuracy on two dynamic test sets. ∆ is the change of accuracy after evolving. The best "after" scores and least drop ∆ are bolded. | Methods | QD−divide | QD−integrate | | | |---------------|--------------------|--------------------|-------|-------------| | Before After | ∆ | Before After | ∆ | | | BERT-matching | 6.66 | 11.95 +5.29 | 13.39 | 2.23 -11.16 | | BERT-few-shot | 90.51 43.54 -46.96 | 86.79 50.16 -36.53 | | | | TaLR | 90.11 69.71 -20.40 | 85.20 81.48 -3.72 | | | ## 4.6 Extrapolating Results On New Taxonomy Consider an extreme **taxonomy evolving** condition when a new business line emerges, a robust model is supposed to categorize incoming products based on the brand-new taxonomy. | Methods | QD | BH | FG | |--------------------|-------|-------|-------| | BERT-matching | 9.00 | 11.23 | 4.03 | | BERT-few-shot | 43.29 | 35.19 | 29.80 | | TaLR | 60.57 | 65.45 | 62.69 | | (-) contrastive | 56.71 | 64.99 | 60.79 | | (-) mapping scorer | 56.25 | 64.65 | 59.29 | Table 4: The accuracy of TaLR on the new taxonomy. We deploy our experiments in a zero-shot manner, where we take turns to train TaLR on either two business data and test its performance on the remaining business. 
TaLR still outperforms BERTfew-shot. This shows TaLR's preeminent transferability with the reformulation of textual semantic matching, which helps improving user experience in this cold-start scenario. Each component in the ablation test verifies its effectiveness as well. ## 4.7 Online Experiment We conduct online experiments on one downstream task where TaLR's domain-independent category recognition ability helps transfer user preferences from other domains and contributes to a more accurate recommendation. When TaLR is incorporated in the recommendation system, customer seasonal purchase revenue increases significantly over 5%. ## 5 Conclusion To tackle DMPC problem, we propose a unified TaLR framework with two plug-in modules empowered with cross-domain meta concepts. With comprehensive experiments on real-world DMPC datasets, results under both multi-domain and taxonomy evolving conditions exhibit the transferability and maintenance efficiency of TaLR. ## Acknowledgments This work was generously supported by the Meituan-SJTU joint research grant. ## References Rohit Babbar and Bernhard Schölkopf. 2017. Dismec: Distributed sparse machines for extreme multi-label classification. In *Proceedings of the tenth ACM international conference on web search and data mining*, pages 721–729. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. *Transactions of the association for computational linguistics*, 5:135–146. Haibin Chen, Qianli Ma, Zhenxi Lin, and Jiangyue Yan. 2021. Hierarchy-aware label semantics matching network for hierarchical text classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4370–4379. Hongshen Chen, Jiashu Zhao, and Dawei Yin. 2019. Fine-grained product categorization in e-commerce. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, pages 2349–2352. Pradipto Das, Yandi Xia, Aaron Levine, Giuseppe Di Fabbrizio, and Ankur Datta. 2016. Large-scale taxonomy categorization for noisy product listings. In 2016 IEEE international conference on big data (big data), pages 3885–3894. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ying Ding, M Korotkiy, Borys Omelayenko, V Kartseva, V Zykov, Michel Klein, Ellen Schulten, and Dieter Fensel. 2002. Goldenbullet: Automated classification of product data in e-commerce. In Proceedings of the 5th international conference on business information systems. Citeseer. Xin Luna Dong, Xiang He, Andrey Kan, Xian Li, Yan Liang, Jun Ma, Yifan Ethan Xu, Chenwei Zhang, Tong Zhao, Gabriel Blanco Saldana, et al. 2020. Autoknow: Self-driving knowledge collection for products of thousands of types. In *Proceedings of the 26th* ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2724–2734. Jung-Woo Ha, Hyuna Pyo, and Jeonghee Kim. 2016. Large-scale item categorization in e-commerce using multiple recurrent neural networks. 
In *Proceedings* of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 107–115. Idan Hasson, Slava Novgorodov, Gilad Fuchs, and Yoni Acriche. 2021. Category recognition in e-commerce using sequence-to-sequence hierarchical classification. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 902–905. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. *arXiv* preprint arXiv:1503.02531. Abhinandan Krishnan and Abilash Amarthaluri. 2019. Large scale product categorization using structured and unstructured attributes. arXiv preprint arXiv:1903.04254. Younghun Lee, Lei Chen, Yandi Xia, and Wei-Te Chen. 2020. Cbb-fe, camembert and bit feature extraction for multimodal product classification and retrieval. Maggie Yundi Li, Stanley Kok, and Liling Tan. 2018. Don't classify, translate: Multi-level e-commerce product categorization via machine translation. arXiv preprint arXiv:1812.05774. Xusheng Luo, Luxin Liu, Yonghua Yang, Le Bo, Yuanpeng Cao, Jinghang Wu, Qiang Li, Keping Yang, and Kenny Q Zhu. 2020. Alicoco: Alibaba e-commerce cognitive concept net. In Proceedings of the 2020 ACM SIGMOD international conference on management of data, pages 313–327. Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, Thierry Artieres, George Paliouras, Eric Gaussier, Ion Androutsopoulos, Massih-Reza Amini, and Patrick Galinari. 2015. Lshtc: A benchmark for large-scale text classification. *arXiv preprint arXiv:1503.08581*. Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. 2017. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2001–2010. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992. Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, and Yichen Wei. 2020. Circle loss: A unified perspective of pair similarity optimization. In *Proceedings of the IEEE/CVF* Conference on Computer Vision and Pattern Recognition, pages 6398–6407. Ximei Wang, Jinghan Gao, Mingsheng Long, and Jianmin Wang. 2021. Self-tuning for data-efficient deep learning. In International Conference on Machine Learning, pages 10738–10748. PMLR. Jonatas Wehrmann, Ricardo Cerri, and Rodrigo Barros. 2018. Hierarchical multi-label classification networks. In *International conference on machine learning*, pages 5075–5084. PMLR. Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. 2020. Product knowledge graph embedding for e-commerce. In *Proceedings of* the 13th international conference on web search and data mining, pages 672–680. Hu Xu, Bing Liu, Lei Shu, and P Yu. 2019. Open-world learning and application to product classification. In The World Wide Web Conference, pages 3413–3419. Li Yang, E Shijia, Shiyao Xu, and Yang Xiang. 2020. Bert with dynamic masked softmax and pseudo labeling for hierarchical product classification. In MWPD@ ISWC. Hsiang-Fu Yu, Chia-Hua Ho, Prakash Arunachalam, Manas Somaiya, and Chih-Jen Lin. 2012. Product title classification versus text classification. *Csie. Ntu.* Edu. Tw, pages 1–25. Hsiang-Fu Yu, Kai Zhong, Jiong Zhang, Wei-Cheng Chang, and Inderjit S Dhillon. 2022. 
Pecos: Prediction for enormous and correlated output spaces. Journal of Machine Learning Research. ## A Dynamic Multi-Domain Problem We clarify the DMPC problem as follows. Given a set G of n relatively independent label taxonomies at initial time t0 $$\{G_{1},G_{2},G_{3},...,G_{n}\},$$ each of which correlates with a domain-specific product categorization task. The taxonomy of product categories Giis tree-structured with depth di, and it contains mi category leaf nodes: $$\{y_{i}^{(1)},y_{i}^{(2)},y_{i}^{(3)},...,y_{i}^{(m_{i})}\}\subseteq G_{i}.$$ Part of the nodes is enrolling in a dynamic trending. As time goes t>0, the category node y (a) iof a certain product might be **divided** into two categories y (a1) iand y (a2) ior **integrated** with another category y (b) ito form y (ab) i. The **emergence** of a new category node y (m+1) i with corresponding product titles is also possible. In addition, an emerging taxonomy Gn+1 may sprout when a new business is cultivated. A single product categorization task on taxonomy Gi (i = 1) is a traditional classification task, in which the training data and test data are organized in tuples $${\mathcal{S}}=\{(X_{i}^{(1)},y_{i}^{(1)}),...,(X_{i}^{(m_{i})},y_{i}^{(m_{i})}),...\}.$$ Each Xiin S represents the title of one product and yiis the corresponding class node in the categorical taxonomy tree. In DMPC problem, when i ≥ 2, to unify the training data and the inference procedure cross Gi, we reformulate classification as the matching between Xi and yi. While traditional classifiers regard yi as meaningless label ordinals, we instead treat them along the path of top-bottom taxonomy nodes equivalently with the product title as free text. In this reformulated text semantic similarity matching task, the data samples are: $$\begin{array}{c}{{{\mathcal{S}}_{i}=\{(X_{i}^{(1)},y_{i}^{(1)},Y_{1}^{(1)}),...,}}}\\ {{{\qquad(X_{i}^{(m_{i})},y_{i}^{(m_{i})},Y_{1}^{(m_{i})}),...\},}}\\ {{{\qquad\qquad\mathcal{S}=\{{\mathcal{S}}_{1},{\mathcal{S}}_{2},...,{\mathcal{S}}_{i}\},}}\end{array}$$ where Y1 ∈ {0, 1} is an indicator denoting whether the text pair Xi and yiis matched (Y1 = 1) or not (Y1 = 0). ## B Details Of Meta Concept Set Construction And Tagging Meta concepts are fine-grained tags that have been widely used in industrial knowledge graphs (e.g. Amazon (Dong et al., 2020), Walmart (Xu et al., 2020), Alibaba (Luo et al., 2020)). Details of meta concept construction and tagging are listed below. We will use "concept" instead of "meta concept" for brevity. ## B.1 Concept Set Construction Concept set construction is conducted in a semisupervised manner. First, we use a domain-specific named entity recognition (NER) model to mine fine-grained entities from product titles. These entities are complemented with queries from search engine and cumulated knowledge from experts to form the initial pool of concepts. Based on that, we use a naive classifier to pick-up high-quality concepts with high search frequency or broad product coverage. Then, manual annotation is performed on the remaining 20k entities, achieving 95% accuracy in quality checking. Finally, we collect over 30k concepts covering the most fine-grained knowledge in product titles. ## B.2 Concept Tagging Concept tagging is comprised of two stages. The first stage is concept recall. In order to find candidate concepts for each product, we adopt three approaches: NER, knowledge reduction and semantic recall. First, seed candidates are found by NER on product titles. 
Second, we extend the seed candidates with their neighbors in commonsense knowledge graphs, such as synonyms and brand-concept relations (some brands sell specific products). Third, for those products without seed candidates, we use Sentence-BERT to retrieve concepts by textual semantics. The low-quality concepts recalled will be filtered in the next stage, i.e., concept classification.

The second stage is concept classification. Based on the candidates collected in the previous stage, we train a binary classifier to filter out concepts that attain a low relevance score with product titles. The classifier is fine-tuned with knowledge integration, which will be introduced in our subsequent work.

## C Implementation Details

For fair comparisons, all the "BERT" models mentioned in this work are Google BERT-base pretrained on a Chinese corpus. For the TF-IDF and FastText baselines, we use the jieba toolkit (https://github.com/fxsjy/jieba) to generate Chinese word segments and tune hyper-parameters on each dataset respectively. BERT-related models are initialized from the pretrained Google BERT-base (Chinese) and tuned with a 2e-5 learning rate, 512 batch size, and 32 sequence length, except that the cross-encoder BERT in the *Reranking* stage extends the sequence length to 64. All BERT-related approaches are trained for 40 epochs, while the multi-task baseline is trained for at most 120 epochs.

| Dataset | Product title | Concepts | Taxonomy path |
|---|---|---|---|
| QD | Towel gourd 1 pcs & soy bean 150g | {⟨soy bean⟩, ⟨towel gourd⟩} | Vegetable → Mixed Product → Vegetables mixture |
| BH | Fresh bamboo shoots (dig from mountains) | {⟨bamboo shoot⟩, ⟨native product⟩} | Vegetable/Fruit → Vegetable → Tubers → Bamboo |
| FG | Butter leaf lettuce 100g | {⟨lettuce⟩, ⟨butter lettuce⟩} | Fresh → Vegetable → Leaf → Lettuce |

Table 5: Examples from the three datasets.

## D Experiment Analysis

## D.1 Details Of Dense Scorer

In the *Retrieval* stage, it is essential to recall the potential candidates as accurately as possible; the later *Reranking* stage can never make the right prediction if the true label is not covered by the retrieved candidates. Hence we use HR@k to measure the retrieval performance. We compare several alternatives of the loss function for the dense scorer, specifically, different approaches for (ux, vy) similarity measurement. The loss used in Eq. (1) is termed the Cosent loss (the name is after https://kexue.fm/archives/8847). Besides this, one straightforward method is to compute the cosine similarity between the vectors ux and vy and optimize the model using a vanilla binary cross entropy loss:

$$\delta=\cos(\mathbf{u}_{x},\mathbf{v}_{y})={\frac{\langle\mathbf{u}_{x},\mathbf{v}_{y}\rangle}{||\mathbf{u}_{x}||\,||\mathbf{v}_{y}||}},\qquad(5)$$

$${\mathcal{L}}^{bce}=-\sum_{S}Y_{1}\log(\delta)+(1-Y_{1})\log(1-\delta),\qquad(6)$$

where Y1 is the binary label.

Figure 3: The retrieval results of the vector-based unit over different loss functions.

For the sake of the alignment between the embeddings ux and vy, we also refer to the classification objective function in SBERT (Reimers and Gurevych, 2019).
$$\mathbf{o}=\mathrm{softmax}(W_{o}(\mathbf{u}_{x},\mathbf{v}_{y},|\mathbf{u}_{x}-\mathbf{v}_{y}|)),\qquad(7)$$

where $W_{o}\in\mathbb{R}^{3l\times2}$ is the weighting parameter that projects the concatenation of ux, vy, and the element-wise difference |ux − vy| to binary classes, and l is the dimension of the embeddings. The second element of the vector o can be regarded as the probability of whether ux and vy are matched, hence we can adopt the same binary cross entropy loss function as in Eq. (6) to optimize the model.

In Figure 3, as k grows, the HR score increases, and the model trained with the Cosent loss is consistently better than the others, while the model trained with the SBERT loss performs unstably, sometimes worse than the Cosine loss. One explanation is that, compared with the Cosine and SBERT losses, the Cosent loss focuses on positive-versus-negative pairwise optimization, which means the model only cares about the relative order of the prediction results instead of their specific values. This setting brings a consistent recall of candidates.

## D.2 Case Study

For the product "New Farmer® walnut flavored sunflower seed 160g", which should be categorized into [Sunflower Seed], TaLR without contrastive learning wrongly assigns it to [Walnuts]; when the concept "sunflower seed" is incorporated in contrastive pretraining, TaLR is capable of distinguishing the right answer. For the product "CELSIUS® cola flavored 300ml", which should belong to [Sports Drink], TaLR without the mapping scorer wrongly labels it as [Cola]; when the concept "CELSIUS®" is engaged in retrieval, TaLR finally sorts out the right answer.

## E Related Work

## E.1 Large-Scale Taxonomy Classification

Text classification with a large hierarchy of classes has attracted attention and been studied with the evolution of the LSHTC Challenge (Partalas et al., 2015), which includes over 12,000 categories. DiSMEC (Babbar and Schölkopf, 2017) devises one-vs-all linear classifiers with explicit control of model size. HMCN (Wehrmann et al., 2018) discovers hierarchical information by jointly optimizing local and global loss functions. HiMatch (Chen et al., 2021) encodes the complex structure of the label hierarchy as well as the input text, to capture the text-label semantic relationship. PECOS (Yu et al., 2022) ranks output classes with hierarchical clustering, and the semantics of categories are incorporated as well. These methods assume that label taxonomies are stable, neglecting that taxonomies evolve gradually.

## E.2 Product Categorization

Product categorization is a hierarchical text classification task assigning categories to product instances. Early approaches are centered on text features and basic machine learning algorithms. (Ding et al., 2002) introduces KNN and Naive Bayes to the field of product categorization, while (Yu et al., 2012) conducts experiments using TF-IDF with an SVM classifier. Restricted by the bag-of-words paradigm, these methods lack the ability to represent text with contextual semantics. Neural-network-based methods have prevailed since 2013. (Ha et al., 2016) proposes an end-to-end deep learning model composed of multiple RNNs and fully-connected layers, which exhibits a significant advantage over traditional bag-of-words approaches. (Das et al., 2016) conducts a comparison between linear, CNN, and gradient boosting models. Multi-CNN and multi-LSTM are applied in (Krishnan and Amarthaluri, 2019), combining structured and unstructured attributes of products.
(Chen et al., 2019) utilizes several convolutional approaches for a better representation of words, and they further adopt literal matching between product content and category label texts to deal with new categories. However, they do not consider the more complicated category *divide* situation. Recent studies follow the pretrain-finetune paradigm since the great success of BERT (Devlin et al., 2019). (Lee et al., 2020) uses the Camem- BERT pretrained on French corpus as text encoder in SIGIR 2020 Challenge. (Yang et al., 2020) exploits BERT with a dynamic masking strategy and achieves first place on the 2020 Semantic Web Challenge. Apart from end-to-end classification approaches, (Hasson et al., 2021; Li et al., 2018) adopts hierarchical Seq2seq models for product categorization. Nonetheless, their models need to be re-trained whenever category taxonomy vocabulary changes. ## E.3 Incremental Learning Class incremental learning resolves the problem that the classes increase progressively in a stream, and the classifier should continuously learn the incoming classes while sustaining accuracy on the seen classes as well. iCaRL (Rebuffi et al., 2017) is proposed to circumvent the catastrophic forgetting problem by storing the information of previous classes. (Xu et al., 2019) extends incremental learning as an open-world learning problem, where a model rejects unseen classes instead of assigning them into the seen class vocabulary. However, in their open-world learning setting, other taxonomy evolving situations (like split and merge) and multidomain taxonomies are not taken into consideration.
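As a supplement to the loss comparison in Appendix D.1, the sketch below spells out the two alternatives to the Cosent loss in code: the cosine-similarity head trained with binary cross entropy (Eqs. 5–6) and the SBERT-style softmax head (Eq. 7). The tensor sizes and random inputs are illustrative only, and the shift of the cosine score into [0, 1] is a practical adjustment assumed here so that the BCE term is well defined.

```python
import torch
import torch.nn.functional as F

l = 8                                              # embedding dimension (illustrative)
u, v = torch.randn(5, l), torch.randn(5, l)        # five (title, category) pairs
y = torch.randint(0, 2, (5,)).float()              # 1 = matched, 0 = not matched

# Eqs. (5)-(6): cosine similarity delta trained with binary cross entropy
# (shifted to [0, 1] here so the log terms are well defined).
delta = (F.cosine_similarity(u, v, dim=-1) + 1) / 2
bce_loss = F.binary_cross_entropy(delta, y)

# Eq. (7): SBERT-style head over the concatenation (u, v, |u - v|); with two
# classes, cross entropy on the softmax output matches the binary objective.
W_o = torch.nn.Linear(3 * l, 2)
logits = W_o(torch.cat([u, v, (u - v).abs()], dim=-1))
sbert_loss = F.cross_entropy(logits, y.long())

print(bce_loss.item(), sbert_loss.item())
```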
darm-etal-2023-discosqa
{DISCOSQA}: A Knowledge Base Question Answering System for Space Debris based on Program Induction
https://aclanthology.org/2023.acl-industry.47
Space program agencies execute complex satellite operations that need to be supported by the technical knowledge contained in their extensive information systems. Knowledge Base (KB) databases are an effective way of storing and accessing such information to scale. In this work we present a system, developed for the European Space Agency, that can answer complex natural language queries, to support engineers in accessing the information contained in a KB that models the orbital space debris environment. Our system is based on a pipeline which first generates a program sketch from a natural language question, then specializes the sketch into a concrete query program with mentions of entities, attributes and relations, and finally executes the program against the database. This pipeline decomposition approach enables us to train the system by leveraging out-of-domain data and semi-synthetic data generated by GPT-3, thus reducing overfitting and shortcut learning even with limited amount of in-domain training data.
# Knowledge Base Question Answering For Space Debris Queries Paul Darm1 Paul.Darm@Strath.Ac.Uk Antonio Valerio Miceli-Barone2 Amiceli@Ec.Ac.Uk Shay B. Cohen2 scohen@inf.ed.ac.uk Annalisa Riccardi1 annalisa.riccardi@strath.ac.uk 1 Mechanical and Aerospace Engineering, University of Strathclyde 2 School of Informatics, University of Edinburgh University ## Abstract Space agencies execute complex satellite operations that need to be supported by the technical knowledge contained in their extensive information systems. Knowledge bases (KB) are an effective way of storing and accessing such information at scale. In this work we present a system, developed for the European Space Agency (ESA), that can answer complex natural language queries, to support engineers in accessing the information contained in a KB that models the orbital space debris environment. Our system is based on a pipeline which first generates a sequence of basic database operations, called a sketch, from a natural language question, then specializes the sketch into a concrete query program with mentions of entities, attributes and relations, and finally executes the program against the database. This pipeline decomposition approach enables us to train the system by leveraging out-of-domain data and semi-synthetic data generated by GPT-3, thus reducing overfitting and shortcut learning even with limited amount of in-domain training data. Our code can be found at https://github.com/ PaulDrm/DISCOSQA. ## 1 Introduction Space debris are uncontrolled artificial objects in space that are left in orbit during either normal operations or due to malfunctions. Collisions involving space debris can generate secondary debris which can cause more collisions, potentially leading to a runaway effect known as Kessler Syndrome (Kessler and Cour-Palais, 1978; Kessler et al., 2010), which in the worst-case scenario could make large ranges of orbits unusable for space operations for multiple generations. Therefore, space agencies have established departments responsible for cataloging the space debris environment, which can be used for space traffic management, collision avoidance, re-entry ![0_image_0.png](0_image_0.png) Figure 1: Two representative queries for DISCOS and their decomposition according to the Program Induction method. analysis, and raising public awareness of the problem.1 The European Space Agency (ESA) has catalogued over 40,000 trackable and unidentified objects in its DISCOS (Database and Information System Characterizing Objects in Space) Knowledge Base (KB) (Klinkrad, 1991; Flohrer et al., 2013). Accessing this information efficiently often requires technical expertise in query languages and familiarity with the specific schema of DISCOS, which may fall outside the skillset of the engineers searching for relevant information in the database. In this project, we developed a question answering system for the DISCOS KB. This deployed prototype enables ESA engineers to query the database with complex natural language (English) questions, improving their ability to make informed decisions regarding space debris. 
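To give a concrete picture of the decomposition in Figure 1, the sketch below represents a KoPL-style program as a list of (function, argument) steps and executes it against a toy dictionary KB. The KB contents, attribute values, and reduced function set are illustrative stand-ins, not the actual DISCOS schema or the full KoPL engine.

```python
# Toy KB and a minimal executor for a KoPL-style program: each step is a
# (function, argument) pair, and the output of one step feeds the next.
TOY_KB = {
    "Saturn V": {"concept": "LaunchVehicle", "mass": 2_970_000},
    "Hubble Space Telescope": {"concept": "Spacecraft", "mass": 11_110},
}

def find(_, name):                 # Find(entity name)
    return TOY_KB[name]

def query_attr(entity, attr):      # QueryAttr(attribute)
    return entity[attr]

FUNCTIONS = {"Find": find, "QueryAttr": query_attr}

def execute(program):
    """program: list of (function name, argument) steps, e.g. the
    decomposition Find('Saturn V') -> QueryAttr('mass')."""
    result = None
    for func_name, arg in program:
        result = FUNCTIONS[func_name](result, arg)
    return result

print(execute([("Find", "Saturn V"), ("QueryAttr", "mass")]))  # 2970000
```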
Recent breakthroughs in open question answering have been achieved using large language models that have been fine-tuned as dialog assistants, such as ChatGPT.2 These models, however, are 1https://tinyurl.com/44tc24d4 2https://chat.openai.com 487 black boxes that store knowledge implicitly in their parameters which makes it hard to guarantee that their answers are supported by explicit evidence, understand their failures and update them when the supporting facts change. In contrast, parsing a question into a query program and then executing it on an explicit KB is guaranteed to provide a factual correct answer provided the KB and query program are correct. Our approach is particularly useful for applications such as satellite operations where accuracy and reliability are critical. The main challenge for this project was that no training set or example questions were available for the DISCOS KB. This issue, combined with the large amount of unique and diverse objects in the database, precluded a straightforward application of common supervised learning techniques. Although possible strategies for solving this task, such as direct semantic parsing of the query with seq2seq models, were identified in the literature, they suffer from problems with compositional generalization (Herzig et al., 2021; Furrer et al., 2020). Furthermore, very little work has been done on generalizing to KB element components that were never seen during training (Cao et al., 2022b; Das et al., 2021a; Huang et al., 2021). To overcome these challenges, we apply and adapt a methodology from the literature called Program Transfer (Cao et al., 2022b) to significantly reduce the required dataset for adequate generalization over the complete DISCOS KB. This is a two-step approach. For each user query first a program sketch is predicted, consisting of a sequence of query functions where the arguments are either variables or placeholders, then the representation of the query is compared to the representations of the KB entities, in order to fill out the placeholders with arguments relevant to the query text. The underlying query language of this approach is called Knowledge-orientedProgramming-Language (KoPL) for which two representative example questions are shown together with their decomposition into sketch and arguments in Figure 1. We also conduct a data collection study with domain experts, and we apply a data augmentation pipeline leveraging the underlying ontology of the KB and prompting a Large Language Model (LLM) to generate automatically more training examples. The architecture was retrained with different domain-specific LMs and baselines to determine the benefits of using a domain-specific pretrained encoder. 
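The second step of the pipeline, specializing a sketch by filling each placeholder with a KB element, can be pictured as nearest-neighbor search between the model's representation at that placeholder and precomputed embeddings of KB entities, attributes, and relations. The snippet below is only a schematic illustration of that matching step, assuming the embeddings are already available; the names and toy vectors are placeholders, not the deployed implementation.

```python
import torch

def fill_argument(placeholder_vec, kb_names, kb_embeddings):
    """Pick the KB element whose embedding is most similar (cosine similarity)
    to the model's representation of a placeholder argument."""
    sims = torch.nn.functional.cosine_similarity(
        placeholder_vec.unsqueeze(0), kb_embeddings, dim=-1
    )
    return kb_names[int(sims.argmax())]

# Toy example with random vectors standing in for learned representations.
kb_names = ["Saturn V", "Hubble Space Telescope", "Envisat"]
kb_embeddings = torch.randn(3, 8)
placeholder_vec = kb_embeddings[0] + 0.01 * torch.randn(8)  # close to "Saturn V"
print(fill_argument(placeholder_vec, kb_names, kb_embeddings))  # "Saturn V"
```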
The main contributions of this paper are: - Applying and adapting a methodology described in the literature for complex knowledge base question answering (CKBQA) on a novel industry-relevant database, with a large and dynamic set of unique entities; - Collecting a new dataset on this database from domain-experts and leveraging the in-context learning capability of LLMs for data augmentation on it; - Evaluating the use of domain-specific LMs as different encoders on our curated dataset; - Demonstrating the effectiveness of the approach by achieving comparable results to generalpurpose LLMs ## 2 Related Work Low-resource CKBQA Pre-trained language models have demonstrated state-of-the-art performance in semantic parsing for complex question answering on KBs where the same logic compounds are contained in both the training and validation sets (Furrer et al., 2020). However, they struggle with compositional generalization, where the "compounds" (combinations) of components are diverse between training and validation, even if all components (entity, relation, program filters) have been seen during training (Herzig et al., 2021). Das et al. (2021b) explored retrieval-based methods to pick the top n similar examples from the training set and use them as additional input for the prediction. In theory, this would make it possible to reason over changes on the KB by only adding new examples to the training set without the need of retraining the whole model. Another approach is adapting the architecture of language models to incorporate the structure of a KB directly for the prediction. For example, Huang et al. (2021) ranked FreeBase KB entities by using an EleasticSearch search API to identify these entities. When generating the query program, instead of entities a special token is predicted, which in the post-processing step get replaced by the top ranked entity identified by ElasticSearch. Although, achieving good results, it is unclear how this would translate to queries with multiple entities and also has the typical limitations of ElasticSearch. Another method is the Program Induction and Program Transfer method, where a sequence of *functions*, or a *sketch*, is generated from the input query. A single function here stands for a basic logic operation on the KB. The premise is that the sketch is mostly dependent on the formulation of the input query and less dependent on the KB, therefore the training on a source domain can transfer to inference on a target domain. In a second step, the particular inputs that each function in the sketch receives are identified from the elements of the KB through comparing their representations with the one of the model at the specific function. During training, the goal is to create sophisticated representations for the components of the KB as well as for the query that can generalize to components which were not seen during training (Cao et al., 2022b). Domain-specific Language Models Using selfsupervised pre-trained language models is the defacto standard approach in modern natural language processing (NLP). These models are trained on large volumes of text, learning representations that can generalize over natural language variations and capture long-term dependencies between the input tokens. During fine-tuning, these learned features and representations commonly lead to improved results on the downstream task. 
While the interplay between the amount of in-domain data, model capacity and training regime is complex (Zhao et al., 2022), as a general rule, training these models on in-domain, task-related text improves performance on the downstream task (Maheshwari et al., 2021; Berquand et al., 2021; Arnold et al., 2022; Joshi et al., 2023). An alternative approach involves modifying the pre-training objective according to the domain. In the context of question answering over tabular data, it was explored how a language model could function during pre-training as a SQL-query executor, predicting the results of an automatically created SQL query on the corresponding concatenated table, to elicit an understanding of the underlying dependencies in tables. This approach resulted in improved performance on related downstream tasks (Liu et al., 2022). For KBs, an architecture called Kepler was tested that, together with the standard masked-language-modelling loss, also minimizes a contrastive loss on related KB triples, where both the correct triples and randomly perturbed incorrect triples are scored, and the loss penalizes scoring the perturbed triples higher than the correct ones (Wang et al., 2021a).

## 3 Dataset Collection And Use

We used two sources of data for the study. The first one is the original dataset which first introduced the KoPL format for general-domain knowledge-base question answering (described in §3.1). The second one is for our particular domain, with information about objects in space (§3.2). We also describe in §3.3 how we collected further training data for fine-tuning a language model, and how we augmented this dataset with new question-answer pairs (§3.4).

## 3.1 The KQA Pro Dataset

The KQA Pro dataset (Cao et al., 2022a) is a large-scale dataset for complex question answering over a knowledge base. It contains over 120,000 diverse questions for a subset of entities from the Freebase KB and their associated relations, attributes and concepts. The reasoning process to arrive at a solution is provided in the form of the Knowledge-oriented Programming Language (KoPL), which was designed specifically for this dataset. The question-program pairs were automatically generated by randomly sampling the extracted KB and using novel compositional templates to create a canonical form of the question and associated answer. To increase linguistic diversity, these questions were then paraphrased and checked by Amazon Mechanical Turk workers (Cao et al., 2022a).

## 3.2 The DISCOS Database

The ESA Database and Information System Characterising Objects in Space (DISCOS)3 is a regularly updated source of information about launches, launch vehicles, objects, spacecraft registration numbers, mission-specific information (mass, mission objective, operator), and, most importantly, orbital and fragmentation data histories for all trackable as well as unclassified objects. With over 40,000 objects currently being tracked, DISCOS provides rich information for ESA offices monitoring and managing space debris, collision avoidance, re-entry analyses, and contingency support. Other actors, such as research institutes, government organisations, or industrial companies from ESA Member States, can apply for an account to access the information provided by DISCOS free of charge.

3https://discosweb.esoc.esa.int/
A comparison between the DISCOS KB and the KB used for the KQA Pro dataset can be seen in Table 1.

| Dataset | #Entities | #Relations | #Concepts |
|-----------|-------------|--------------|-------------|
| KQA Pro | 16,960 | 363 | 794 |
| DISCOS | 73,354 | 32 | 39 |

Table 1: Entity, relation, and concept counts for the KQA Pro and DISCOS KBs

## 3.3 Data Collection

As there was no question-program-answer training set available on the DISCOS KB, potential queries had to be collected from domain experts via a simple user interface. The interface allowed domain experts to input queries of interest, along with their username and feedback (see Appendix E). Based on the domain experts' feedback, a manually labeled baseline dataset of around 102 question-KoPL program pairs was created.

## 3.4 Data Augmentation

To extend the limited baseline dataset and increase the diversity of potential queries, we augmented the dataset by creating paraphrases of the questions, which has been shown to add robustness to question answering systems (Fader et al., 2013; Narayan et al., 2016). For each unique program sketch, the schema of the ontology was used to alter the arguments of the single functions in that sketch. For example, for the program Find('Saturn V') → *QueryAttr('mass')*, the concept of the *Find* function argument (*Saturn V*) was identified as *LaunchVehicle*. Subsequently, the ontology was queried for other entities of the same concept, and also for associated attributes to substitute the argument of the *QueryAttr* function. In order to generate appropriate questions for each "augmented" program, we used the few-shot and in-context learning capabilities of GPT-3 (Brown et al., 2020). A prompt was curated, consisting of question-program pairs from the manually labeled dataset, so that GPT-3 could generate a question for an unlabelled augmented program. The sampled examples included all programs from the manually labeled dataset with the same sketch as the augmented one, as well as examples with the same relation type, which proved effective for generating a correct question. Additionally, an instruction section was added to the prompt, consisting mainly of a list of expansions for acronyms commonly used in the KB's ontology. This ensured that acronyms were expanded correctly and reduced the likelihood of hallucinations by the LLM for these acronyms. Examples of generated questions with their corresponding programs can be found in Appendix D. The prompt schema can be found in Appendix A. The benefits of an automatically generated dataset include cost-effective data sample generation, while also ensuring a balanced distribution of complex and simple queries as well as of common (e.g., Saturn V, Hubble Space Telescope) and uncommon arguments (e.g., L-186, PSLV-Q). With the use of LLMs, the generated questions also already exhibit slightly different semantic and syntactic structure, as the generation process is inherently statistical and can be adjusted with parameters such as the temperature. LLMs can also leverage their stored world knowledge; for example, we observed the entity name "天舟四号" being automatically translated into the English designation "Tianzhou 4". More examples can be seen in Appendix D. However, it is important to note that the question generation is not foolproof and could be further optimized, e.g., through additional prompt engineering or a subsequent data cleaning procedure.
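A minimal sketch of this augmentation loop is given below. It is illustrative only: the ontology lookup helpers (`concept_of`, `entities_of_concept`, `attributes_of_concept`), the prompt-building details, and the stubbed LLM call are hypothetical names standing in for the actual implementation.

```python
import random

# Excerpt of the acronym list from Appendix A; `ontology` helper names are hypothetical.
ACRONYMS = {"GEO": "Geostationary Orbit", "LEO": "Low Earth Orbit"}

def augment_program(program, ontology, n_variants=10):
    """Create variants of a labeled KoPL program by swapping arguments with
    other KB elements of the same concept; the sketch (function sequence) stays fixed."""
    variants = []
    for _ in range(n_variants):
        concept, new_program = None, []
        for step in program:
            step = dict(step)
            if step["function"] == "Find" and step["inputs"]:
                concept = ontology.concept_of(step["inputs"][0])             # e.g. "LaunchVehicle"
                step["inputs"] = [random.choice(ontology.entities_of_concept(concept))]
            elif step["function"] == "QueryAttr" and concept is not None:
                step["inputs"] = [random.choice(ontology.attributes_of_concept(concept))]
            new_program.append(step)
        variants.append(new_program)
    return variants

def build_prompt(manual_pairs, augmented_program):
    """Few-shot prompt: acronym instructions, same-sketch labeled examples, then the new program."""
    header = "Acronyms are expanded with the following dictionary: " + str(ACRONYMS) + "\n\n"
    examples = "".join(f"Program: {prog}\nQuestion: {quest}\n\n" for prog, quest in manual_pairs)
    return header + examples + f"Program: {augmented_program}\nQuestion:"

# The prompt is then sent to the LLM (code-davinci-002 in §5.1, temperature 0.75);
# the completion is used as the natural language question for the augmented program.
```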
## 4 Methodology And Model Architecture

We describe in §4.1 the problem that our model aims to solve. In §4.2, we describe the modifications we applied to the methodology of Cao et al. (2022b).

## 4.1 Problem Definition

The task is defined as follows: given a natural language question Q, we want to predict a program y that traverses the knowledge base K and produces an answer A for Q. This means:

$$A=y(Q,K),\quad K=\{E,R,C,A\},$$

where E, R, C, A respectively represent the mutually disjoint sets of entities, relations, concepts and attributes in K. More specifically, given the set of entities Et seen in the training set, the task is to generalise to the set Ev of entities unseen during training, with E = Et ∪ Ev and Et ∩ Ev = ∅. Therefore, we apply a program induction and transfer methodology, predicting for Q a program, i.e., a tuple of actions,

$$y(Q,K)=(o^{1}(arg^{1}),\ldots,o^{t}(arg^{t})),$$
$$o^{i}\in O,\ \forall i=1,\ldots,t;\quad arg^{i}\in E\cup R\cup C\cup A,$$

where O is a set of predefined basic functions executable on the KB, each of them taking as inputs elements from one of the disjoint sets of arguments from the KB. In the first step, the sketch [o^1, . . . , o^t] ∈ O^t is generated by encoding the question Q with a pre-trained language model and using its representation as the starting point for a GRU-based decoder with an attention mechanism (Bahdanau et al., 2015; Cao et al., 2022b). The input arguments for each function o^1, . . . , o^t are chosen in a second step by calculating the probability between the encoded representation of the i-th candidate R^t_i in the KB at position t in the sequence and the representation of the decoder at position t, h^t. The probability is computed as

$$p(arg^{t}|R_{i}^{t},h^{t})=\operatorname{softmax}(R_{i}^{t},h^{t}),$$

and is followed by choosing the most likely candidate as the input argument for the function. We refer the reader to Cao et al. (2022b) for the details. In addition to the standard BERT model, we also train RoBERTa-based (Liu et al., 2019) domain-specific encoders (see Appendix B).

## 4.2 Enhancements To Program Transfer

First, the prediction of entities, relations, and operations ({<, >, =}) had to be re-implemented, as it was not available in the associated source code.4 The standard methodology of comparing the representations to all inputs during training posed another challenge for entities: during a training step, the gradients for all of them would need to be stored, which exceeded the available memory on the system. As a result, instead of comparing the representation at each sequence position to all entities in the KB, only the subset of entities in the current batch are compared with each other.5 In addition, random samples from the complete entity set were selected and added to the batch, to also provide a training signal for entities not occurring in the training set.

4https://github.com/thu-keg/programtransfer
5Note that these entities are part of the training set entities, hence are all known a priori.

More formally, let X^(j) be the j-th batch from the training set, E be the overall set of entities, and X^(j)_E be the set of entities in the training batch. At each training batch j, we use the set of entities e^(j) to compare the probability of their representations against the candidate input argument in batch j, as defined by:

$$e^{(j)}=X_{E}^{(j)}\cup f^{(j)},\qquad(1)$$

where f^(j) = {e1, e2, . . . , en} are n entities randomly sampled from E without replacement. Furthermore, for extracting time values from queries we use the SUTime library (Chang and Manning, 2012), whereas for extracting numerical patterns we use a simple regex schema.
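To make Eq. (1) and the softmax-based argument selection above concrete, the PyTorch-style sketch below scores one decoder state against a candidate set built from the in-batch entities plus randomly sampled extra entities. The tensor names, dimensions, and the dot-product similarity are illustrative assumptions rather than the exact implementation of Cao et al. (2022b).

```python
# Illustrative sketch of the batched candidate construction e^(j) = X_E^(j) ∪ f^(j)
# and of scoring argument candidates against the decoder state h^t.
import random
import torch
import torch.nn.functional as F

def build_candidate_set(batch_entity_ids, all_entity_ids, n_random=64):
    """Union of the entities appearing in the current batch and n_random extra entities."""
    extra = random.sample(list(set(all_entity_ids) - set(batch_entity_ids)), n_random)
    return sorted(set(batch_entity_ids)) + extra

def score_arguments(decoder_state, candidate_embeddings):
    """Softmax over candidates of a similarity between the decoder state h^t
    and each candidate representation R_i^t (dot product is an assumption)."""
    logits = candidate_embeddings @ decoder_state           # shape: (num_candidates,)
    return F.softmax(logits, dim=-1)

# Toy usage with random embeddings:
hidden = 768
h_t = torch.randn(hidden)                                   # decoder state at position t
candidates = build_candidate_set([3, 17, 42], range(1000), n_random=8)
R_t = torch.randn(len(candidates), hidden)                  # embeddings of candidate entities
probs = score_arguments(h_t, R_t)
predicted_entity = candidates[int(probs.argmax())]
```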
In addition, for parsing the predicted inputs into a form that can be read by the KoPL engine, a parser function had to be written, as well as a method for assigning the dependencies between the single basic functions. Other small adaptations of the original approach include the addition of a normalization layer between the linear layers for the prediction of the single arguments, as well as masking out padding tokens from the loss calculation of the function generation component. A schematic overview of the pipeline can be seen in Appendix C.

## 5 Experiments

We next describe our dataset (§5.1), the way we trained our models (§5.2), and our results (§5.3).

## 5.1 The DISCOS-Questions-Programs Dataset

The creation of the training and validation datasets involved the following steps. First, all the unique sequences of functions were extracted from the manually labeled dataset. Then, ten augmented programs were generated for each unique program in the manually labeled dataset by substituting the inputs and generating questions as described in the data augmentation methodology. It is noteworthy that the entities included in the manually labeled dataset were not considered as candidates for the data augmentation. The questions for the augmented examples were generated using the large language model *code-davinci-002*.6 Through trial and error, a temperature of 0.75 was chosen for generation, as it produced diverse examples while still capturing the meaning of the associated KoPL program. Finally, the augmented dataset was split into training and validation datasets, where the original manually labeled dataset and 0.05% of randomly sampled examples from the augmented dataset were used as the validation set, and the rest of the augmented dataset was used for training. In a subsequent filtering step, programs that appeared in the validation set were filtered out of the training dataset. The resulting DISCOS-Questions-Programs (DQP) dataset consists of 905 samples for training and 151 samples for validation.

6https://platform.openai.com/playground

## 5.2 Model Training

Preliminary results indicated that training directly on the DQP dataset did not lead to convergence in entity prediction. We therefore proceeded to first train on the out-of-domain original KQA Pro dataset and then further train on the DQP dataset. The hyperparameters for pre-training on the KQA Pro dataset were adopted from the original paper. For training on the DQP dataset, the hyperparameters were left unchanged, except for an adapted learning rate for the decoder, which was set to 10−4 instead of 10−3. For the experiments, different domain-adapted models were used as the encoder and compared to each other as well as to baseline models. For more information about the domain-adapted models, see Appendix B.

## 5.3 Results

The analysis of the different models was challenging, as multiple components (functions, entities, relations, etc.) need to be predicted to arrive at the full program that can be run against the KB. Therefore, the analysis was divided into two parts. Firstly, the accuracy at the lowest validation loss for each component was compared separately between all the different trained models, providing an overview of each model's best predictive performance for each component. The results can be seen in Table 2.
| Accuracy | BERT | Kepler | RoBERTa | CR |
|------------|-------|--------|---------|-------|
| Function | 0.797 | 0.812 | 0.796 | 0.826 |
| Entity | 0.874 | 0.887 | 0.907 | 0.927 |
| Attribute | 0.955 | 0.948 | 0.948 | 0.948 |
| Relation | 0.938 | 0.983 | 0.983 | 1 |
| Concept | 0.872 | 0.896 | 0.92 | 0.89 |
| Operations | 1 | 1 | 1 | 1 |

Table 2: Accuracy at the lowest validation loss for each component, compared separately for each model. CR stands for CosmicRoBERTa.

Although no model consistently outperformed the others on all components, CosmicRoBERTa (Berquand et al., 2021) achieved the highest performance on four out of six accuracies. The validation loss curves showed that during training, the validation accuracy can drop for one component while it rises for another. The validation loss curves can be found in Appendix F. For deployment of a single model, it is necessary to identify a checkpoint where the model predicts accurately across all components. To obtain a more holistic view of the performance, the models were also compared by summing up the normalized validation losses for each component. The results of summing up the normalized validation losses with equal weights are shown in Table 3. On average, the RoBERTa-base model has the lowest validation loss, followed by the Kepler model. However, no model consistently outperforms the others in terms of prediction accuracy.

From an application perspective, the most important metrics are the accuracy in predicting functions and entities. To obtain the correct answer, the most crucial step is to predict the correct sequence of functions, and for multi-hop queries, it is essential to identify the correct starting entity. In addition, the number of unique entities is orders of magnitude higher than the number of unique attributes or relations in the KB, which makes identifying the right entity more difficult. Among the models considered, CosmicRoBERTa stands out as the model that performs well both in predicting functions and in predicting entities. Unfortunately, the Kepler model only sporadically showed improvements over RoBERTa-base. The reason for this could be its very limited pre-training corpus, which was significantly smaller than the one used for CosmicRoBERTa. As a result, for the purpose of deploying a single model, the decision was made to choose CosmicRoBERTa.

| Accuracy | BERT | Kepler | RoBERTa | CR |
|----------------------|-------|--------|---------|-------|
| Min. sum valid. loss | 0.21 | 0.171 | 0.11 | 0.252 |
| Function | 0.79 | 0.734 | 0.775 | 0.789 |
| Entity | 0.874 | 0.887 | 0.894 | 0.927 |
| Attribute | 0.935 | 0.948 | 0.941 | 0.915 |
| Relation | 0.978 | 0.95 | 0.969 | 0.969 |
| Concept | 0.853 | 0.8841 | 0.908 | 0.927 |
| Operations | 1 | 0.97 | 1 | 0.94 |

Table 3: Accuracy at the lowest validation loss summed over all components. The best score for each component is highlighted in **bold**, second best in *italics*. CR stands for CosmicRoBERTa.

The results of the prediction are overall very impressive, with high accuracy scores over all components. It is especially worth highlighting that, of the over 40,000 entities in the database, only 400 appear in the training and validation sets, and only 1 of the 71 entities in the validation set also appears in the training set. Despite this, some models achieve an accuracy of over 90% in predicting entities, demonstrating their strong ability to generalize to entities not seen during training, which was a critical user requirement for our system. In addition, we benchmarked our method against recently released general-purpose models such as ChatGPT7 and GPT-4 (OpenAI, 2023).

7https://openai.com/blog/chatgpt
For each model, training set examples were randomly added to the prompt until the respective model's context limit was reached. The models were then prompted to generate the right program for a question from the validation set. Our methodology predicts the complete program correctly with an overall accuracy of 48%, which is higher than ChatGPT's performance of around 25% and comparable to GPT-4's performance of around 50%. A detailed comparison can be found in Table 4. This further demonstrates the efficiency of our methodology, as it can be run at a fraction of the necessary compute, as well as locally on consumer-grade hardware.

| | CosmicRoBERTa | GPT-4 | ChatGPT-3.5 |
|------------|---------------|-------|-------------|
| Functions | 0.79 | 0.79 | 0.516 |
| Entities | 0.93 | 0.86 | 0.41 |
| Relations | 0.97 | 0.87 | 0.32 |
| Concepts | 0.93 | 0.82 | 0.61 |
| Attributes | 0.92 | 0.84 | 0.72 |
| Overall | 0.48 | 0.5 | 0.25 |

Table 4: Component-wise and overall accuracy of predicting the correct program, compared to general-purpose LLMs.

## 6 Conclusion

We developed a system for ESA to address the challenge of answering complex natural language questions on their DISCOS KB. The main obstacles included a lack of training data, the diversity and regular updates of the database entries, and the need for an economically feasible solution. The program transfer methodology for complex KBQA was selected for its potential to reduce the amount of required training samples through transfer learning and its capability to potentially generalize to examples which were never seen during training. A data collection study was conducted with domain experts, which was then used to augment the data by leveraging the underlying ontology of the KB and prompting a large language model to generate fitting questions. The architecture was retrained with different domain-specific models and baselines to determine the benefits of using a domain-specific pre-trained encoder. Although the results were mixed, the best performance was achieved by CosmicRoBERTa, a model pre-trained on a space-domain corpus. With an accuracy of over 90% in predicting the right entity on the validation set, over the vast pool of candidate entities, the method demonstrates its strong ability to predict the correct input arguments for unseen examples. This is further demonstrated in the comparison with general-purpose models such as ChatGPT or GPT-4, where our method achieved competitive results. Therefore, this approach has the potential to be extended to other databases and query languages in the future, especially in scenarios where there are few to no training examples.

## Limitations

The study has several clear limitations. Firstly, the training and validation datasets used in this study are still relatively small. A larger dataset would give more robust results for comparing different encoders. Additionally, the experiments were only conducted on the ability to generalize to unseen entities, and not on the ability to generalize to unseen sketch types, which is also of key importance when addressing low-resource CKBQA. Moreover, the methodology used in this study relies on annotated question-program pairs, which are expensive to collect. Learning only from question-answer pairs, or even from questions with an indicated difficulty based on whether the model was able to answer them, would make data collection easier.
While the models achieve high accuracy on most of the knowledge base components, overfitting can occur at different stages during training, leading to high accuracy for one component at one training step but poor accuracy for another component at another step. In the future, revising the training procedure or the model setup may help address this issue. ## Ethics Statement In any safety-critical context like spacecraft operations, there is an inherent risk associated with the use of automatic methods supporting human operators. The transparency of the predicted programs could mitigate this issue as it allows even for an engineer with limited knowledge about the underlying query method to interpret the program to some degree. In any case, the developed systems might support human analysis and decision making by decreasing workload, but cannot replace it. As mentioned before the DISCOS KB can be accessed after creating a user account. We plan on publishing the created question-program paris and trained models online in accordance with ESA's guidelines. ## Acknowledgments This study was funded by the European Space Agency (ESA) under the Intelligent Operational Assistant project with contract No.AO/110776/21/D/SR. This work was also supported by the UKRI Research Node on Trustworthy Autonomous Systems Governance and Regulation (grant EP/V026607/1) which provided additional funding for Antonio Valerio Miceli-Barone. We thank Evridiki Ntagiou (ESA) and Paulo Leitao (Vision Space) for their roles as project leader and industrial partner, respectively. We are grateful for Callum Wilson's help (University of Strathclyde) in creating the GraphDB KG from the original DISCOSweb API, which was used to create the DISCOS KB. We also thank Sean Memery and Maria Luque Anguita (University of Edinburgh) for their contribution to the KEPLER data collection, and everyone who took part in the data collection for the first iteration of the DISCOSWebquestions dataset. We also thank Marcio Fonseca and the reviewers for useful feedback on the paper. ## References Alexandre Arnold, Fares Ernez, Catherine Kobus, and Marion-Cécile Martin. 2022. Extraction d'informations de messages aéronautiques (NOTAMs) avec des modèles de langue appris de façon auto-supervisée (information extraction from aeronautical messages ). In *Actes de la 29e Conférence* sur le Traitement Automatique des Langues Naturelles. Volume 1 : conférence principale, pages 335–344, Avignon, France. ATALA. gio. 2015. Neural machine translation by jointly learning to align and translate. In *3rd International Conference on Learning Representations,* ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Audrey Berquand, Paul Darm, and Annalisa Riccardi. 2021. Spacetransformers: Language modeling for space systems. *IEEE Access*, 9:133111–133122. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing* Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 
Shulin Cao, Jiaxin Shi, Liangming Pan, Lunyiu Nie, Yutong Xiang, Lei Hou, Juanzi Li, Bin He, and Hanwang Zhang. 2022a. KQA pro: A dataset with explicit compositional programs for complex question answering over knowledge base. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6101–6119, Dublin, Ireland. Association for Computational Linguistics. Shulin Cao, Jiaxin Shi, Zijun Yao, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Zhiyuan Liu, and Jinghui Xiao. 2022b. Program transfer for answering complex questions over knowledge bases. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8128–8140, Dublin, Ireland. Association for Computational Linguistics. Angel X. Chang and Christopher Manning. 2012. SUTime: A library for recognizing and normalizing time expressions. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3735–3740, Istanbul, Turkey. European Language Resources Association (ELRA). Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay Yoon Lee, Lizhen Tan, Lazaros Polymenakos, and Andrew McCallum. 2021a. Case-based reasoning for natural language queries over knowledge bases. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9594–9611, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay Yoon Lee, Lizhen Tan, Lazaros Polymenakos, and Andrew McCallum. 2021b. Case-based reasoning for natural language queries over knowledge bases. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9594–9611, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In *Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 1608–1618, Sofia, Bulgaria. Association for Computational Linguistics. T Flohrer, S Lemmens, B Bastida Virgili, H Krag, H Klinkrad, E Parrilla, N Sanchez, J Oliveira, and F Pina. 2013. Discos-current status and future developments. In *Proceedings of the 6th European Conference on Space Debris*, volume 723, pages 38–44. Daniel Furrer, Marc van Zee, Nathan Scales, and Nathanael Schärli. 2020. Compositional generalization in semantic parsing: Pre-training vs. specialized architectures. Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, and Yuan Zhang. 2021. Unlocking compositional generalization in pre-trained models using intermediate representations. *ArXiv* preprint, abs/2104.07478. Xin Huang, Jung-Jae Kim, and Bowei Zou. 2021. Unseen entity handling in complex question answering over knowledge base via language generation. In *Findings of the Association for Computational* Linguistics: EMNLP 2021, pages 547–557, Punta Cana, Dominican Republic. Association for Computational Linguistics. Harshit Joshi, Abishai Ebenezer, José Cambronero, Sumit Gulwani, Aditya Kanade, Vu Le, Ivan Radiek, and Gust Verbruggen. 2023. Flame: A small language model for spreadsheet formulas. Donald J. Kessler and Burton G. Cour-Palais. 1978. Collision frequency of artificial satellites: The creation of a debris belt. 
*Journal of Geophysical Research: Space Physics*, 83(A6):2637–2646. Donald J Kessler, Nicholas L Johnson, JC Liou, and Mark Matney. 2010. The kessler syndrome: implications to future space operations. Advances in the Astronautical Sciences, 137(8):2010. H. Klinkrad. 1991. Discos - esa's database and information system characterising objects in space. *Advances in Space Research - ADV SPACE RES*, 11:43– 52. Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2022. TAPEX: table pre-training via learning a neural SQL executor. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. *ArXiv preprint*, abs/1907.11692. Himanshu Maheshwari, Bhavyajeet Singh, and Vasudeva Varma. 2021. SciBERT sentence representation for citation context classification. In *Proceedings of the Second Workshop on Scholarly Document* Processing, pages 130–133, Online. Association for Computational Linguistics. Shashi Narayan, Siva Reddy, and Shay B. Cohen. 2016. Paraphrase generation from latent-variable PCFGs for semantic parsing. In Proceedings of the 9th International Natural Language Generation conference, pages 153–162, Edinburgh, UK. Association for Computational Linguistics. OpenAI. 2023. GPT-4 technical report. Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021a. KEPLER: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9:176–194. Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhengyan Zhang, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2021b. KEPLER: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9:176–194. Zheng Zhao, Yftah Ziser, and Shay B Cohen. 2022. Understanding domain learning in language models through subpopulation analysis. In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. ## A Instructions For Generating Questions "Here is a list of knowledge graph query programs in JSON format, each with its corresponding question in English. Acronyms are expanded with the following dictionary: ["GEO": "Geostationary Orbit", "IGO":"Inclined Geosynchronous Orbit", "EGO":"Extended Geostationary Orbit", "NSO":"Navigation Satellites Orbit", "GTO":"GEO Transfer Orbit", "MEO":"Medium Earth Orbit", "GHO":"GEO-superGEO *Crossing* Orbits", "LEO":"Low Earth Orbit", "HAO":"High Altitude Earth Orbit", "MGO":"MEO-GEO Crossing Orbits", "HEO":"Highly Eccentric Earth Orbit", "LMO":"LEO-MEO Crossing Orbits", "UFO":"Undefined Orbit","ESO":"Escape Orbits"] Program: {program} Question: {question} (Repeated until maximum prompt limit is reached) ## B Domain-Specific Language Models CosmicRoBERTa A domain-specific language model from the SpaceTransformer family trained with basic masked-language-modelling on a corpus consisting of 75M words (Berquand et al., 2021).8 Kepler We trained our own domain-specific language model by appending the KB pre-training objective as described by Wang et al. (2021b). The in-domain text data for the Knowledge-augmented LM was specifically mined to be closely related to the topic of DISCOS. 
To achieve this, we collected scientific papers about space debris and documents from ESA about their internal mission operation procedures. Additionally, we added the online available dataset from SpaceTransformers9to the training data. The complete dataset comprises approximately 17.6 million words, with around 70% used for training and the remainder for validation. To predict triples, we converted our database into a set of (head, relation, *tail*) triples, where *head* and *tail* are entities represented by their English name or description, and *relation* is a relation represented by a unique token for the relation type. The KG triplet datasets consist of approximately 640,000 triples, which are split into 636,000 for training, 2,000 for validation, and 2,000 for testing. These triples represent approximately 59,000 entities and 32 relations. We trained our model using the available Kepler implementation.10 ## C Pipeline Components ![9_image_0.png](9_image_0.png) ## D Example Augmented Programs And Generated Questions [{" function ":" FindAll ", " inputs ":[] , " dependencies ":[]} , ![10_image_0.png](10_image_0.png) {" function ":" FilterConcept ", " inputs ":["LMO "] , " dependencies " :[0]} , {" function ":" Relate ", " inputs ":[" orbit "], " dependencies ":[1]} , {" function ":" FilterNum ", " inputs ":[" depth ", "0.3 ", "="] , " dependencies ":[2]} , {" function ":" FilterConcept ", " inputs ":[" UnknownObjClass "], " dependencies " :[3]} , {" function ":" Count ", " inputs ":[] , " dependencies " :[4]}] Question: How many objects are there that are currently in the LEO-MEO crossing orbits and are classified as unknown and have a depth of 0.3 m? {" function ":" Find ", " inputs ":[" State Remote Sensing Center "] , " dependencies ":[]} , {" function ":" Relate ", " inputs ":[" host_country "] , " dependencies " :[0]} , {" function ":" FilterConcept ", " inputs ":[" Entity "], " dependencies ":[1]} , {" function ":" What ", " inputs ":[] , " dependencies " :[2]}] Question: "Which operators are based in the host country of the State Remote Sensing Center?" {" function ":" FindAll ", " inputs ":[] , " dependencies ":[]} , {" function ":" FilterDate ", " inputs ":[" epoch ", " 2022 -04 -08 ", "="] , " dependencies " :[0]} , {" function ": " FilterConcept ", " inputs ":[" Launch "], " dependencies " :[1]} , {" function ": " Count ", " inputs ":[] , " dependencies " :[2]}] Question: "How many launches are planned for 8th of April 2022?"a aInterestingly the generated question implies that 8th of April 2022 lies in the future, which is in accordance with OpenAI's training set cut-off in September 2021 ## E Interface For Data Collection And Access To System For implementing a simple user interface, the popular Python library Streamlit was used. Besides providing the input query the user can also provide feedback, which potentially could be used to improve the model by identifying right or wrong answers and adding them to the training set. Another button allows the user to generate an answer for a question from the validation dataset to get a better feeling of what questions could be answered. ## Query Discosweb Database With Ntural Language ![11_image_0.png](11_image_0.png) F ## Validation Loss Curves ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png)
zhu-etal-2023-badge
{BADGE}: Speeding Up {BERT} Inference after Deployment via Block-wise Bypasses and Divergence-based Early Exiting
https://aclanthology.org/2023.acl-industry.48
Early exiting can reduce the average latency of pre-trained language models (PLMs) via its adaptive inference mechanism and work with other inference speed-up methods like model pruning, thus drawing much attention from the industry. In this work, we propose a novel framework, BADGE, which consists of two off-the-shelf methods for improving PLMs{'} early exiting. We first address the issues of training a multi-exit PLM, the backbone model for early exiting. We propose the novel architecture of block-wise bypasses, which can alleviate the conflicts in jointly training multiple intermediate classifiers and thus improve the overall performances of multi-exit PLM while introducing negligible additional flops to the model. Second, we propose a novel divergence-based early exiting (DGE) mechanism, which obtains early exiting signals by comparing the predicted distributions of two adjacent layers{'} exits. Extensive experiments on three proprietary datasets and three GLUE benchmark tasks demonstrate that our method can obtain a better speedup-performance trade-off than the existing baseline methods.{\textbackslash}footnote{Code will be made publicly available to the research community upon acceptance.}
# Badge: Speeding Up Bert Inference After Deployment Via Block-Wise Bypasses And Divergence-Based Early Exiting Wei Zhu1, Peng Wang2, Yuan Ni3, Guotong Xie3**, Xiaoling Wang**1∗ 1East China Normal University 2 Tomorrow Advancing Life 3 Pingan Health Tech ## Abstract Early exiting can reduce the average latency of pre-trained language models (PLMs) via its adaptive inference mechanism and work with other inference speed-up methods like model pruning, thus drawing much attention from the industry. In this work, we propose a novel framework, BADGE, which consists of two offthe-shelf methods for improving PLMs' early exiting. We first address the issues of training a multi-exit PLM, the backbone model for early exiting. We propose the novel architecture of block-wise bypasses, which can alleviate the conflicts in jointly training multiple intermediate classifiers and thus improve the overall performances of multi-exit PLM while introducing negligible additional flops to the model. Second, we propose a novel divergence-based early exiting (DGE) mechanism, which obtains early exiting signals by comparing the predicted distributions among the current layer and the previous layers' exits. Extensive experiments on three proprietary datasets and three GLUE benchmark tasks demonstrate that our method can obtain a better speedup-performance tradeoff than the existing baseline methods. ## 1 Introduction Since BERT (Devlin et al., 2019), the pre-trained language models (PLMs) have become the default state-of-the-art (SOTA) models for natural language processing (NLP). Recent years have witnessed the rise of many PLMs, such as GPT (Radford et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2020), and so forth. These BERT-style models achieved considerable improvements in many Natural Language Processing (NLP) tasks by pre-training on the unlabeled corpus and fine-tuning on labeled tasks, such as text classification, natural language inference (NLI), and named entity recognition (NER). Despite their outstanding performances, their industrial usage is still limited ∗Corresponding author: xlwang@cs.ecnu.edu.cn. by the high latency during inference (Tambe et al., 2020; Zhu, 2021). In addition, a special feature of consumer queries is that there are time intervals when the number of queries is exceptionally high. For example, food search engines will be used more often during dinner hours than usual. Thus, deployed pre-trained models need to adjust their latency dynamically. A branch of literature focuses on making PLMs' inference more efficient via adaptive inference (Zhou et al., 2020; Xin et al., 2020; Liu et al., 2020). The idea of adaptive inference is to process simple examples with only shallow layers of BERT and more difficult queries with deeper layers, thus significantly speeding up the inference time on average while maintaining high accuracy. Early exiting is one of the essential adaptive inference methods (Bolukbasi et al., 2017). As depicted in Figure 1, it implements adaptive inference by installing an early exit, i.e., an intermediate prediction layer, at each layer of PLM (multi-exit PLM) and early exiting "easy" samples to speed up inference. All the exits are jointly optimized at the training stage with BERT's parameters. 
At the inference stage, an early exiting strategy is designed to decide whether to exit at each layer given the currently obtained predictions (from previous and current layers) (Teerapittayanon et al., 2016; Kaya et al., 2019; Xin et al., 2020; Zhou et al., 2020). In this mode, different samples can exit at different depths. The average speedup ratio can be easily controlled with certain hyper-parameters without redeploying the model services or maintaining a group of models. In this work, we propose a novel framework, BADGE, to improve the early exiting performances of PLMs. BADGE consists of two modifications to the current early exiting literature. First, we propose to add a block-wise bypass to each transformer block so that two different representations can be produced, one for the current layer's exit ![1_image_0.png](1_image_0.png) and the other passed to the next transformer block. With this mechanism, an intermediate transformer block will not be distracted by the conflicting tasks in multi-exit BERT training, thus providing high-quality representations for the subsequent layers. As a result, the overall performance of the multi-exit BERT will improve. Second, we propose a novel divergence-based early exiting method (DGE), which is effective in mining early exiting signals by comparing the predicted probability distributions of the current and previous layers. Extensive experiments and ablation studies are conducted on the GLUE benchmark datasets. The experimental results show that: (a) training multiexit PLM with block-wise bypasses consistently performs better than the previous SOTA multiexit model training methods, thus providing a better backbone for early exiting. (b) we show that, with the same multi-exit PLM backbone, our DGE method can provide better efficiency-performance trade-offs than the previous SOTA early exiting methods. Thus, with our framework, BADGE, a PLM can achieve significantly better early exiting performances. The main contributions of our BADGE framework are two-fold: (a) We propose a novel method, block-wise bypass, to improve the training of multi-exit PLMs. (b) We propose a novel divergence-based early exiting method, DGE, which outperforms the previous SOTA early exiting methods. ## 2 Related Work Due to the length limit, readers are referred to Appendix A for more related works on more inference speedup methods and dynamic early exiting mechanisms. ## 2.1 Early Exiting Early exiting requires a multi-exit network, a neural network backbone with an intermediate classifier (or exit) installed on each layer. Early exiting literature mainly focuses on the design of the early exiting strategies (Teerapittayanon et al., 2016; Xin et al., 2020; Kaya et al., 2019; Xin et al., 2021; Sun et al., 2022; Zhou et al., 2020; Schuster et al., 2021). However, the literature address less attention to the training of the multi-exit neural network. There are three types of training methods for training the multi-exit neural network: (a) joint training (JT) (Teerapittayanon et al., 2016; Zhou et al., 2020), that is, all the exits are jointed optimized together with the fine-tuning of BERT. (b) two-stage training method (2ST) (Liu et al., 2020; Xin et al., 2020), which first fine-tunes the backbone BERT and the last layer's exit till convergence and trains only the intermediate exits by distilling knowledge from the last exit. (c) BERxiT (Xin et al., 2021) propose an alternating training (ALT) method, combining 2ST and JT. 
Our work complements the literature on early exiting by proposing the BADGE framework to improve early exiting performance via the novel design of block-wise bypasses and a novel early exiting mechanism.

## 3 Methodology

## 3.1 Block-Wise Bypasses

We now present the core of our novel BADGE framework: the block-wise bypasses (depicted in Figure 1). Denote the representations of the input sentence from the previous transformer block as Hm−1. Initially, Hm−1 goes through the current transformer block: first the multi-head self-attention (MHSA) module with a residual connection, to become Hm,MHA; then the positional feed-forward (FFN) module with a residual connection, to become Hm,F; and finally a LayerNorm operation to output Hm. We would like to employ an efficient bypass Bm to adjust the current layer's representations to better fit the task. Bm has a simple architecture (shown on the right side of Figure 1). Upon receiving the input Hm−1, Bm down-projects it to H^(1)_{m,B}, from dimension d to dimension r (where r << d), using a down-projection matrix Wdown ∈ R^{d×r}. H^(1)_{m,B} then goes through a non-linear activation function g1 to become H^(2)_{m,B}, which is up-projected to H^(3)_{m,B} of dimension d by an up-projection matrix Wup ∈ R^{r×d}. We refer to r as the bottleneck dimension. Formally, Bm can be expressed as:

$$H_{m,B}^{(3)}\gets g_{1}(H_{m-1}W_{down})W_{up}.\qquad(1)$$

Finally, the current transformer block outputs two representations, Hm, which is the original intermediate representation, and H′m, which is modified by the bypass, via:

$$\begin{array}{l}{{H_{m}^{'}=\mathrm{LayerNorm}(H_{m,B}^{(3)}+H_{m,F}),}}\\ {{H_{m}=\mathrm{LayerNorm}(H_{m,F}).}}\end{array}\qquad(2)$$

Hm will be passed to the following transformer block as input, and H′m is the modified representation which will be the input of the current layer's exit. We rely on the bypass Bm to extract task-specific representations that facilitate the learning of the exit. Note that the introduction of Bm brings little computational overhead, since (a) the two representations Hm and H′m can be calculated in a single forward pass, and (b) the bottleneck dimension r can be very small, like 16, so that the added parameters and flops are negligible compared to the pre-trained backbone. Our work is inspired by recent work in parameter-efficient tuning (He et al., 2021). However, our work differs in the following three aspects: (a) our work deals with the training of multi-exit BERT, not the parameter-efficient tuning setting; (b) our work introduces block-wise bypasses, while He et al. (2021) only consider bypasses around the feed-forward layer or the self-attention layer of a transformer block, and we will show in our experiments that block-wise bypasses perform better under our settings; (c) we add encoding operations inside the bypasses, which proves to be beneficial.
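A minimal PyTorch sketch of a transformer block augmented with such a block-wise bypass is given below, implementing Eqs. (1)-(2). The choice of GELU for g1, the stand-in `attn`/`ffn` callables, and the module names are assumptions made for illustration; they are not the exact implementation of BADGE.

```python
# Illustrative sketch of Eq. (1)-(2): a block-wise bypass producing two representations
# (one for the next block, one for the current layer's exit) in a single forward pass.
import torch
import torch.nn as nn

class BlockWiseBypass(nn.Module):
    def __init__(self, d_model=768, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck, bias=False)   # W_down
        self.act = nn.GELU()                                      # g1 (assumed to be GELU)
        self.up = nn.Linear(bottleneck, d_model, bias=False)      # W_up

    def forward(self, h_prev):
        return self.up(self.act(self.down(h_prev)))               # H^(3)_{m,B}

class TransformerBlockWithBypass(nn.Module):
    """Wraps a transformer block; returns (H_m for the next block, H'_m for this layer's exit)."""
    def __init__(self, attn, ffn, d_model=768, bottleneck=16):
        super().__init__()
        self.attn, self.ffn = attn, ffn                            # stand-ins for MHSA and FFN sub-modules
        self.bypass = BlockWiseBypass(d_model, bottleneck)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h_prev):
        h_mha = h_prev + self.attn(h_prev)                         # MHSA with residual connection
        h_f = h_mha + self.ffn(h_mha)                              # FFN with residual connection
        h_m = self.norm(h_f)                                       # H_m, passed to the next block
        h_m_exit = self.norm(self.bypass(h_prev) + h_f)            # H'_m, fed to the current exit
        return h_m, h_m_exit

# Toy usage with linear stand-ins for the attention and feed-forward sub-modules:
block = TransformerBlockWithBypass(attn=nn.Linear(768, 768), ffn=nn.Linear(768, 768))
h_next, h_exit = block(torch.randn(2, 10, 768))
```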
## 3.2 Divergence-Based Early Exiting

We now introduce our divergence-based early exiting method, DGE. The inference procedure is illustrated in Figure 1. Assume the forward pass has reached layer m. Denote the divergence score between the prediction results of layer m and layer m′ (m > m′) as s_{m,m′} = S(p_m, p_{m′}) ∈ R, where S(·, ·) is a divergence measure between two probability distributions. The smaller the value of s_{m,m′}, the more consistent the predicted distributions p_m and p_{m′} are with each other. Denote cnt_m as the number of previous layers whose predicted distribution is consistent with p_m. At layer m, cnt_m is calculated as:

$$cnt_{m}=\sum_{i=1}^{m-1}\mathbb{1}(s_{m,i}<\tau),\qquad(3)$$

where τ > 0 is a pre-defined threshold and 1(·) is the indicator function: 1(x) is equal to 1 if x is true, and equal to 0 otherwise. If cnt_m reaches the pre-defined patience value t, the model stops inference and exits early. Otherwise, the model goes to the next layer. If the model does not exit early at any intermediate layer, it uses the final classifier fM for prediction. Unless stated otherwise, we will mainly use the knowledge distillation objective (Hinton et al., 2015) as the divergence measure:1

$$S(p_{i},p_{j})={\mathcal{L}}_{KD}(p_{i},p_{j})=-\sum_{k=1}^{K}p_{i}(k)\log(p_{j}(k)).\qquad(4)$$

1In our initial experiments, we find that different divergence measures like Kullback–Leibler divergence and Jensen–Shannon divergence perform comparably. Thus we adopt the one with the simplest form.

Note that the above-described framework is flexible and general, including the patience-based method (Zhou et al., 2020) as a special case. In Zhou et al. (2020), the value of s_{i,i−1} is 0 only when the labels predicted by the two layers are identical; otherwise, it is set to 1. However, compared to PABEE, our method has the following advantages: (a) Even if the current and previous layers predict the same label, their output distributions might differ. Thus, PABEE might exit early even when the layers have not reached a consensus in their predictions, and its early exiting performance with low patience parameters may not be reliable. Our method compares the intermediate layers at the distribution level, providing more reliable exiting signals. (b) PABEE cannot adjust the speedup ratio flexibly, while our method can achieve different speedup ratios by setting the patience and threshold parameters.
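The sketch below illustrates the DGE inference loop of Eqs. (3)-(4) for a single example. It assumes the per-layer exits' softmax outputs are already available in a list; the function names, threshold, and patience values are illustrative assumptions, not the released implementation.

```python
# Illustrative sketch of divergence-based early exiting (Eqs. 3-4) for one input example.
# `layer_probs[m]` is the softmax output of the exit attached to layer m.
import torch

def kd_divergence(p_i, p_j, eps=1e-12):
    """S(p_i, p_j) = -sum_k p_i(k) log p_j(k), the cross-entropy used as the divergence measure."""
    return -(p_i * (p_j + eps).log()).sum().item()

def dge_early_exit(layer_probs, tau=1.1, patience=2):
    """Return (predicted label, exit layer index) following Eq. (3)."""
    for m in range(1, len(layer_probs)):
        cnt_m = sum(
            kd_divergence(layer_probs[m], layer_probs[i]) < tau   # 1(s_{m,i} < tau)
            for i in range(m)                                     # all previous layers
        )
        if cnt_m >= patience:                                     # enough consistent layers: exit early
            return int(layer_probs[m].argmax()), m
    return int(layer_probs[-1].argmax()), len(layer_probs) - 1   # fall back to the final classifier

# Toy usage with random per-layer distributions from a 12-layer, 3-class model:
probs = [torch.softmax(torch.randn(3), dim=-1) for _ in range(12)]
pred, exit_layer = dge_early_exit(probs, tau=1.1, patience=3)
```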
## 4 Experiments

## 4.1 Datasets

We evaluate our proposed approach on three classification tasks from the GLUE benchmark (Wang et al., 2018) (RTE, MRPC, MNLI) and on three proprietary natural language understanding tasks (QID, DoS, QDR) that we collected for developing and testing question-answering or dialogue systems. Detailed introductions of the datasets are given in Appendix B.

## 4.2 Baseline Methods

For multi-exit BERT training, we compare our BADGE framework with the following baselines: (a) joint training (JT) (Zhou et al., 2020; Teerapittayanon et al., 2016); (b) two-stage training (2ST) (Liu et al., 2020; Xin et al., 2020); (c) alternating training (ALT) (Xin et al., 2021); (d) gradient equilibrium (GradEquil) (Li et al., 2019), which augments JT with gradient adjustments; (e) fine-tuning ALBERT with a single exit of different depths separately (Single-exits); (f) Global Past-Future, which develops a series of techniques to enhance the performances of lower layers by mimicking the deeper layers. We also implement JT with a multi-head attention exit (Liu et al., 2020) (JT+MHA-exit). We compare the early exiting performance of our DGE method with the following early exiting methods: (a) the entropy-based method (Entropy) originating from Teerapittayanon et al. (2016); (b) the maximum probability-based method (Maxprob) (Schwartz et al., 2020); (c) the patience-based method (Patience) (Zhou et al., 2020); (d) the learning-to-exit based method (LTE) proposed by Xin et al. (2021), which incorporates a meta layer to evaluate the confidence of the current exit.

## 4.3 Experimental Settings

For the GLUE tasks, we use the open-sourced ALBERT-base (Lan et al., 2020) as the backbone, and for the three proprietary tasks, we use a pre-trained Chinese ALBERT-base.2 In the ablation studies, we also consider BERT-base (Devlin et al., 2019) (in English and in Chinese). Our BADGE model with block-wise bypasses is optimized following the JT method with simple linear exits. Since we are training multi-exit models in which each layer's exit has a performance score, we mainly focus on improving S-avg, which denotes the cross-layer average score for a given metric S. We also report S-best, the best score among all the layers for the given metric S. Following prior work on input-adaptive inference (Teerapittayanon et al., 2016; Kaya et al., 2019), inference is on a per-instance basis, i.e., the batch size for inference is set to 1.3 The average speedup ratio on the test set of each task will be reported, which is defined as

$$\mathrm{Speedup}=1-\frac{\sum_{i=1}^{N_{test}}t_{i}}{\sum_{i=1}^{N_{test}}T_{i}},$$

where N_test is the number of samples in the test set, t_i is the inference time under early exiting, and T_i is the inference time without early exiting. We implement BADGE and all the baselines on top of Hugging Face's Transformers (Wolf et al., 2020). Experiments are conducted on four Nvidia RTX 3090 GPUs. We report the median performance over five runs with different random seeds. More detailed settings regarding hyper-parameters can be found in Appendix C.

## 4.4 Overall Comparison

ALBERT-base fine-tuning:

| Method | RTE (acc) | MRPC (acc-f1) | MNLI (acc) | QID (acc) | DoS (mf1) | QDR (f1) |
|--------|-----------|---------------|------------|-----------|-----------|----------|
| ALBERT | 77.6 | 89.1 | 83.7 | 91.9 | 88.0 | 87.3 |

Multi-exit ALBERT fine-tuning:

| Method | RTE acc-avg | RTE acc-best | MRPC acc-f1-avg | MRPC acc-f1-best | MNLI acc-avg | MNLI acc-best | QID acc-avg | QID acc-best | DoS mf1-avg | DoS mf1-best | QDR f1-avg | QDR f1-best |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single exits | 67.5 | 77.6 | 85.4 | 89.1 | 76.4 | 83.7 | 89.5 | 92.3 | 82.9 | 88.0 | 83.3 | 87.3 |
| 2ST | 68.9 | 77.6 | 83.0 | 89.1 | 76.2 | 83.7 | 89.3 | 91.9 | 82.5 | 88.0 | 81.9 | 87.3 |
| JT | 66.8 | 72.5 | 84.4 | 87.9 | 76.0 | 83.8 | 89.2 | 91.1 | 81.6 | 87.3 | 82.2 | 87.2 |
| JT+MHA exit | 67.5 | 75.9 | 84.8 | 87.9 | 76.8 | 83.2 | 89.3 | 91.5 | 81.7 | 87.6 | 82.3 | 87.1 |
| GradEquil | 67.3 | 77.4 | 84.2 | 88.4 | 76.5 | 83.6 | 89.2 | 91.8 | 82.4 | 88.0 | 82.4 | 87.0 |
| ALT | 68.5 | 77.8 | 84.6 | 88.3 | 76.6 | 82.9 | 88.6 | 90.6 | 82.3 | 87.8 | 81.5 | 86.8 |
| Global Past-Future | 68.1 | 78.4 | 84.3 | 88.1 | 75.9 | 82.9 | 89.1 | 92.3 | 82.4 | 87.9 | 82.4 | 87.1 |
| BADGE | 70.2 | 77.9 | 86.0 | 89.6 | 77.8 | 83.8 | 90.0 | 92.4 | 83.2 | 88.1 | 84.3 | 87.2 |

Table 1: Overall comparison of multi-exit model training methods on the six tasks. "avg" denotes the cross-layer average score and "best" the cross-layer best score for each task's metric.
A multi-head attention exit (Liu et al., 2020) with a bottleneck dimension r = 32 introduces 98k parameters per layer. As shown in Table 1, JT with MHA exits performs better than JT (with simple linear exits). However, our BADGE method successfully outperforms JT with MHA exits with fewer parameters. The results demonstrate that BADGE's advantages come from designing our bypass mechanisms. ## 4.5 Dynamic Early Exiting Performances We compare our DGE method with the previous best-performing early exiting methods. For the patience-based method (Zhou et al., 2020), early exiting is run on different patience parameters. For the other methods, we run early exiting under different confidence thresholds or patience parameters so that the speedup-performance curves consist of at least 20 points evenly distributed across the interval (0, 1) of speedup ratios. The speedup-performance curves for the MRPC, QID, and QDR tasks are plotted in Figure 2. The following takeaways can also be made from Figure 2: (a) With the same backbone model trained with our BADGE framework, our DGE method achieves better speedup-performance tradeoffs than the previous SOTA early exiting methods, especially when the speedup ratio is large. (b) The comparison between Patience (with the model trained with BADGE) and JT+Patience (with the model trained with the JT method) demonstrates that our BADGE can provide superior backbones for early exiting and consistently result in superior performances under different speedup requirements. (c) Note that the pre-trained Chinese ALBERT-base model we use is a compressed model from a larger one. Our method can further speed up its inference, proving that it can work well with other model inference speedup methods like Jiao et al. (2019). ## 4.6 Ablation Studies Ablation on the placement of bypasses Since we want to provide separate hidden states in a single forward pass on the BERT backbone, our bypasses must be added to the end of the transformer layer. To show that the design of our block-wise bypasses is essential, we now consider another two placements of bypasses: (a) placed around the multi-head self-attention module, denoted as MHSA bypasses; (b) placed around the positional feed-forward module (FFN bypasses). Note that MHSA bypasses can not provide two different hidden states at the end of the transformer block within a single forward pass, thus slowing down inference speed. The results on RTE and QID tasks are reported in Table 2. We can see that: (a) FFN bypasses perform comparably with the block-wise bypasses; (b) MHSA bypasses perform worse than the block-wise bypasses since, in this setting, the ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) ![5_image_2.png](5_image_2.png) FFN module in a transformer block still has to conduct multi-task learning. Different bottleneck dimensions r We set the bottleneck dimension r to be 16 for the main part of the experiments (as in Table 1). We now conduct experiments for r on the RTE and QID tasks, and Table 3 reports the results. From Table 3, we can see that: (a) smaller bottleneck dimensions do not result in significant performance drops. (b) bottleneck dimensions do not provide performance improvements, demonstrating that the superior performances of our BADGE indeed come from our block-wise bypass mechanism instead of the additional parameters. 
Ablation on different PLMs To show that our BADGE framework is off-the-shelf and can work well with other pre-trained models, we now switch the backbone model to BERT-base (Devlin et al., 2019), or ElasticBERT (Liu et al., 2022). Due to limited length, the results are reported in Table 6 of Appendix D. The results demonstrate that BADGE outperforms the JT and ALT methods under different backbones by clear margins. Also, note that the cross-layer average scores of BERT-base and ElasticBERT are lower than that of ALBERT-base, showing that ALBERT is more suitable for early exiting. We hypothesize that the ALBERT model employs a cross-layer parametersharing strategy. Thus, the representations given by intermediate blocks are more similar to one another, and the performances of its intermediate layers will be closer. ## 5 Conclusion In this work, we propose a novel framework, BADGE, to improve the early exiting of pre-trained language models. First, we propose to add blockwise bypasses to facilitate the joint training of the intermediate exits, thus improving the overall performance of multi-exit PLMs. Second, we propose a novel divergence-based early exiting method, DGE, which can effectively mine the exiting signals by comparing the predicted distributions of the current and previous intermediate layers. DGE achieves better efficiency-performance trade-offs than the previous SOTA early exiting methods under the same multi-exit backbone. Our BADGE is off-the-shelf and can effectively speed up the inferences of PLMs with less performance degradation. ## 6 Acknowledgements This work was supported by NSFC grants (No. 61972155 and 62136002), National Key R&D Program of China (No. 2021YFC3340700), Shanghai Trusted Industry Internet Software Collaborative Innovation Center, and the Research, Development, Demonstration and Application Project of Artificial Intelligence (No. GJHZ20200731095001005). ## Limitations Although our BADGE framework is shown to be effective in improving the multi-exit model training and early exiting, it still has certain limitations that need to be addressed in the future: (a) blockwise bypasses indeed introduce new parameters and additional flops. We would like to explore more parameter-efficient methods to improve multi-exit model training in future works. (b) In this work, we demonstrate our framework's performance on sentence classification or pair classification tasks. In future works, we would like to extend our work to broader tasks such as sequence labeling, relation extraction, and text generation. We would like to explore this aspect in future work. ## Ethics Statement Our BADGE framework is designated to improve the training of multi-exit BERT and dynamic early exiting performances. Our work can facilitate the deployment and applications of pre-trained models on devices with less powerful computation capabilities, making the state-of-the-art models accessible for everyone. In addition, we hope this technology can help reduce the carbon footprints of NLP-based applications. Furthermore, the GLUE datasets we experiment with are widely used in previous work. Thus, to our knowledge, our work does not introduce new ethical concerns. ## References Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, and Irwin King. 2020. Binarybert: Pushing the limit of bert quantization. *arXiv preprint arXiv:2012.15701*. Tolga Bolukbasi, J. Wang, O. Dekel, and Venkatesh Saligrama. 2017. Adaptive neural networks for efficient inference. In *ICML*. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. *arXiv preprint arXiv:1909.11556*. Mitchell A Gordon, Kevin Duh, and Nicholas Andrews. 2020. Compressing bert: Studying the effects of weight pruning on transfer learning. *arXiv preprint* arXiv:2002.08307. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2021. Towards a unified view of parameter-efficient transfer learning. ArXiv, abs/2110.04366. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. *arXiv preprint arXiv:1909.10351*. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. Tinybert: Distilling bert for natural language understanding. *ArXiv*, abs/1909.10351. Y. Kaya, Sanghyun Hong, and T. Dumitras. 2019. Shallow-deep networks: Understanding and mitigating network overthinking. In *ICML*. Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. 2021. I-bert: Integeronly bert quantization. In International conference on machine learning, pages 5506–5518. PMLR. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. *ArXiv*, abs/1909.11942. Hao Li, Hong Zhang, Xiaojuan Qi, Ruigang Yang, and Gao Huang. 2019. Improved techniques for training adaptive deep networks. *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pages 1891–1900. Weijie Liu, P. Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, and Q. Ju. 2020. Fastbert: a self-distilling bert with adaptive inference time. *ArXiv*, abs/2004.02178. Xiangyang Liu, Tianxiang Sun, Junliang He, Jiawen Wu, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, and Xipeng Qiu. 2022. Towards efficient NLP: A standard evaluation and a strong baseline. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3288–3303, Seattle, United States. Association for Computational Linguistics. Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. 2021. Compacter: Efficient low-rank hypercomplex adapter layers. In *NeurIPS*. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108. Tal Schuster, Adam Fisch, Tommi Jaakkola, and Regina Barzilay. 2021. Consistent accelerated inference via confident adaptive transformers. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4962–4979, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics. Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A. Smith. 2020. The right tool for the job: Matching model and instance complexities. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 6640–6651, Online. Association for Computational Linguistics. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. *arXiv preprint arXiv:1908.09355*. Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, and Xipeng Qiu. 2022. A simple hash-based early exiting approach for language understanding and generation. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 2409–2421, Dublin, Ireland. Association for Computational Linguistics. Thierry Tambe, Coleman Hooper, Lillian Pentecost, EnYu Yang, Marco Donato, Victor Sanh, Alexander M. Rush, David M. Brooks, and Gu-Yeon Wei. 2020. Edgebert: Optimizing on-chip inference for multitask nlp. *ArXiv*, abs/2011.14203. Surat Teerapittayanon, Bradley McDanel, and H. T. Kung. 2016. Branchynet: Fast inference via early exiting from deep neural networks. *2016 23rd International Conference on Pattern Recognition (ICPR)*, pages 2464–2469. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. DeeBERT: Dynamic early exiting for accelerating BERT inference. In *Proceedings* of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2246–2251, Online. Association for Computational Linguistics. Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin. 2021. Berxit: Early exiting for bert with better finetuning and extension to regression. In *Proceedings* of the 16th conference of the European chapter of the association for computational linguistics: Main Volume, pages 91–104. Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. Bert-of-theseus: Compressing bert by progressive module replacing. *arXiv preprint* arXiv:2002.02925. Z. Yang, Zihang Dai, Yiming Yang, J. Carbonell, R. Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In *NeurIPS*. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, and Yoav Artzi. 2020a. Revisiting fewsample bert fine-tuning. *ArXiv*, abs/2006.05987. Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. 2020b. Ternarybert: Distillation-aware ultra-low bit bert. *arXiv preprint* arXiv:2009.12812. Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020. Bert loses patience: Fast and robust inference with early exit. ArXiv, abs/2006.04152. 
Michael Zhu and Suyog Gupta. 2017. To prune, or not to prune: exploring the efficacy of pruning for model compression. *arXiv preprint arXiv:1710.01878*. Wei Zhu. 2021. Autonlu: Architecture search for sentence and cross-sentence attention modeling with redesigned search space. In *NLPCC*. ## A Appendix: Related Work A.1 Inference Acceleration Methods Since the rise of BERT, there are quite large numbers of literature devoting themselves to speeding up the inference of BERT. Standard method include direct network pruning (Zhu and Gupta, 2017; Xu et al., 2020; Fan et al., 2019; Gordon et al., 2020), distillation (Sun et al., 2019; Sanh et al., 2019; Jiao et al., 2020), Weight quantization (Zhang et al., 2020b; Bai et al., 2020; Kim et al., 2021) and Adaptive inference (Zhou et al., 2020; Xin et al., 2020; Liu et al., 2020). Among them, adaptive inference has drawn much attention. Adaptive inference aims to deal with simple examples with only shallow layers of PLMs, thus speeding up inference time on average. ## A.2 Early Exiting Mechanisms Early exiting requires a multi-exit BERT, a BERT backbone with an intermediate classifier (or exit) installed on each layer. Early exiting literature mainly focuses on the design of the early exiting strategies, that is, determining when an intermediate exit's prediction is suitable as the final model prediction. Score based strategies (Teerapittayanon et al., 2016; Xin et al., 2020; Kaya et al., 2019; Xin et al., 2021), prior based strategies (Sun et al., 2022) and patience based strategies (Zhou et al., 2020) have been proposed. Teerapittayanon et al. (2016) uses the entropy of an intermediate layer's predicted distribution to measure the in-confidence level and decide whether to exit early. PABEE determines whether to exit by comparing the current layer's prediction with the previous layers. ## A.3 Introductions To More Multi-Exit Bert Training Methods A.3.1 Two-Stage Training Method The two-stage (2ST) (Xin et al., 2020; Liu et al., 2020) training strategy divides the training procedure into two stages. The first stage is identical to the vanilla BERT fine-tuning, updating the backbone model and only the final exit. In the second stage, we freeze all parameters updated in the first stage and fine-tune the remaining exits separately: $$\begin{array}{l}\mbox{Stage1}:{\cal L}_{stage1}={\cal L}_{M}^{CE}(y_{i},f_{M}(x_{i};\theta_{M}))\\ \mbox{Stage2}:{\cal L}_{stage2}={\cal L}_{m}^{CE},m=1,...,M-1.\end{array}\tag{5}$$ where ${\cal L}_{m}^{CE}={\cal L}_{m}^{CE}(y_{i},f_{m}(x_{i};\theta_{m}))$ denotes the cross-entropy loss of m-th exit. ## A.3.2 Alternating Training This method alternates between joint training and two-stage training across different optimization steps, and it was proposed by BERxiT (Xin et al., 2021): $${\mathrm{Odd}}:{\mathcal{L}}_{s t a g e1}={\mathcal{L}}_{M}^{C E}(y_{i},f_{M}(x_{i};\theta_{M}))$$ $${\mathrm{Even}}:{\mathcal{L}}_{j o i n t}=\sum_{m=1}^{M}{\mathcal{L}}_{m}^{C E}$$ # B Appendix for datasets and evaluation $\mathbf{A}$ A = 1. metrics ## B.1 Introduction To Our Proprietary Tasks In this work, we experiment with three proprietary datasets collected by our teams. The samples in these datasets are collected from users of a realworld production environment and are manually annotated by an data annotation team. Query intent classification in dialogues (denoted as QID) This task asks a model predict the intent of an utterance from a user in a dialogue. The dialogue history are also provided as a part of the input text sequence. 
The samples for this task are collected from dialogues between customers and clerks in grocery stores, hotels, pharmaceutical stores, and gyms. All participants in the dialogues have signed the data collection agreements. We assign a single label to each user utterance, and the number of intent labels is 78. The evaluation metric is accuracy (acc).

Query-doc ranking task (denoted as QDR) In this task, for a given search query in the scientific domain (including topics like bio-medicine, drug development, and artificial intelligence), a pool of documents is given. The documents are either blog posts from our institution or abstracts of academic journal papers from Medline.4 The pool size is between 12 and 128. Relevant documents are labeled as 1, and the others are labeled as 0. The annotation is based on whether the content of a document can provide sufficient answers to the query. In this task, a model is expected to predict the relevancy label for each query-document pair with high accuracy. The evaluation metric for this task is the F1 of query-document relevancy labels (f1).

4https://www.medline.com/

Document classification (denoted as DoS) In this task, a pre-defined label is assigned to each document, to help guide users to articles of interest. The collection of documents has the same sources as QDR. The number of labels is 109. In this task, a classification model reads the text contents and predicts the label of a document. We develop this dataset in the hope that the model can help us automatically organize the documents, or at least assist the human workers in doing so to reduce costs. The evaluation metric for this task is macro F1, since this task has imbalanced label distributions (macro-f1).

## B.2 Evaluation Metrics Of GLUE Tasks

For RTE, the metric is accuracy (acc). For the MRPC task, the metric is the average of the accuracy and F1 score (acc-f1). For MNLI, the metric is the average of the accuracy scores on the matched and mismatched test subsets (denoted as acc).

| Category | Dataset | # train | # dev | # test | # classes | Type | Labels |
|----------|---------|---------|-------|--------|-----------|------|--------|
| Single-sentence | QID | 125365 | 15670 | 15670 | 78 | intent classification | 0, 1, ..., 77 |
| Single-sentence | DoS | 1083278 | 135410 | 135410 | 109 | document classification | 0, 1, ..., 108 |
| Sentence-pair | MNLI | 390702 | 2000 | 19647 | 3 | NLI | entailment, neutral, contradiction |
| Sentence-pair | RTE | 2490 | 138 | 139 | 2 | NLI | entailment, not entailment |
| Sentence-pair | MRPC | 3668 | 204 | 204 | 2 | paraphrase | equivalent, not equivalent |
| Sentence-pair | QDR | 3568930 | 446116 | 446116 | 2 | text matching | 0, 1 |

Table 4: Dataset statistics.

## B.3 Dataset Splits

For the QID, QDR, and DoS tasks, we randomly split the annotated samples into train/dev/test splits with an 8:1:1 ratio. Since the original test sets of GLUE are not publicly available, we follow Zhang et al. (2020a) and Mahabadi et al. (2021) to construct the train/dev/test splits as follows: (a) for datasets with fewer than 10k samples (RTE, MRPC), we divide the original validation set in half, using one half for validation and the other for testing; (b) for larger datasets (MNLI), we split 2k samples from the training set as the development set, and use the original development set as the test set. The detailed dataset statistics are presented in Table 4.
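As a concrete illustration of the split construction described above, here is a minimal sketch; the function names, the fixed random seed, and the shuffling details are our own assumptions rather than the exact procedure used in the paper.

```python
import random

def split_8_1_1(samples, seed=42):
    """8:1:1 train/dev/test split used for the proprietary QID, QDR, and DoS tasks."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    n_train = int(0.8 * len(samples))
    n_dev = int(0.1 * len(samples))
    return (samples[:n_train],
            samples[n_train:n_train + n_dev],
            samples[n_train + n_dev:])

def split_glue(train, dev, small_threshold=10_000):
    """GLUE splits following Zhang et al. (2020a) and Mahabadi et al. (2021):
    datasets with fewer than 10k training samples (RTE, MRPC) halve the original
    dev set into new dev/test sets; larger datasets (MNLI) carve 2k samples out
    of the training set as the new dev set and reuse the original dev set as test."""
    if len(train) < small_threshold:
        half = len(dev) // 2
        return train, dev[:half], dev[half:]
    return train[2000:], train[:2000], dev
```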
## C Appendix For Experimental Settings We add a classification layer after each intermediate layer of the PLM as the exits. The exit is a simple linear layer by default, but we also consider the MHA exit suggested by Liu et al. (2020). We fine-tune the multi-exit BERT with one of the four bypasses architectures (in Section 3.1) or without bypasses, and the bottleneck dimension r is set as 16. The bottleneck dimension for multi-head attentional exit is set to be 32. We fine-tune models for at most 25 epochs with an Adam optimizer and warm-up. The warm-up steps are set to be 10% of the optimization steps. Early stopping with patience eight is performed, and the best checkpoint is selected based on the dev set performances. Since we are training multi-exit models in which each layer's exit have a performance score, we mainly focus on improving S-avg, which denotes the cross-layer average score for a given metric S. We also report S-best, the best score among all the layers for the given metric S. We perform grid search for the following hyperparameter search space: the {GELU, ReLU, SWISH, Tanh, Identity} for the activation functions in the bypasses; {16, 32, 64, 128, 512, 1024} | batch size | lr | g1 | | |--------------|------|------|-----------| | RTE | 16 | 1e-5 | GELU | | MRPC | 16 | 2e-5 | Tanh | | MNLI | 1024 | 5e-5 | GELU | | QID | 128 | 1e-5 | GELU | | DoS | 128 | 2e-5 | ReLU | | QDR | 1024 | 3e-5 | LeakyReLU | Table 5: Hyper-prameters adopted by our best BADGE models on each GLUE tasks. | RTE | QID | | | | |-------------------------|----------|---------|----------|------| | acc-avg | acc-best | acc-avg | acc-best | | | ElasticBERT as backbone | | | | | | JT | 62.3 | 70.7 | 88.6 | 92.0 | | ALT | 63.1 | 70.9 | 88.4 | 91.6 | | BADGE | 64.5 | 72.3 | 89.3 | 92.1 | | BERT-base as backbone | | | | | | JT | 61.5 | 68.9 | 87.5 | 91.3 | | ALT | 61.9 | 69.3 | 87.7 | 91.2 | | BADGE | 62.7 | 70.5 | 88.9 | 92.5 | for the batch size; {1e-5, 2e-5, 3e-5, 5e-5} for the learning rate. We implement our BADGE and all the baselines on the base of Hugging Face's Transformers (Wolf et al., 2020). Experiments are conducted on four Nvidia RTX 3090 GPUs. Table 5 presents the detailed hyper-parameters for the best performing BADGE model. The hyperparameters we present are: (a) training batch size (bsz); (b) learning rate (lr); (c) activation functions g in the bypasses. From Table 5, many tasks favor the GELU activation function. ## D Ablation Studies On Different Plms In the experiments of the main content, the PLM backbone is the ALBERT-base. In Table 6, we report the results using BERT-base and ElasticBERT as backbones. The results show that our BADGE works well with different pre-trained models.
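As a small illustration of how the scores in Tables 1 and 6 are aggregated, the cross-layer average (S-avg) and best (S-best) values can be computed from per-exit scores as follows; the evaluation call in the usage comment is hypothetical.

```python
from statistics import mean

def cross_layer_scores(per_exit_scores):
    """Aggregate one metric value per intermediate exit (exits 1..M) into the
    S-avg and S-best scores reported in the tables."""
    return mean(per_exit_scores), max(per_exit_scores)

# Hypothetical usage: evaluate every exit of a multi-exit model on the test set,
# then aggregate, e.g.
#   per_exit_acc = [evaluate(model, exit_layer=m, data=test_set) for m in range(1, M + 1)]
#   acc_avg, acc_best = cross_layer_scores(per_exit_acc)
```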
prieur-etal-2023-k
K-pop and fake facts: from texts to smart alerting for maritime security
https://aclanthology.org/2023.acl-industry.49
Maritime security requires full-time monitoring of the situation, mainly based on technical data (radar, AIS) but also from OSINT-like inputs (e.g., newspapers). Some threats to the operational reliability of this maritime surveillance, such as malicious actors, introduce discrepancies between hard and soft data (sensors and texts), either by tweaking their AIS emitters or by emitting false information on pseudo-newspapers. Many techniques exist to identify these pieces of false information, including using knowledge base population techniques to build a structured view of the information. This paper presents a use case for suspect data identification in a maritime setting. The proposed system UMBAR ingests data from sensors and texts, processing them through an information extraction step, in order to feed a Knowledge Base and finally perform coherence checks between the extracted facts.
## K-Pop And Fake Facts: From Texts To Smart Alerting For Maritime Security Maxime Prieur**1, 2** and **Souhir Gahbiche**1and **Guillaume Gadek**1 Sylvain Gatepaille1and **Killian Vasnier**1and **Valerian Justine**1 (1) Airbus Defence and Space, 1 Bd Jean Moulin, 78990 Elancourt, France (2) CNAM, 292 rue Saint-Martin, 75003 Paris, France [maxime.prieur, souhir.gahbiche, guillaume.gadek]@airbus.com ## Abstract Maritime security requires full-time monitoring of the situation, mainly based on technical data such as radar or Automatic Identification System (AIS) but also from Open Source Intelligence like inputs (e.g., newspapers). Some threats to the operational reliability of this maritime surveillance, such as malicious actors, introduce discrepancies between hard and soft data (sensors & texts), either by tweaking their AIS emitters or by emitting false information on pseudo-newspapers. Many techniques exist to identify these pieces of false information, including using knowledge base population techniques to build a structured view of the information. This paper presents a use case for suspect data identification in a maritime setting. The proposed system UMBAR ingests data from sensors and texts, processing them through an information extraction step, in order to feed a Knowledge Base (KB) and finally perform coherence checks between the extracted facts. ## 1 Introduction One of the main challenges in the maritime domain is to ensure safety and security of ships: in and around harbors but also when they are offshore for several days. The security aspect has benefited from a renewed interest recently, due to piracy and trafficking. Most harbor administrations rely today on AI-powered investigation tools to perform a number of checks on each entering vessel: comparing the declared status of the ship, aggregating sensor data, and even searching the Web for news. Among the organizations that collect and disseminate information about maritime events, the Maritime Information Cooperation and Awareness Center (MICA) collects and relays useful information to all actors in the field of maritime industry. Its purpose is to process maritime security data worldwide. The 2022 annual report1summarizes the reports regularly sent to the maritime industry and analyses the trends observed as well as the evolution of modes of action. The sensor data mainly consists of radar and AIS signals. Every ship must emit its identity, speed, position and course at short time intervals. This information is received by all other vessels in reach as well as dedicated receiving stations, on the coast and in space. Based on sensor data, alerts related to the behavior of vessels are raised automatically: abnormal position, sudden change of direction, etc. This ensures a quick and efficient reaction of the police/security agents. Relevant information about ships and maritime events also occurs in a non-technical way, through the news (so-called "*soft data*"). Accidents, illegal events, presence of a vessel in blockade-regulated areas and even modifications in the financial structure of the proprietary company are highly susceptible to increase the risk of a ship entering a harbor. Malicious actors may use a large variety of techniques in order to perform covert operations, including trafficking, illegal fishing, piracy and smuggling. AIS are easy to tamper with, as ships may -illegally- decide to modify their identity, declare a false destination or even cease to emit. 
Another case of concern occurs when civil vessels are the main object of a crisis between international powers, such as the *Stena Impero* near Iran in 2019 or the wheat vessels in the Black Sea in 2022: different newspapers may diffuse contradictory information about the same ships, which may in turn be inconsistent with technical data. In these cases, we believe that smart tools are needed in order to refine information and help the analysts build a clear picture of the situation. To tackle these security challenges, we propose 1https://www.mica-center.org/en/home/download/ 2395/?tmstv=1673337653 UMBAR, a system to automatically collect, analyze and compare information from a variety of sources of data, resulting in a risk assessment that is practical for a security operator in the maritime domain. Such a system relies heavily on Natural Language Processing on the textual modality as well as on reasoning modules on the extracted knowledge. More precisely, the contributions of this article are the following: - a technical description of UMBAR, a complete operable system ranging from data collection to knowledge management, - evaluation elements at a statistical and methodological level for the constituting subsystems, - key points of attention towards a large-scale deployment of such a system. The article is structured as follows: section 2 provides a review of the literature on the topic of AI-assisted maritime surveillance, with a special focus on knowledge based approaches; section 3 presents our system UMBAR and each of its subsystems from Information Extraction to Alert Raising; performance evaluation elements are provided at a subsystem level in section 4. A prospective discussion is exposed in section 5 to explicit the remaining challenges of the system deployment. Finally, section 6 concludes this paper. ## 2 Related Works We structure our review of the literature along three streams: first, the identification of lies or manipulation on structured data. Detecting fake news has recently received a lot of attention, combining facts and language (Seddari et al., 2022); here we focus on the identification of dissimilarity between facts such as stored in knowledge bases (attributes, properties, relations). For media analysis purposes, this falls under the topic of "automatic fact-checking" (Guo et al., 2022). In the maritime use case, expert systems for alert raising are common to detect a change of destination for a commercial vessel, or even a change of shipowner or flag can usually be observed (Alaeddine and Ray, 2022). AIS systems can also be hacked to disseminate false information manufactured. The objective of these false messages (e.g. distress signals, false vessel locations, etc.) is to attract attention and trap the targeted vessels (Balduzzi et al., 2014). These operations of disinformation and deception are very dangerous: it is essential to identify them. Second, we focus our research on reasoning on facts in Knowledge Bases (KB), extracting the information using Knowledge base POPulation (K-POP), to automatically compute dissimilarity between text-extracted small Knowledge Graphs (KG) and to enable relation prediction; these applications are considered relevant for maritime security (Everwyn et al., 2019). Zhang et al. (2019) focus on the link prediction task with complex linked datasets. Their approach successfully captures crossover interactions between entities and relations when modeling KGs. d'Amato et al. 
(2022) propose an approach based on semantic similarity for generating explanations to link prediction problems on Knowledge Graphs. Bhowmik and de Melo (2020) propose a model based on a Graph Transformer that learns entity embeddings by iteratively aggregating information from neighboring nodes to tackle the problem in the case of graphs that evolve over time. Finally, our goal is to detect changes which occur over time and to evaluate information that evolves over time. Thus, this technique will identify a large number of alerts linked to a normal evolution of the characteristics of an entity. Dealing with temporal KB is still nothing trivial and mainly dealt with for Question-Answering where the relevant answers is dependent on time(Chen et al., 2022). Reasoning on such facts with intelligent systems is pretty much novel (Zhang et al., 2022), and still mainly dealt with by expert rules in operational systems. ## 3 System Breakdown In this section, we first sketch a view of UMBAR, then detail its two pillars: K-POP to extract the information, and coherence checks to perform the verification. ## 3.1 System Overview The strength of our system, illustrated in the figure 1, lies in its ability to efficiently handle the endto-end extraction and verification of valuable information from heterogeneous sources. The information contained in filtered sources is first extracted through the K-POP pipeline using transformerbased (Vaswani et al., 2017) language models. Extracted entities are compared to the existing ones ![2_image_0.png](2_image_0.png) in the KB to instantly detect and flag any inconsistencies. The end user is immediately aware of any potential alert and can then track down the cause from the relevant sources. ## 3.2 Information Extraction Relevant information such as named entities (locations, organizations, persons and equipment), events and relations between these entities are extracted using a pipeline of Natural Language Processing (NLP) modules (Prieur. et al., 2023). Named Entity Recognition (NER): The first component of this pipeline recognizes entities of interest in the text while assigning them a type with the help of the document. This block is instantiated by a fine-tuned RoBERTa (Liu et al., 2019) language model. Co-reference resolution: It is then necessary to group the mentions referring to the same textual entities. In this case, the pre-trained World-level coreference resolution model (Dobrovolskii, 2021) is used to find groups of words referring to the same concept. These results are combined with those of the first block to obtain clusters with the same type. Relation extraction: For this step we fine-tune the ATLOP model (Zhou et al., 2021) that produces an embedding of each entity at document scale before predicting the potential links (those with a predicted score greater than the null relation) between each couple. Entity Resolution: The previously extracted information constitutes a support for the entity resolution step. Each entity in the text is associated with an entity in the database, if possible. This allows to add new knowledge by completing the profile of the known entities or by creating new ones. In this pipeline, the entity resolution is solved by performing a search by mention and a selection by popularity. To each entity in the text, a list of KB entities is associated, that share the same type and at least one mention. In case the mentions do not return any results, an extended search is performed with the acronyms of these mentions. 
If no element is returned, the textual entity is added to the database. If several entities in the database match mentions of the textual cluster, a selection by popularity is applied, similarly to Al-Badrashiny et al. (2017). The entity with the most occurrences, considering all mentions, is selected.

## 3.3 Coherence Checks

The process described in Figure 2 concerns maritime events. When a new event occurs, the entities involved in it are extracted from either the text or the AIS message: equipment, locations, and organizations in our case. The KB is browsed, and a search is launched to check three conditions: whether there is an event of the same nature,2 involving the same ship(s), and occurring on the same date.

2Ten natures of events are identified: seizure/arrest, collision, damage, sink, attack, aground, entrance (a harbor), leave (a harbor), transshipment and traffic.

(i) If these three conditions are met, a similarity score Simwd(Ei, Ej) is computed, using weights wx for each attribute x. These weights take into account the relations and attributes that are likely to embed misinformation or false information: if the location and/or the unit involved in the two events to be compared are different, there is a higher probability that one of these events contains misinformation. Attributes in the KB are compared one by one using the Jaro-Winkler distance (Jaro, 1989; Winkler, 1990). This score ranges between -1 and 1.

$$Sim_{wd}(E_{i},E_{j})=\sum_{\forall x\in S}w_{x}\cdot d_{jw}(E_{i},E_{j})\quad(1)$$

where

- S ∈ [Org, Loc, Equipment, Pers]
- wx corresponds to the weight attributed to each entity.
- djw(Ei, Ej) corresponds to the Jaro-Winkler distance between event Ei and event Ej.

A threshold is fixed at 0.25: if the similarity score is less than 0.25, the event is considered to hold disinformation. If the similarity score is higher than the threshold, both events are considered coherent and the information brought by the new event corresponds to the information already in the KB.

(ii) If the three conditions are not satisfied, and an event involving the same ship on a *different date* already exists in the KB, additional information is considered by applying evolution models. Once evolution models are applied, the similarity score is computed again, and according to it, either the two events match or an incoherence between the two events is spotted. In the case where there is no event involving the same ship on a *different date* in the KB, there is a lack of information and the system cannot decide about coherence until more information is received.

## Evolution Models

Matching time-distant facts requires considering a wider spectrum of evolution (e.g., for position) during the time difference. A simple similarity between the two facts would not be effective. Three evolution models for real-world application on ships have been identified and patented (Vasnier et al., 2022). Each attribute in the KB (such as the name of the ship, its speed, location, etc.) is related to one of three types of evolution models, depending on the nature of the attribute:

- constant model: constant attributes, such as the IMO (International Maritime Organisation) number, which is a unique identification number for ships, are related to this type of model,
- predictable model: attributes that evolve over time, such as the position, the direction, or the speed of a ship, are related to this type of model.
The evolution of this kind of attributes is predictable with mathematical tools. Knowing the position of a ship and its heading direction, the geographical area in which the ship will be in a near future is predictable. - circumstantial model: this type of model is the most complex to represent and to predict. It is related to events having attributes or relations which may change on rare occasions. The attributes related to the circumstantial model are subject to change with a specific and unpredictable event. In a maritime use case, the event could be the change of the captain, or further the purchase of a ship by another company. The similarity score is computed as follows: **The boundary error is computed as follows:** $\forall p\in(E_{i}\cup E_{j})$**,** $$Sim_{evol}(E_{i},E_{j})=\frac{\sum\left(dist(p_{E_{i}},p_{E_{j}}).\gamma_{p}\right)}{\sum(\gamma_{p})}\tag{2}$$ where γp denotes the confidence weight for each property p of an event. γp is the product of (a) the reliability of the sensor that collects information on p and (b) the evolution uncertainty model of p. γp is between 0 and 1. A weight of 1 is considered as a very reliable property and a weight of 0 means that we cannot trust this very uncertain property. ## 4 Performance Analysis 4.1 K-Pop Pipeline Performance Setup: To evaluate the information extraction pipeline, we focused on the proportion of information correctly extracted and aggregated from texts into a KB. In further detail, we computed a similarity score between a base populated by the evaluated system and the ground truth KB that we should obtain from a finite set of texts. For this purpose, we tested two scenarios, a *Warm-start* scenario which consists in populating an existing base and a Cold-start scenario in which we build a KB from scratch. To this end, we used the DWIE (Zaporojets et al., 2021) dataset. This dataset consists of 800 press articles in English, written and published by Deutsche-Welle. The textual level annotations of entities, their relations, their types and a unique identifier per entity at the inter-textual level allowed us to evaluate and compare our pipeline with the model proposed by (Zaporojets et al., 2021). The pipeline has been adapted to the ontology associated with the dataset and trained on the first 700 texts that constitute the train set. To measure a similarity score, we first align entities between the two KBs using the proportion of elements in common. The Hungarian algorithm (Kuhn, 1955) is then used to optimize this alignment, thus maximizing the average F1-score. Since the model introduced by DWIE does not solve the entity resolution task, we use the same solution as the one in our pipeline. Results The results in the table 1 illustrate the better performances compared with the DWIE model. Our K-POP pipeline shows up to a 2% improvement over the DWIE model in the *Warm-start* scenario. This shows that additional information extracted by our system contributes to a better linking with the existing content. The difference in results between the two types of scenarios shows the difficulty of populating an KB. However, our model shows a better resilience due to the linking by context approach. | F1 | | | |-------|------------|------------| | Model | Cold-start | Warm-start | | KBP | 76.1 | 72.1 | | DWIE | 75.6 | 69.9 | Table 1: F1 scores on the DWIE dataset. 
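To make the evaluation protocol behind Table 1 concrete, below is a minimal sketch of aligning the entities of a populated KB with those of the ground-truth KB using the Hungarian algorithm; the mention-overlap F1 and the normalization of the average score are our own assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mention_f1(pred_mentions, gold_mentions):
    """F1 overlap between the mention sets of a predicted and a gold entity."""
    pred, gold = set(pred_mentions), set(gold_mentions)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def align_kbs(pred_entities, gold_entities):
    """Align entities of the populated KB with the ground-truth KB so that the
    average F1 over aligned pairs is maximized (Kuhn, 1955). Each entity is
    represented here by its set of surface mentions."""
    scores = np.array([[mention_f1(p, g) for g in gold_entities]
                       for p in pred_entities])
    # linear_sum_assignment minimizes cost, so negate the similarity matrix.
    rows, cols = linear_sum_assignment(-scores)
    avg_f1 = scores[rows, cols].sum() / max(len(pred_entities), len(gold_entities), 1)
    return list(zip(rows, cols)), avg_f1
```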
Although there is still room for improvement, even more so in the case of the *Warm-start* scenario, which shows the difficulty of populating an existing base, our IE (Information Extraction) solution can be considered for semi-automatic population.

## 4.2 Event Consistency Check

The aim is to ensure that the information extracted from the new event is coherent with that in the KB. If there is a contradiction with information stored in the KB, then an alert is raised. Figure 3 shows two events about the *Stena Impero* seizure in 2019. These events are extracted from two different newspapers and they contain non-matching (incoherent) information.

Figure 3: Example of two incoherent events.

They are represented in the intelligent Knowledge Base as in Figure 4. The two events are compared based on the similarity score described in Section 3.3. The weights and threshold used for the determination of the similarity score are application-dependent; they rely on the sensitivity of the end user, since the results may vary depending on optimistic or pessimistic assessment choices. They are currently defined based on end users' application needs but can be further extended to allow for automated learning of these parameters. Note that the constitution of a training dataset for such a specific, unbalanced, high-risk problem is highly non-trivial. In this case, the new event holds disinformation since the similarity score between these two events is -0.6.

## 4.3 Evolution Models In The Real World

As an example, a newspaper may relate the following event, which UMBAR will need to compare with the existing events in the KB: "On Saturday Stena Impero tanker had collided with a fishing boat, the Konarak, on its route." The information extracted from this event is shown in Table 2. We notice that the date - 19 July 2019 - is deduced from the publication date of the article containing the event.

| Event | Stena Impero tanker had collided with a fishing boat, the Konarak, on its route. |
|------------------|----------------------|
| Equipment | Stena Impero, Konarak |
| Unit | - |
| Loc. | Bandar Abbas |
| Nature of event | Collision |
| Date | 19 July 2019 |

Table 2: Information extracted from the example event.

This event conflicts with an event already present in the KB, indicating that *the Konarak is moored* in Turkey on the 13th of July. Considering that this last event is correct, evolution models are used to perform the coherence check. According to the predictable models, a reachable area in five days (from 13 to 19 July) from Turkey is computed. The ship cannot be at the port of Bandar Abbas in Iran in such a short time. While these elements of evaluation are not provided as statistical measures, the specificity of the domain (high security risk along with a low number of positive samples) makes it appropriate to evaluate the evolution models from an operational perspective.

## 5 Expert Opinion And Maritime Security

Qualitative analyses on maritime security use cases are still in progress. For production-grade real-world applications, the information extracted from AIS messages, as presented in Figure 1, will be processed so as to check the coherence of information over time and to raise alerts in case suspect information is spotted. Once combined, the aforementioned methods present features allowing to deal with actual data and information coming from heterogeneous sources, on a massive scale.
It encompasses techniques to better correlate and assess information with different timings, from both texts and sensor data domains, so that trusted KBs can be populated with a user-defined reliability. Obviously, human intervention cannot be eliminated, but adding this process to ensure maritime security will help a lot, and will increase the performance of detecting false information or attempts to manipulate information. ## 6 Conclusion This paper allowed us to present our semiautomatic end-to-end processing chain for information extraction and misinformation detection applied to a maritime surveillance use case. Maritime security being a central sector susceptible to false information leading to disastrous consequences. Although there is still room for improvement, our information extraction system and inconsistency detection provide support and alleviate the task of the operational staff in charge of monitoring. Future work will focus on improving the KBP pipeline to move towards fully automatic extraction, conducting an evaluation and further study on the detection of erroneous information. The use of UMBAR in a representative setting is planned in order to evaluate the system and qualify it for its future operational deployment. ## Limitations Building this system was nothing trivial. In our understanding the main challenges where to obtain data access, to chain very specialised artificial intelligence models, and to handle the iterations between the (machine) knowledge model, the customer expertise and the algorithms. We detail each of these challenges herein. ## Access To Annotated Data Piracy and AIS spoofing are still too frequent, even though not frequent enough so as to result in the availability of datasets to train and evaluate an automatic system. The proposed approach mainly relies on subtask evaluation (notably on the information extraction steps). The Coherence Check is fully parameterizable in order to choose a sensibility to all possible variations. A stream of work concerning the automatic/statistic evaluation of the full pipeline is still going-on. ## Hyper-Specialized Ais Most of the substasks here are instantiated by trained modules, which inherently contain an adherence to the ontology used for labelling the training dataset. Information extraction from texts were fine-tuned for short pieces of news, and limited to English. This cuts off numerous relevant sources of information, typically from local newspapers anywhere on Earth. ## Handling Business, Ontology And Algorithms Together The trend to fully automatize screening processes seems intuitive for many data scientists, but is actually not desirable for a security point of view: first, because the targeted elements are "black swans" which occur far too little in the training datasets, and more often than not, do not appear twice. Moreover, having too much confidence in the machine is clearly identified as a security risk, among other AI-system biases(Rastogi et al., 2020). Instead, the desired system should help the operator to handle more data about more incoming ships, and enabling them to focus on what is determining. ## Ethics Statement Developing AI for security purposes always come with its ethical considerations. In this case, the system performs law enforcement and fight against piracy, which are commonly assessed as noble, ethical deeds. As the application specifically targets maritime trafficking, the risk of misuse is reduced (i.e. it cannot be used to target individuals). 
This system relies on third party sources of data. As a consequence, the data processing roles are clearly and contractually established between the providers and the customer of this system, decreasing privacy risks. No personal data is required by the system; personal *public* data may be handled from the press and from the AIS information (typically, the name of the captain). The final result of the system is to gather and aggregate a complete picture of the risk level of a ship, to help an operator. The system may be used to prioritize the effort to review the documents and cargo of a ship, but cannot be used to authorize or forbid a ship's entry - this remains the decision of the operator. ## References Mohamed Al-Badrashiny, Jason Bolton, Arun Tejasvi Chaganty, Kevin Clark, Craig Harman, Lifu Huang, Matthew Lamm, Jinhao Lei, Di Lu, Xiaoman Pan, et al. 2017. Tinkerbell: Cross-lingual cold-start knowledge base construction. In TAC. Houssein Alaeddine and Cyril Ray. 2022. A hybrid artificial intelligence system for securing a maritime zone based on historical and real-time data analysis. In *OCEANS 2022, Hampton Roads*, pages 1–8. IEEE. Marco Balduzzi, Alessandro Pasta, and Kyle Wilhoit. 2014. A security evaluation of ais automated identification system. pages 436–445. Rajarshi Bhowmik and Gerard de Melo. 2020. A joint framework for inductive representation learning and explainable reasoning in knowledge graphs. *CoRR*, abs/2005.00637. Ziyang Chen, Xiang Zhao, Jinzhi Liao, Xinyi Li, and Evangelos Kanoulas. 2022. Temporal knowledge graph question answering via subgraph reasoning. Knowledge-Based Systems, 251:109134. Claudia d'Amato, Pierpaolo Masella, and Nicola Fanizzi. 2022. An approach based on semantic similarity to explaining link predictions on knowledge graphs. In IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, WI-IAT '21, page 170–177, New York, NY, USA. Association for Computing Machinery. Vladimir Dobrovolskii. 2021. Word-level coreference resolution. *arXiv preprint arXiv:2109.04127*. Jacques Everwyn, Bruno Zanuttini, Abdel-Illah Mouaddib, Sylvain Gatepaille, and Stephan Brunessaux. 2019. Achieving maritime situational awareness using knowledge graphs: a study. In *1st Maritime* Situational Awareness Workshop (MSAW 2019). Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A survey on automated fact-checking. Transactions of the Association for Computational Linguistics, 10:178–206. Matthew A. Jaro. 1989. Advances in record-linkage methodology as applied to matching the 1985 census of tampa, florida. *Journal of the American Statistical* Association, 84(406):414–420. Harold W Kuhn. 1955. The hungarian method for the assignment problem. *Naval research logistics quarterly*, 2(1-2):83–97. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Maxime Prieur., Cédric Mouza., Guillaume Gadek., and Bruno Grilheres. 2023. Evaluating and improving end-to-end systems for knowledge base population. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART,, pages 641–649. INSTICC, SciTePress. Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R Varshney, Amit Dhurandhar, and Richard Tomsett. 2020. Deciding fast and slow: The role of cognitive biases in ai-assisted decision-making. 
*arXiv preprint* arXiv:2010.07938. Noureddine Seddari, Abdelouahid Derhab, Mohamed Belaoued, Waleed Halboob, Jalal Al-Muhtadi, and Abdelghani Bouras. 2022. A hybrid linguistic and knowledge-based analysis approach for fake news detection on social media. *IEEE Access*, 10:62097– 62109. Kilian Vasnier, Sylvain Gatepaille, and Valerian Justine. 2022. Method and system for merging information. Patent No. US20220374464A1. https://patents. google.com/patent/US20220374464A1. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. William Winkler. 1990. String comparator metrics and enhanced decision rules in the fellegi-sunter model of record linkage. *Proceedings of the Section on Survey* Research Methods. Klim Zaporojets, Johannes Deleu, Chris Develder, and Thomas Demeester. 2021. Dwie: An entity-centric dataset for multi-task document-level information extraction. *Information Processing & Management*, 58(4):102563. Jiasheng Zhang, Shuang Liang, Yongpan Sheng, and Jie Shao. 2022. Temporal knowledge graph representation learning with local and global evolutions. Knowledge-Based Systems, 251:109234. Wen Zhang, Bibek Paudel, Wei Zhang, Abraham Bernstein, and Huajun Chen. 2019. Interaction embeddings for prediction and explanation in knowledge graphs. *CoRR*, abs/1903.04750. Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. In *Proceedings of the AAAI conference on artificial intelligence*, volume 35, pages 14612–14620.
kamalloo-etal-2023-evaluating-embedding
Evaluating Embedding {API}s for Information Retrieval
https://aclanthology.org/2023.acl-industry.50
The ever-increasing size of language models curtails their widespread access to the community, thereby galvanizing many companies and startups into offering access to large language models through APIs. One particular API, suitable for dense retrieval, is the semantic embedding API that builds vector representations of a given text. With a growing number of APIs at our disposal, in this paper, our goal is to analyze semantic embedding APIs in realistic retrieval scenarios in order to assist practitioners and researchers in finding suitable services according to their needs. Specifically, we wish to investigate the capabilities of existing APIs on domain generalization and multilingual retrieval. For this purpose, we evaluate the embedding APIs on two standard benchmarks, BEIR, and MIRACL. We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective on English, in contrast to the standard practice, i.e., employing them as first-stage retrievers. For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best albeit at a higher cost. We hope our work lays the groundwork for thoroughly evaluating APIs that are critical in search and more broadly, in information retrieval.
# Evaluating Embedding Apis For Information Retrieval Ehsan Kamalloo† Xinyu Zhang† Odunayo Ogundepo† **Nandan Thakur**† David Alfonso-Hermelo§ Mehdi Rezagholizadeh§ **Jimmy Lin**† † David R. Cheriton School of Computer Science, University of Waterloo § Huawei Noah's Ark Lab ekamalloo@uwaterloo.ca ## Abstract The ever-increasing size of language models curtails their widespread availability to the community, thereby galvanizing many companies into offering access to large language models through APIs. One particular type, suitable for dense retrieval, is a semantic embedding service that builds vector representations of input text. With a growing number of publicly available APIs, our goal in this paper is to analyze existing offerings in realistic retrieval scenarios, to assist practitioners and researchers in finding suitable services according to their needs. Specifically, we investigate the capabilities of existing semantic embedding APIs on domain generalization and multilingual retrieval. For this purpose, we evaluate these services on two standard benchmarks, BEIR and MIRACL. We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective in English, in contrast to the standard practice of employing them as first-stage retrievers. For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best, albeit at a higher cost. We hope our work lays the groundwork for evaluating semantic embedding APIs that are critical in search and more broadly, for information access. ## 1 Introduction Language models (LMs), pre-trained on a massive amount of text, power dense retrieval models in ad hoc retrieval (Lin et al., 2021b). Dense retrievers (Lee et al. 2019; Karpukhin et al. 2020; Xiong et al. 2021; Khattab and Zaharia 2020; Hofstätter et al. 2021; Izacard et al. 2022; *inter alia*) essentially measure relevance via similarity between the representations of documents and queries. As LMs are rapidly scaling up to gigantic models (Radford et al. 2019; Brown et al. 2020; Lieber et al. 2021; Chowdhery et al. 2022; Smith et al. 2022, *inter alia*), their use as the backbone of dense retrieval models has become limited primarily because large language models (LLMs) are computationally expensive and deploying them on commodity hardware is cumbersome and often impractical. To alleviate this problem, many companies, e.g., OpenAI, and Cohere, set out to offer access to their proprietary LLMs through a family of APIs. For dense retrieval, semantic embedding APIs are designed to provide LLM representations for queries as well as documents. These APIs are especially appealing in the IR ecosystem because they afford practitioners and researchers the benefit of scale and allow for wider outreach of LLMs in IR. However, although nowadays, the surge of companies offering such APIs with various model sizes has given us more options, a lack of thorough analysis of these APIs has made it more difficult to determine one's best option for a particular use-case. Besides, LLM-based APIs are often expensive and experimenting with all of them to determine the most suitable is prohibitively costly. In this paper, we analyze embedding APIs for various realistic scenarios in ad hoc retrieval. To this end, we select three embedding APIs available on the market, i.e., OpenAI, Cohere, and AlephAlpha, and assess their usability and effectiveness on two crucial directions that stand at the core of most IR applications. 
First, we study domain generalization, where retrieval is conducted over collections drawn from a broad range of domains. Understanding for which domains embedding APIs work well or poorly elucidates their limitations while setting the stage for their wide adoption in their successful domains. We leverage the widely adopted BEIR benchmark (Thakur et al., 2021) for this purpose. On BEIR, we use the APIs as re-rankers on top of BM25 retrieved documents because the large size of document collections in BEIR makes full-ranking (i.e., first-stage retrieval) via the APIs impractical. Our results show that embedding APIs are reasonably effective re-rankers in most domains, suggesting that re-ranking is not only budget-friendly, but also effective. However, we find that on datasets collected via lexical matching, they struggle. In particular, BM25 outperforms the full-fledged embedding APIs on BioASQ (bio-medical retrieval) and Signal1M (tweet retrieval).

We also explore the capabilities of embedding APIs in multilingual retrieval, where they are tested on several non-English languages, ranging from low-resource to high-resource languages. More precisely, we use MIRACL (Zhang et al., 2022), a large-scale multilingual retrieval benchmark that spans 18 diverse languages. The manageable size of the corpora allows us to evaluate the APIs as full-rankers as well as re-rankers. We find that the winning recipe for non-English retrieval is not re-ranking, unlike retrieval on English documents. Instead, building hybrid models with BM25 yields the best results. Our experiments also indicate that the APIs are powerful for low-resource languages, whereas on high-resource languages, open-source models work better.

Overall, our findings offer insights on using embedding APIs in real-world scenarios through two crucial aspects of IR systems. In summary, our key contributions are:

- We extensively review the usability of commercial embedding APIs for realistic IR applications involving domain generalization and multilingual retrieval.
- We provide insights on how to effectively use these APIs in practice.

We hope our work lays the groundwork for thoroughly evaluating APIs that are critical in search and more broadly, for information access.

## 2 Related Work

Sentence Embeddings. Numerous studies have attempted to build universal representations of sentences using supervision via convolutional neural networks (Kalchbrenner et al., 2014), recurrent neural networks (Conneau et al., 2017), or Transformers (Cer et al., 2018). Other approaches learn sentence embeddings in a self-supervised fashion (Kiros et al., 2015) or in an unsupervised manner (Zhang et al., 2020; Li et al., 2020). Recent techniques frame the task as a contrastive learning problem (Reimers and Gurevych, 2019; Li et al., 2020; Gao et al., 2021; Kim et al., 2021; Ni et al., 2022). Embedding APIs largely follow a similar strategy to generate sentence representations (Neelakantan et al., 2022).

Dense Retrieval. While the paradigm has been around for a long time (Yih et al., 2011), the emergence of pre-trained LMs brought dense retrieval (Lee et al., 2019; Karpukhin et al., 2020) to the mainstream in IR. Recent dense retrieval models adopt a bi-encoder architecture and generally use contrastive learning to distinguish relevant documents from non-relevant ones (Lin et al., 2021b), similar to sentence embedding models.
LMs are shown to be an effective source to extract representations (Karpukhin et al., 2020; Xiong et al., 2021; Hofstätter et al., 2021; Khattab and Zaharia, 2020; Izacard and Grave, 2021; Izacard et al., 2022). This essentially means that with LMs as the backbones and analogous objectives, dense retrievers and sentence embedding models have become indistinguishable in practice.

## 3 APIs

Semantic embedding APIs are generally based on the so-called *bi-encoder* architecture, where queries and documents are fed to a fine-tuned LM in parallel (Seo et al., 2018; Karpukhin et al., 2020; Reimers and Gurevych, 2019). The key ingredient of bi-encoders is contrastive learning, whose objective is to enable models to distinguish relevant documents from non-relevant ones. In our experiments, we adopt the following semantic embedding APIs, presented alphabetically:1

1Information about the number of parameters is obtained from https://crfm.stanford.edu/helm/latest/, accessed on May 15, 2023.

Aleph-Alpha: This company has trained a family of multilingual LMs, named luminous,2 with three flavours in size: base (13B), extended (30B), and supreme (70B). The luminous models support five high-resource languages: English, French, German, Italian, and Spanish. However, no information is available about the data on which these LMs are trained. We used luminous-base, which projects text into 5120-dimension embedding vectors.

2https://docs.aleph-alpha.com/docs/introduction/luminous/

Cohere: This company offers LMs for producing semantic representations in two sizes: small (410M) and large (6B), generating 1024-dimension and 4096-dimension embedding vectors,
- **Data:** Chief among the differences is the data on which the embedding models are trained. As OpenAI and Cohere state in their documentation, the data is mostly mined from the web, but the details of the data curation process remain largely unknown. In addition, considering that each company has its own models, differences in pre-training corpora form yet another important variable in the complex process of building embedding APIs.

These distinctions may potentially lead to substantial differences in the overall effectiveness of the embedding APIs. Nevertheless, due to the non-disclosure of several details by the API providers, it remains challenging to identify the specific factors that contribute to the strengths and weaknesses of embedding models. Yet, as the number of such APIs continues to grow, we believe that high-level comparisons on standard benchmarks can provide valuable insights into how well these models operate under various practical scenarios. For practitioners building systems on top of these APIs, this comparative analysis is useful, as they are primarily interested in the end-to-end effectiveness and performance of these APIs and are not typically concerned with their minutiae.

## 3.1 Usability

One of the critical advantages of using embedding APIs is their ease of use. For IR applications, even running an LLM to encode large document collections requires hefty resources, let alone training retrieval models. Thus, the emergence of such APIs makes LLMs more accessible to the community and paves the way for faster development of IR systems; this is most definitely a positive development. However, these advantages rest on the usability of the APIs. In this section, we briefly overview some factors that affect the usability of embedding APIs.

**Setup.** Basic information on how users can set up the proper environment to use the embedding APIs is the first step. All three companies provide detailed introductory documentation for this purpose. The procedure is nearly identical for all three at a high level: users need to create an account and generate an API key for authentication. The companies also furnish web interfaces that enable users to monitor their usage history and available credit, in addition to configuring limits to prevent unintended charges.

**Client libraries.** All three companies have developed open-source client libraries to facilitate access to the APIs. OpenAI provides libraries for Python and Node.js; there are also unofficial community libraries for other programming languages. Cohere offers development toolkits in Python, Node.js, and Go. Aleph-Alpha provides a library in Python. All libraries are structured in a similar way. One difference we notice is that Cohere has a text truncation feature for when the input text exceeds the API's input length limit. OpenAI and Aleph-Alpha raise an error in this case, meaning that API users need to implement additional checks to avoid such exceptions. On the other hand, Cohere's API can truncate text from the left or the right, and can also provide an average embedding for long texts of up to 4096 tokens by averaging over 512-token spans.

**Documentation.** All three companies provide a technical API reference, explaining the inputs, responses, and errors of their APIs. Additionally, all companies provide tutorials and blog posts with examples on how to use their client libraries; a minimal usage sketch is shown below.
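To make the client-library workflow concrete, here is a minimal sketch of requesting embeddings in Python. It reflects the v0.x openai client and the cohere client roughly as they existed in early 2023; the model names, parameters, and return-value access are shown as examples and may differ in later releases.

```python
import openai   # openai-python v0.x style client
import cohere

openai.api_key = "<OPENAI_API_KEY>"
co = cohere.Client("<COHERE_API_KEY>")

texts = ["what is dense retrieval?", "BM25 is a lexical ranking function."]

# OpenAI: a single call can embed a batch of texts.
openai_resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
openai_vectors = [item["embedding"] for item in openai_resp["data"]]

# Cohere: long inputs can be truncated (here from the right) instead of raising an error.
cohere_resp = co.embed(texts=texts, model="large", truncate="RIGHT")
cohere_vectors = cohere_resp.embeddings
```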
**Latency.** The APIs are all offered with a liberal rate limit, i.e., OpenAI at 3K requests per minute and Cohere at 10K requests per minute.5 We find that API calls are mostly reliable and request service errors are scattershot. Each API call takes up to roughly 400ms, consistent across all three companies (at least at the time of our experiments). However, latency presumably depends on the server workload and other factors, because we observe variability at different points in time. We also find that latency depends on the input length, as computing embeddings for queries is generally faster than computing embeddings for documents (as expected). Finally, we appreciate that Cohere's and OpenAI's APIs support bulk calls of up to 96 and 2048 texts per call, respectively, whereas for Aleph-Alpha, only one text can be passed in each API call. This bulk call feature considerably speeds up encoding document collections.

5 We were not able to find the rate limits for Aleph-Alpha.

**Cost.** Our analysis is based on information reported as of Feb 1, 2023. OpenAI and Aleph-Alpha charge based on the number of tokens and model size: ada2 and luminous-base cost $0.0004 USD and €0.078 ≈ $0.086 per 1,000 tokens, respectively.6 On the other hand, Cohere follows a simpler cost structure, charging based only on the number of API calls, i.e., $1.00 USD per 1,000 calls. Our re-ranking experiments on BEIR cost around $170 USD on OpenAI, whereas they would cost roughly $2,500 USD on Cohere based on their quoted prices. The cost of our re-ranking experiments on MIRACL for three languages (German, Spanish, and French) hovers around €116 ≈ $128 using Aleph-Alpha and Cohere. Cohere offers free-tier access with a restricted API call rate limit of 100 calls per minute, which we opted for, albeit sacrificing speed.

6 €1.00 ≈ $1.10 as of Feb 1, 2023.

## 4 Experiments

In this section, our main goal is to evaluate embedding APIs in two real-world scenarios that often arise in IR applications: domain generalization and multilingual retrieval.

## 4.1 BEIR

We first evaluate the generalization capabilities of embedding APIs across a variety of domains. To this end, we measure their effectiveness on BEIR (Thakur et al., 2021), a heterogeneous evaluation benchmark intended to gauge the domain generalization of retrieval models. BEIR consists of 18 retrieval datasets across 9 domains, and Thakur et al. (2021) showed that BM25 is a strong baseline, surpassing most dense retrieval models.

We adopt the embedding API as a re-ranking component on top of BM25 retrieved results. Re-ranking is a more realistic scenario than full ranking because the number of documents to encode in re-ranking is commensurate with the number of test queries, which is orders of magnitude smaller than the collection size, usually comprising millions of documents. Thus, re-ranking is more efficient and cheaper than full ranking. For the BM25 retrieval, we use Anserini (Yang et al., 2018) to index the corpora in BEIR and retrieve the top-100 passages for each dataset. Then, the queries and the retrieved passages are encoded using the embedding APIs. We reorder the retrieval output based on the similarity between query embeddings and those of the passages; a minimal sketch of this re-ranking step is shown at the end of this subsection.

In addition to BM25, our baselines include the following dense retrieval models:
- TASB (Hofstätter et al., 2021), a prominent dense retrieval model that leverages topic-aware sampling of queries during training to construct more informative and more balanced contrastive examples in terms of sample difficulty. TASB is built on DistilBERT (Sanh et al., 2019) and is fine-tuned on MS MARCO (Bajaj et al., 2018).
- cpt (Neelakantan et al., 2022), an earlier version of OpenAI's embedding service.

Table 1: nDCG@10 on BEIR for full-ranking models and for BM25 top-100 re-ranking with the embedding APIs.

| Task | Domain | BM25 (full) | TASB (full) | cpt-S (full) | TASB (re-rank) | Cohere-large (re-rank) | Cohere-small (re-rank) | OpenAI-ada2 (re-rank) |
|---|---|---|---|---|---|---|---|---|
| TREC-COVID | Bio-Medical | 0.595 | 0.319 | 0.679 | 0.728 | 0.801 | 0.776 | 0.813 |
| BioASQ | Bio-Medical | 0.523 | 0.481 | - | 0.467 | 0.419 | 0.423 | 0.491 |
| NFCorpus | Bio-Medical | 0.322 | 0.360 | 0.332 | 0.334 | 0.347 | 0.324 | 0.358 |
| NQ | Wikipedia | 0.306 | 0.463 | - | 0.452 | 0.491 | 0.453 | 0.482 |
| HotpotQA | Wikipedia | 0.633 | 0.584 | 0.594 | 0.628 | 0.580 | 0.523 | 0.654 |
| FiQA-2018 | Finance | 0.236 | 0.300 | 0.384 | 0.308 | 0.411 | 0.374 | 0.411 |
| Signal-1M | Twitter | 0.330 | 0.288 | - | 0.329 | 0.306 | 0.295 | 0.329 |
| TREC-NEWS | News | 0.395 | 0.377 | - | 0.436 | 0.461 | 0.447 | 0.495 |
| Robust04 | News | 0.407 | 0.428 | - | 0.399 | 0.489 | 0.467 | 0.509 |
| ArguAna | Misc. | 0.397 | 0.427 | 0.470 | 0.436 | 0.467 | 0.438 | 0.567 |
| Touché-2020 | Misc. | 0.442 | 0.163 | 0.285 | 0.292 | 0.276 | 0.275 | 0.280 |
| CQADupStack | StackEx. | 0.302 | 0.314 | - | 0.324 | 0.411 | 0.384 | 0.391 |
| Quora | Quora | 0.789 | 0.835 | 0.706 | 0.841 | 0.886 | 0.866 | 0.876 |
| DBPedia | Wikipedia | 0.318 | 0.384 | 0.362 | 0.389 | 0.372 | 0.344 | 0.402 |
| SCIDOCS | Scientific | 0.149 | 0.149 | - | 0.156 | 0.194 | 0.182 | 0.186 |
| FEVER | Wikipedia | 0.651 | 0.700 | 0.721 | 0.728 | 0.674 | 0.617 | 0.773 |
| Climate-FEVER | Wikipedia | 0.165 | 0.228 | 0.185 | 0.243 | 0.259 | 0.246 | 0.237 |
| SciFact | Scientific | 0.679 | 0.643 | 0.672 | 0.661 | 0.721 | 0.670 | 0.736 |
| Avg. nDCG@10 | | 0.424 | 0.414 | - | 0.453 | 0.476 | 0.450 | 0.500 |

The results are presented in Table 1.7 TASB re-ranking results show a +4% increase over TASB full-ranking on average, showing that re-ranking via bi-encoder models is indeed a viable method. We observe that OpenAI's ada2 is the most effective model, surpassing TASB and Cohere-large by +4.7% and +2.4% on average, respectively. However, Cohere-large outperforms ada2 on 5 tasks. Specifically, Cohere-large achieves the highest nDCG@10 on NQ (question answering), SCIDOCS (citation prediction), Climate-FEVER (fact verification), and both duplicate question retrieval tasks, i.e., CQADupStack and Quora. Also, we observe that Cohere-small trails Cohere-large by 2.6% on average and is nearly on par with TASB. Finally, an interesting observation is that BM25 leads all other models on 3 tasks: BioASQ, Signal-1M, and Touché-2020. These are datasets collected based on lexical matching, suggesting that embedding APIs struggle in finding lexical overlaps.

7 We did not test Aleph-Alpha's luminous on BEIR due to budget constraints.
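Both the BEIR experiment above and the MIRACL experiment below use the same re-ranking recipe: encode the query and the BM25 top-100 candidates, then reorder the candidates by embedding similarity. The following is an illustrative sketch; `embed` is a placeholder for whichever embedding API is used, not an actual client call.

```python
import numpy as np

def rerank_with_embeddings(query_text, bm25_hits, embed):
    """bm25_hits: list of (doc_id, passage_text) pairs, the BM25 top-100 candidates.
    embed: callable mapping a list of texts to an (n, dim) array of embeddings."""
    vectors = np.asarray(embed([query_text] + [text for _, text in bm25_hits]), dtype=float)
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)   # unit-normalize rows
    query_vec, passage_vecs = vectors[0], vectors[1:]
    sims = passage_vecs @ query_vec                             # cosine similarity to the query
    order = np.argsort(-sims)                                   # highest similarity first
    return [(bm25_hits[i][0], float(sims[i])) for i in order]
```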
## 4.2 Multilingual Retrieval: MIRACL

We further assess the embedding APIs in the multilingual retrieval setting, where the aim is to build retrieval models that can operate in several languages while maintaining their retrieval effectiveness across languages. For this purpose, we use MIRACL (Zhang et al., 2022), a large-scale multilingual retrieval benchmark that spans 18 languages with more than 725K relevance judgments collected from native speakers. We test Cohere's multilingual model as well as Aleph-Alpha's luminous on MIRACL. OpenAI does not recommend using their embedding service for non-English documents, and thus their API was omitted from this experiment.

Analogous to the previous experiment, we adopt a re-ranking strategy on the top-100 passages retrieved by BM25. For Cohere, we carry out full-ranking retrieval to draw a comparison with first-stage retrieval models. We also construct a hybrid model combining BM25 and Cohere by interpolating their normalized retrieval scores, following Zhang et al. (2022); a sketch of this score fusion is shown below. The baselines are also taken from that paper: BM25, mDPR, and the hybrid model mDPR+BM25. We reuse the indexes provided in Pyserini (Lin et al., 2021a) to generate the baseline runs. For all models, we measure nDCG@10 and Recall@100.

The results on the MIRACL dev set are reported in Table 2. Re-ranking BM25 via Cohere yields better overall results (0.542) compared to full-ranking (0.512), which is consistent with our observation on BEIR. However, while the two re-ranking models, luminous and Cohere, surpass BM25 on all languages, they lag behind the full-ranking hybrid models. The results show that the winning recipe here is to build a hybrid model, i.e., first perform retrieval on the entire corpus and then combine the results with BM25. In particular, Cohere+BM25 achieves the highest average nDCG@10, outperforming the other models on 7 languages. The second best model overall is the other hybrid model, mDPR+BM25, trailing Cohere+BM25 by −1.2%.

Table 2: nDCG@10 on the MIRACL dev set. BM25, mDPR, mDPR+BM25, Cohere, and Cohere+BM25 are full-ranking runs; the last two columns re-rank the BM25 top-100 passages.

| Language | # Q | BM25 | mDPR | mDPR+BM25 | Cohere | Cohere+BM25 | Cohere (re-rank) | luminous-base (re-rank) |
|---|---|---|---|---|---|---|---|---|
| Arabic | 2,896 | 0.481 | 0.499 | 0.673 | 0.617 | **0.686** | 0.667 | - |
| Bengali | 411 | 0.508 | 0.443 | 0.654 | 0.594 | **0.676** | 0.634 | - |
| German | 305 | 0.226 | 0.490 | **0.565** | 0.436 | 0.468 | 0.414 | 0.396 |
| Spanish | 648 | 0.319 | 0.478 | **0.641** | 0.233 | 0.349 | 0.507 | 0.482 |
| Persian | 632 | 0.333 | 0.480 | **0.594** | 0.471 | 0.520 | 0.484 | - |
| Finnish | 1,271 | 0.551 | 0.472 | 0.672 | 0.634 | **0.716** | 0.675 | - |
| French | 343 | 0.183 | 0.435 | **0.523** | 0.462 | 0.434 | 0.443 | 0.415 |
| Hindi | 350 | 0.458 | 0.383 | 0.616 | 0.493 | **0.623** | 0.573 | - |
| Indonesian | 960 | 0.449 | 0.272 | 0.443 | 0.446 | **0.565** | 0.505 | - |
| Japanese | 860 | 0.369 | 0.439 | **0.576** | 0.460 | 0.557 | 0.516 | - |
| Korean | 213 | 0.419 | 0.419 | **0.609** | 0.496 | 0.597 | 0.546 | - |
| Russian | 1,252 | 0.334 | 0.407 | **0.532** | 0.469 | 0.528 | 0.447 | - |
| Swahili | 482 | 0.383 | 0.299 | 0.446 | **0.611** | 0.608 | 0.543 | - |
| Telugu | 828 | 0.494 | 0.356 | 0.602 | 0.613 | **0.686** | 0.638 | - |
| Thai | 733 | 0.484 | 0.358 | 0.599 | 0.546 | **0.678** | 0.606 | - |
| Yoruba | 119 | 0.019 | 0.396 | 0.374 | **0.762** | 0.735 | 0.629 | - |
| Chinese | 393 | 0.180 | 0.512 | **0.526** | 0.365 | 0.416 | 0.389 | - |
| Avg. nDCG@10 | | 0.364 | 0.420 | 0.567 | 0.512 | **0.579** | 0.542 | - |

We further investigate how the models perform on low-, medium-, and high-resource languages. To this end, following the categorization of Wu and Dredze (2020), we group languages into three categories based on the number of articles they contain in Wikipedia, reported in Zhang et al. (2022): low-resource (<200K), medium-resource (>200K but <600K), and high-resource (>600K). We measure the average nDCG@10 and Recall@100 for each language category. The results are visualized in Figure 1.

![5_image_0.png](5_image_0.png)

The effectiveness of BM25 on low-resource languages is nearly on par with its effectiveness on high-resource languages. Interestingly, mDPR+BM25 consistently performs well across the three language categories. On the other hand, Cohere's hybrid and standalone models excel on low-resource languages and are competitive with mDPR+BM25 on medium-resource languages.
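For reference, the hybrid runs above interpolate normalized BM25 and dense scores per query. A minimal illustrative sketch of such score fusion follows; the min-max normalization and the equal interpolation weight are example choices, not necessarily the exact recipe of Zhang et al. (2022).

```python
def min_max_normalize(scores: dict) -> dict:
    lo, hi = min(scores.values()), max(scores.values())
    return {doc: (s - lo) / (hi - lo + 1e-9) for doc, s in scores.items()}

def hybrid_fuse(bm25_scores: dict, dense_scores: dict, alpha: float = 0.5) -> list:
    """bm25_scores, dense_scores: doc_id -> retrieval score for a single query."""
    b, d = min_max_normalize(bm25_scores), min_max_normalize(dense_scores)
    fused = {doc: alpha * b.get(doc, 0.0) + (1 - alpha) * d.get(doc, 0.0)
             for doc in set(b) | set(d)}
    return sorted(fused.items(), key=lambda item: -item[1])   # best fused score first
```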
However, on high-resource languages, mDPR+BM25 outperforms Cohere's hybrid model due in part to the prevalence of text in these languages during mBERT pre-training (Wu and Dredze, 2020) in mDPR. ## 5 Conclusion The incredible capabilities of Transformer-based language models at scale have attracted a handful of companies to offer access to their proprietary LLMs via APIs. In this paper, we aim to qualitatively and quantitatively examine semantic embedding APIs that can be used for information retrieval. Our primary focus is to assess existing APIs for domain generalization and multilingual retrieval. Our findings suggest that re-ranking BM25 results is a suitable and cost-effective option for English; on the BEIR benchmark, OpenAIada2 performs the best on average. In multilingual settings, while re-ranking remains a viable technique, a hybrid approach produces the most favorable results. We hope that our insights aid practitioners and researchers in selecting appropriate APIs based on their needs in this rapidly growing market. ## Limitations Similar to other commercial products, embedding APIs are subject to changes that could potentially impact their effectiveness, pricing, and usability. Thus, it is important to note that our findings are specific to the APIs accessed during January and February 2023. Nevertheless, we believe our evaluation framework can serve to thoroughly assess future releases of these APIs. Moreover, we limit our focus to the effectiveness and robustness of semantic embedding APIs. Nonetheless, safe deployment of retrieval systems for real-world applications necessitates the evaluation of their fairness as well as additional considerations. Despite their scale, language models have been found to learn, and sometimes perpetuate societal biases and harmful stereotypes ingrained in the training corpus (Bender et al., 2021). Consequently, it is crucial to assess potential biases in the embedding APIs with respect to protected and marginalized groups. This paper does not delve into this aspect of API evaluation and further research is required to examine these and other issues in real-world applications. ## Acknowledgements We thank Aleph-Alpha for providing us with credits to explore and test their APIs. This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. ## References Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv preprint arXiv:1611.09268. Emily M. Bender, Timnit Gebru, Angelina McMillanMajor, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610–623. Association for Computing Machinery. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. 
In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways. *arXiv preprint* arXiv:2204.02311. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In *Proceedings of* the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 113–122. Association for Computing Machinery. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. *Transactions* on Machine Learning Research. Gautier Izacard and Edouard Grave. 2021. Distilling knowledge from reader to retriever for question answering. In International Conference on Learning Representations. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In *Proceedings of the 52nd Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 655–665, Baltimore, Maryland. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over bert. In *Proceedings of* the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39–48. Association for Computing Machinery. Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021. Self-guided contrastive learning for BERT sentence representations. In *Proceedings of the 59th Annual* Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2528–2540, Online. Association for Computational Linguistics. Jamie Kiros, Yukun Zhu, Russ R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, volume 28, pages 3294–3302. Curran Associates, Inc. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119–9130, Online. Association for Computational Linguistics. Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. 2021. Jurassic-1: Technical details and evaluation. Technical report, AI21 Labs. Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, JhengHong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021a. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356–2362. Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2021b. Pretrained transformers for text ranking: BERT and beyond. *Synthesis Lectures on Human* Language Technologies, 14(4):1–325. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 220–229. Association for Computing Machinery. Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. 2022. Text and code embeddings by contrastive pre-training. arXiv preprint arXiv:2201.10005. Jianmo Ni, Gustavo Hernandez Abrego, Noah Constant, Ji Ma, Keith Hall, Daniel Cer, and Yinfei Yang. 2022. Sentence-t5: Scalable sentence encoders from pretrained text-to-text models. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 1864–1874, Dublin, Ireland. Association for Computational Linguistics. 
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In *Advances in Neural Information* Processing Systems, pages 27730–27744. Curran Associates, Inc. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In *Proceedings of the 2019 Conference on* Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. *arXiv* preprint arXiv:1910.01108. Minjoon Seo, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phraseindexed question answering: A new challenge for scalable document comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 559–564, Brussels, Belgium. Association for Computational Linguistics. Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. 2022. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model. *arXiv preprint arXiv:2201.11990*. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In *Proceedings of* the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1. Curran Associates, Inc. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Curran Associates, Inc. Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In *Proceedings* of the 5th Workshop on Representation Learning for NLP, pages 120–130, Online. Association for Computational Linguistics. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations. Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. 2020. On layer normalization in the transformer architecture. In International Conference on Machine Learning, volume 119, pages 10524–10533. PMLR. Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: Reproducible ranking baselines using Lucene. *Journal of Data and Information Quality (JDIQ)*, 10(4):1– 20. Wen-tau Yih, Kristina Toutanova, John C. Platt, and Christopher Meek. 
2011. Learning discriminative projections for text similarity measures. In *Proceedings of the Fifteenth Conference on Computational* Natural Language Learning, pages 247–256, Portland, Oregon, USA. Association for Computational Linguistics. Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. 2022. Making a MIRACL: Multilingual information retrieval across a continuum of languages. *arXiv preprint arXiv:2210.09984*. Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence embedding method by mutual information maximization. In *Proceedings of the 2020 Conference on* Empirical Methods in Natural Language Processing (EMNLP), pages 1601–1610, Online. Association for Computational Linguistics.
wojcik-etal-2023-domain
Domain-Agnostic Neural Architecture for Class Incremental Continual Learning in Document Processing Platform
https://aclanthology.org/2023.acl-industry.51
Production deployments in complex systems require ML architectures to be highly efficient and usable against multiple tasks. Particularly demanding are classification problems in which data arrives in a streaming fashion and each class is presented separately. Recent methods with stochastic gradient learning have been shown to struggle in such setups or have limitations like memory buffers, and being restricted to specific domains that disable its usage in real-world scenarios. For this reason, we present a fully differentiable architecture based on the Mixture of Experts model, that enables the training of high-performance classifiers when examples from each class are presented separately. We conducted exhaustive experiments that proved its applicability in various domains and ability to learn online in production environments. The proposed technique achieves SOTA results without a memory buffer and clearly outperforms the reference methods.
# Domain-Agnostic Neural Architecture For Class Incremental Continual Learning In Document Processing Platform Mateusz Wójcik1,2**, Witold Kosciukiewicz** ´ 1,2**, Mateusz Baran**1,2, Tomasz Kajdanowicz1**, Adam Gonczarek**2 1Wroclaw University of Science and Technology 2Alphamoon Ltd., Wrocław {mateusz.wojcik,tomasz.kajdanowicz}@pwr.edu.pl adam.gonczarek@alphamoon.ai ## Abstract ![0_Image_0.Png](0_Image_0.Png) Production deployments in complex systems require ML architectures to be highly efficient and usable against multiple tasks. Particularly demanding are classification problems in which data arrives in a streaming fashion and each class is presented separately. Recent methods with stochastic gradient learning have been shown to struggle in such setups or have limitations like memory buffers, and being restricted to specific domains that disable its usage in realworld scenarios. For this reason, we present a fully differentiable architecture based on the Mixture of Experts model, that enables the training of high-performance classifiers when examples from each class are presented separately. We conducted exhaustive experiments that proved its applicability in various domains and ability to learn online in production environments. The proposed technique achieves SOTA results without a memory buffer and clearly outperforms the reference methods. ## 1 Introduction Solutions based on deep neural networks have already found their applications in almost every domain that can be automated. An essential part of them is NLP, the development of which has gained particular momentum with the beginning of the era of transformers (Vaswani et al., 2017). Complex and powerful models made it possible to solve problems such as text classification with a previously unattainable accuracy. However, exploiting the capabilities of such architectures in real-world systems requires online learning after deployment. This is especially difficult in dynamically changing environments that require the models to be frequently retrained due to domain or class setup shifts. An example of such environment is Alphamoon Workspace1 where the presented architecture will be deployed as a model for document 1https://alphamoon.ai/ classification since we noticed the emerging need for online learning. We observed that the users' data in document classification process is changing frequently and such shifts often decrease the model accuracy. As a result, we have to retrain the models manually ensuing a time-consuming process. Our goal was to design an effective approach to incremental learning that will be used in a continual learning module of our system (Figure 1). Recently, neural architectures have become effective and widely used in classification problems (Devlin et al., 2018; Rawat and Wang, 2017). The parameter optimization process based on gradient descent works well when the data set is sufficiently large and fully available during the training process. Otherwise, the catastrophic forgetting (French, 1999) may occur, which makes neural networks unable to be trained incrementally. Continual learning aims to develop methods that enable accumulating new knowledge without forgetting previously learnt one. In this paper, we present a domain-agnostic architecture for online class incremental continual learning called DE&E (Deep Encoders and Ensembles). 
Inspired by the E&E method (Shanahan et al., 2021), we proposed a method that increases its accuracy, provides full differentiability, and, most importantly, can effectively solve real-world classification problems in production environments. Our contribution is as follows: 1) we introduced a differentiable KNN layer (Xie et al., 2020) into the model architecture, 2) we proposed a novel approach to aggregate classifier predictions in the ensemble, 3) we performed exhaustive experiments showing the ability to learn incrementally and real-world usability, 4) we demonstrate the effectiveness of the proposed architecture by achieving SOTA results on various data sets without a memory buffer. ## 2 Related Work 2.1 Continual Learning 2.1.1 Methods Currently, methods with a memory buffer such as GEM (Lopez-Paz and Ranzato, 2017), A-GEM (Chaudhry et al., 2019a) or DER (Buzzega et al., 2020) usually achieve the highest performance in all continual learning scenarios (Mai et al., 2022). Such methods store part of the data in the memory and this data is successively replayed during training on new, unseen examples. However, the requirement to store data in memory disqualifies these methods in many practical applications due to privacy policies or data size (Salem et al., 2018). This forces attention toward other approaches, such as parameter regularization. The most popular methods in this group include EWC (Kirkpatrick et al., 2016) and LWF (Li and Hoiem, 2017). When receiving a new dose of knowledge, these methods attempt to influence the model parameter updating procedure to be minimally invasive. As research shows (Van de Ven and Tolias, 2019), regularization-based methods fail in class incremental scenarios making them ineffective in many real-world cases. ## 2.1.2 Approaches For Nlp Almost all prior works focus on the development of continual learning methods in the computer vision domain (Delange et al., 2021). Research on continual learning for NLP is limited and, as Biesialska et al. (2020) observed, the majority of current NLP methods are task-specific. Moreover, these methods often use a memory buffer (de Masson D'Autume et al., 2019) or relate to the language model itself (Ke et al., 2021). To address this niche, domain-agnostic approaches have to become much more prevalent in the near future. ## 2.2 Ensemble Methods Ensemble methods are widespread in the world of machine learning (Zhang and Ma, 2012). By using predictions of multiple weak learners, it is possible to get a model that performs surprisingly well overall. Broad adoption of methods (Cao et al., 2020; Li and Pan, 2022; Yang et al., 2021) demonstrates the effectiveness of ensemble techniques in a wide variety of tasks. Ensembles have also been used successfully in the field of continual learning, as evidenced by the BatchEnsemble (Wen et al., 2020) or CN-DPM (Lee et al., 2020). Other contributions present in literature (Doan et al., 2022) tend to focus strongly on improving model performance rather than increasing model efficiency. Furthermore, ensemble approaches can also be used indirectly through dropout (Srivastava et al., 2014) or weights aggregation (Wortsman et al., 2022). ## 2.3 Mixture Of Experts Mixture of Experts (ME) (Jacobs et al., 1991) is a technique based on the divide and conquer paradigm. It assumes dividing the problem space between several specialized models (experts). Experts are supervised by the gating network that selects them based on the defined strategy. 
The difference from ensembles is that ME methods focus on selecting a few experts rather than combining the predictions of all available models. ME techniques have found many applications in various domains (Masoudnia and Ebrahimpour, 2014), including continual learning (Shanahan et al., 2021), and even nowadays such approaches are widely used in NLP (Gao et al., 2022; Ravaut et al., 2022).

## 2.4 Real-World NLP Systems

Over the last few years, the number of real-world NLP applications has grown rapidly (Sarker, 2022). Despite major successes in the real-world application of language technologies such as Google Translate, Amazon Alexa, and ChatGPT, production deployment and maintenance of such models still remain a challenge. Researchers have shown (Nowakowski et al., 2022; Karakanta et al., 2021) that there are several issues related to maintaining NLP models, including technical limitations, latency, and performance evaluation. However, the crucial problem is the shift of the data domain, which forces models to be retrained and deployed again over time (Hu et al., 2020). It is a major limitation in dynamically changing environments where users expect models to quickly adapt to them. Currently, this problem has been tackled in several systems (Afzal et al., 2019; Hancock et al., 2019), but many of the solutions preclude maintaining model accuracy when training incrementally, making them insufficient.

## 3 Our Approach

## 3.1 Problem Formulation

Class incremental continual learning involves training a classification model $f(\cdot): \mathcal{X} \mapsto \mathcal{Y}$ on a sequence of $T$ tasks. The model is trained on each task separately (one task at a time). Each task $D_t$ contains data points $D_t = \{(x_t^{1}, y_t^{1}), \ldots, (x_t^{N_t}, y_t^{N_t})\}$, where $N_t$ is the length of $D_t$, $x_t^{(i)} \in \mathbb{R}^D$, and $y_t^{(i)} \in Y_t$. $Y_t$ is the label set for task $t$, and $Y_t \cap Y_{t'} = \emptyset$ for $t \neq t'$. We want the model to keep performing well on all previous tasks after each update, and we assume to be working in the most challenging setup (Van de Ven and Tolias, 2019), where one task consists of data from one class.

## 3.2 Method

We present a flexible and effective domain-agnostic architecture that can be used to solve various classification problems. The architecture is presented in Figure 2.

![2_image_0.png](2_image_0.png)

**Feature extractor.** The first component of the proposed architecture is a multi-layer feature extractor that transforms input data into the embedding space. It can be described by the mapping $z = F(x)$, where $x \in \mathbb{R}^D$ is an input example and $z \in \mathbb{R}^M$ is an $M$-dimensional embedding. The approach we follow assumes the use of a pre-trained model with frozen parameters. Such a procedure makes it possible to completely prevent the extractor from forgetting knowledge by isolating feature space learning from the classification process.

**Keys and classifiers.** We use an ensemble of $N$ classifiers $f_n(\cdot)$, where each of them maps the embedding into a $K$-dimensional output vector $\hat{y}_n = f_n(z)$. With each classifier, there is an associated key vector $k_n \in \mathbb{R}^M$ with the same dimensionality as the embedding. The keys help to select the most suitable models for specialization with respect to the currently processed input example. They are initialized randomly from a normal distribution. We use simple single-layer neural networks as classifiers, with fan-in variance scaling as the weight initialization strategy. The network output is activated by a hyperbolic tangent function (*tanh*).
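A minimal PyTorch sketch of the keys-and-classifiers component described above may make the setup concrete; the class name, tensor shapes, and initialization details are illustrative choices of ours rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyedEnsemble(nn.Module):
    """N single-layer classifiers, each paired with a key vector in the embedding space."""

    def __init__(self, n_classifiers: int, emb_dim: int, n_classes: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_classifiers, emb_dim))      # keys drawn from a normal distribution
        self.classifiers = nn.ModuleList(
            [nn.Linear(emb_dim, n_classes) for _ in range(n_classifiers)]  # PyTorch's default Linear init is fan-in based
        )

    def forward(self, z: torch.Tensor):
        # z: (emb_dim,) embedding produced by the frozen feature extractor
        preds = torch.stack([torch.tanh(clf(z)) for clf in self.classifiers])    # (N, K) tanh-activated outputs
        key_dist = 1.0 - F.cosine_similarity(self.keys, z.unsqueeze(0), dim=-1)  # (N,) cosine distance to each key
        return preds, key_dist
```

The cosine distances returned here are exactly the quantities that the soft κ-nearest neighbors layer below operates on.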
Table 1: Data sets setup for experiments.

| Domain | Data set | Classes | Train | Test | Avg. words |
|---|---|---|---|---|---|
| Text | BBC News | 5 | 1,668 | 557 | 380 |
| Text | Newsgroups | 10 | 11,314 | 7,532 | 315 |
| Text | Complaints | 10 | 16,000 | 4,000 | 228 |
| Audio | Speech Commands | 10 | 18,538 | 2,567 | - |
| Image | MNIST | 10 | 60,000 | 10,000 | - |
| Image | CIFAR-10 | 10 | 50,000 | 10,000 | - |

**Soft κ-nearest neighbors layer.** The standard KNN algorithm is often implemented using ordinary sorting operations, which make it impossible to determine the partial derivatives with respect to the input. This removes the ability to use KNN as part of end-to-end neural models. However, it is possible to obtain a differentiable approximation of the KNN model by solving the Optimal Transport Problem (Peyré et al., 2019). Based on this concept, we add a differentiable layer to the model architecture. We call this layer soft κ-nearest neighbors (*soft KNN*).

In order to determine the KNN approximation, we first compute a cosine distance vector $c \in \mathbb{R}^N$ between the embedding and the keys:

$$c_{n}=1-\cos(\mathbf{z},\mathbf{k}_{n}),\tag{1}$$

where $\cos(\cdot,\cdot)$ denotes the cosine similarity. Next, we follow the idea of a soft top-κ operator presented in (Xie et al., 2020), where κ denotes the number of nearest neighbors. Let $\mathbf{E} \in \mathbb{R}^{N\times 2}$ be the Euclidean distance matrix with the following elements:

$$e_{n,0}=(c_{n})^{2},\ \ e_{n,1}=(c_{n}-1)^{2}.\tag{2}$$

And let $\mathbf{G} \in \mathbb{R}^{N\times 2}$ denote the similarity matrix obtained by applying the Gaussian kernel to $\mathbf{E}$:

$$\mathbf{G}=\exp(-\mathbf{E}/\sigma),\tag{3}$$

where σ denotes the kernel width. The exp operator is applied elementwise to the matrix E. We then use the Bregman method, an algorithm designed to solve convex constrained optimization problems, to compute L iterations of Bregman projections in order to approximate their stationary points:

$$\mathbf{p}^{(l+1)}=\frac{\mu}{\mathbf{G}\mathbf{q}^{(l)}},\quad\mathbf{q}^{(l+1)}=\frac{\nu}{\mathbf{G}^{\top}\mathbf{p}^{(l+1)}},\tag{4}$$

where $l=0,\ldots,L-1$, $\mu=\mathbf{1}_N/N$, $\nu=[\kappa/N,(N-\kappa)/N]^{\top}$, $\mathbf{q}^{(0)}=\mathbf{1}_{2}/2$, and $\mathbf{1}_{i}$ denotes the $i$-element all-ones vector. Finally, let Γ denote the optimal transport plan matrix, which is given by:

$$\Gamma=\mathrm{diag}(\mathbf{p}^{(L)})\cdot\mathbf{G}\cdot\mathrm{diag}(\mathbf{q}^{(L)}).\tag{5}$$

As the final result $\gamma \in \mathbb{R}^N$ of the soft κ-nearest neighbors operator, we take the second column of Γ multiplied by N, i.e., $\gamma = N\Gamma_{:,2}$. γ is a soft approximation of a zero-one vector that indicates which κ out of N instances are the nearest neighbors. Introducing the *soft KNN* enables training parts of the model that were frozen until now.
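The soft top-κ computation above can be sketched in a few lines of PyTorch. This is an illustrative implementation under our reading of Eqs. (2)-(5): the kernel width, the number of Bregman iterations, and the choice of which transport-plan column is read off as the soft indicator (we take the column paired with the κ/N marginal, so that values close to 1 mark the κ smallest distances) are assumptions, not settings reported here.

```python
import torch

def soft_knn(c: torch.Tensor, kappa: int, sigma: float = 0.1, n_iters: int = 100) -> torch.Tensor:
    """Differentiable soft selection of the kappa nearest keys.
    c: (N,) cosine distances between the embedding and the N keys."""
    N = c.shape[0]
    E = torch.stack([c ** 2, (c - 1.0) ** 2], dim=1)   # Eq. (2): squared distances to the anchors 0 and 1
    G = torch.exp(-E / sigma)                          # Eq. (3): Gaussian kernel similarities
    mu = torch.full((N,), 1.0 / N)                     # uniform row marginal
    nu = torch.tensor([kappa / N, (N - kappa) / N])    # column marginals: kappa/N mass on the "near" anchor
    q = torch.full((2,), 0.5)                          # q^(0) = 1_2 / 2
    for _ in range(n_iters):                           # Eq. (4): Bregman projections
        p = mu / (G @ q)
        q = nu / (G.t() @ p)
    Gamma = torch.diag(p) @ G @ torch.diag(q)          # Eq. (5): optimal transport plan
    return N * Gamma[:, 0]                             # soft indicator: ~1 for the kappa nearest keys
```

Because every operation is differentiable, gradients flow back through γ to the keys during training.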
The final prediction is obtained as follows: $${\hat{\mathbf{y}}}={\frac{\sum_{n=1}^{N}\gamma_{n}c_{n}{\hat{\mathbf{y}}}_{n}}{\sum_{n=1}^{N}c_{n}}}\qquad\qquad({\boldsymbol{6}})$$ $$(2)$$ Training To effectively optimize the model parameters, we follow the training procedure presented in (Shanahan et al., 2021). It assumes the use of a specific loss function that is the inner product between the ensemble prediction and the onehot coded label: $${\mathcal{L}}(\mathbf{y},{\hat{\mathbf{y}}})=-\mathbf{y}^{\top}{\hat{\mathbf{y}}}$$ $$\left(7\right)$$ ⊤yˆ (7) $$\quad(4)$$ Optimizing this criterion yields an advantage of using a *tanh* activation function, significantly reducing catastrophic forgetting (Shanahan et al., 2021). Following the reference method, we also use an optimizer that discards the value of the gradient and uses only its sign to determine the update direction. As a result, the parameters are being changed by a fixed step during the training. $\left(\boldsymbol{S}\right)$. ## 4 Experiments 4.1 Setup In order to ensure experiment's reproductivity, we evaluated our method on the popular and publicly available data sets. Data sets We use three common text classification data sets with different characteristics - Newsgroups (Lang, 2008), BBC News (Greene and Cunningham, 2006), and Consumer Finance Complaints2. The goal of the experiments was to evaluate our method on tasks with with different dif-2Source: https://huggingface.co/datasets/ consumer-finance-complaints | Text | Image | Audio | | | | | | |--------|---------|------------|-------------|------------|------------|------------|------------| | Model | Mem. | NG | BBC | Compl. | MNIST | CIFAR-10 | Sp. Comm. | | Naive | × | 5.25±0.03 | 21.65±2.56 | 9.56±0.33 | 11.29±3.05 | 10.00±0.01 | 21.54±3.78 | | LwF | × | 5.20±0.05 | 18.60±2.03 | 10.04±0.20 | 11.47±2.75 | 10.00±0.01 | 20.61±3.88 | | EWC | × | 5.13±0.13 | 21.97±2.14 | 10.16±0.31 | 11.19±2.70 | 10.00±0.01 | 32.93±4.92 | | SI | × | 5.27±0.01 | 19.43±2.96 | 10.00±0.62 | 14.90±6.52 | 10.00±0.01 | 9.99±0.27 | | CWR* | × | 4.63±0.60 | 22.98±1.20 | 10.13±0.33 | 10.40±0.54 | 10.00±0.01 | 10.32±0.26 | | GEM | ✓ | 35.89±3.80 | 70.99±7.68 | 33.74±2.50 | 52.27±5.20 | 23.40±2.71 | 21.01±2.06 | | A-GEM | ✓ | 9.44±7.14 | 59.10±17.52 | 9.20±0.01 | 65.37±4.53 | 26.43±5.27 | 17.45±6.90 | | Replay | ✓ | 22.45±3.09 | 59.61±3.17 | 16.46±4.62 | 69.02±4.90 | 32.93±4.56 | 12.23±1.28 | | E&E | × | 46.07±2.91 | 75.87±3.88 | 44.80±1.62 | 87.10±0.21 | 53.97±1.31 | 79.15±0.60 | | Ours | × | 47.27±3.63 | 78.49±3.92 | 44.97±0.86 | 87.62±0.14 | 56.27±1.21 | 80.11±1.30 | ficulty levels. We also conducted experiments for audio classification using Speech Commands (Warden, 2018) data set. For the evaluation purposes, we selected the 10 most representative classes from the Newsgroups, Complaints and Speech Commands. Finally, we also conducted experiments on the popular MNIST and CIFAR-10 data sets as image domain representatives. The data set summary is presented in Table 1. In all experiments we used a train set to train model incrementally, and afterward we performed a standard evaluation using a test set. Feature extractors For all text data sets, we used a Distilbert (Sanh et al., 2019), a light but still very effective alternative for large language models. Next, for Speech Commands, we utilized Pyannote (Bredin et al., 2020), a pretrained model for producing meaningful audio features. For image data sets, we used different extractors. 
## 4 Experiments

## 4.1 Setup

In order to ensure the reproducibility of our experiments, we evaluated our method on popular and publicly available data sets.

**Data sets.** We use three common text classification data sets with different characteristics: Newsgroups (Lang, 2008), BBC News (Greene and Cunningham, 2006), and Consumer Finance Complaints.2 The goal of the experiments was to evaluate our method on tasks with different difficulty levels. We also conducted experiments on audio classification using the Speech Commands (Warden, 2018) data set. For evaluation purposes, we selected the 10 most representative classes from Newsgroups, Complaints, and Speech Commands. Finally, we also conducted experiments on the popular MNIST and CIFAR-10 data sets as image domain representatives. The data set summary is presented in Table 1. In all experiments we used the train set to train the model incrementally, and afterward we performed a standard evaluation using the test set.

2 Source: https://huggingface.co/datasets/consumer-finance-complaints

**Feature extractors.** For all text data sets, we used DistilBERT (Sanh et al., 2019), a light but still very effective alternative to large language models. Next, for Speech Commands, we utilized Pyannote (Bredin et al., 2020), a pretrained model for producing meaningful audio features. For the image data sets, we used different extractors: MNIST features were produced by a pretrained VAE, and CIFAR-10 has a dedicated BYOL model (see A.4 for more details).

Table 2: Accuracy (%) and standard deviation of all evaluated methods. Mem. indicates whether a method uses a memory buffer; NG, BBC, and Compl. are the text data sets, MNIST and CIFAR-10 the image data sets, and Sp. Comm. the audio data set.

| Model | Mem. | NG | BBC | Compl. | MNIST | CIFAR-10 | Sp. Comm. |
|---|---|---|---|---|---|---|---|
| Naive | × | 5.25±0.03 | 21.65±2.56 | 9.56±0.33 | 11.29±3.05 | 10.00±0.01 | 21.54±3.78 |
| LwF | × | 5.20±0.05 | 18.60±2.03 | 10.04±0.20 | 11.47±2.75 | 10.00±0.01 | 20.61±3.88 |
| EWC | × | 5.13±0.13 | 21.97±2.14 | 10.16±0.31 | 11.19±2.70 | 10.00±0.01 | 32.93±4.92 |
| SI | × | 5.27±0.01 | 19.43±2.96 | 10.00±0.62 | 14.90±6.52 | 10.00±0.01 | 9.99±0.27 |
| CWR* | × | 4.63±0.60 | 22.98±1.20 | 10.13±0.33 | 10.40±0.54 | 10.00±0.01 | 10.32±0.26 |
| GEM | ✓ | 35.89±3.80 | 70.99±7.68 | 33.74±2.50 | 52.27±5.20 | 23.40±2.71 | 21.01±2.06 |
| A-GEM | ✓ | 9.44±7.14 | 59.10±17.52 | 9.20±0.01 | 65.37±4.53 | 26.43±5.27 | 17.45±6.90 |
| Replay | ✓ | 22.45±3.09 | 59.61±3.17 | 16.46±4.62 | 69.02±4.90 | 32.93±4.56 | 12.23±1.28 |
| E&E | × | 46.07±2.91 | 75.87±3.88 | 44.80±1.62 | 87.10±0.21 | 53.97±1.31 | 79.15±0.60 |
| Ours | × | 47.27±3.63 | 78.49±3.92 | 44.97±0.86 | 87.62±0.14 | 56.27±1.21 | 80.11±1.30 |

## 4.2 Results

The results of the evaluation are presented in Table 2. For all evaluated setups, our model performed best, improving on the main reference method (E&E) by up to 3 percentage points (pp.). The improvement scale varies across the data sets. We also observed a significant difference in achieved accuracy between DE&E and the standard continual learning methods. Simple regularization-based methods completely fail in the class incremental scenario. It shows how demanding training the model incrementally is when the set of classes is not fixed, which often takes place in real-world scenarios. Furthermore, our method achieved these results without replaying training examples seen in the past, making it more practical relative to the SOTA memory-based methods (GEM, A-GEM, Replay) that store samples from every class. For the ensemble of 128 classifiers and the Speech Commands data set, our architecture achieved an accuracy more than 59 pp. higher than the best method with a memory buffer.

One of the most important hyperparameters of the model is the number of classifiers (experts). To investigate how it affects accuracy, we evaluated our architecture in three variants: small (64), normal (128), and large (1024 classifiers). The evaluation results are presented in Figure 3.

![4_image_0.png](4_image_0.png)

We observed that increasing the ensemble size translates to higher accuracy, and the gain depends on the setup and data characteristics. The most significant improvement was observed on BBC and CIFAR-10, where the large model achieved an accuracy about 20 pp. better than the small one. For the remaining data sets and the analogous setup, the gain was up to 5 pp. We explain this phenomenon as the effect of the insufficient specialization level achieved by smaller ensembles. If experts are forced to solve tasks that are too complicated, they often make mistakes. Increasing the number of experts allows the feature space to be divided into simpler sub-tasks. However, such a procedure has natural limitations related to the feature extractor. If features have low quality, increasing the number of experts will be ineffective. To select the optimal ensemble size, we suggest using the elbow rule, which prevents the model from being overparameterized and ensures reasonable accuracy. However, in general, we recommend choosing larger ensembles that are better suited for handling real-world cases.

Table 3: Accuracy (%) and standard deviation of DE&E evaluated on Class Incremental and Domain Incremental scenarios. We used the same setup as shown in Table 2.

| Data set | Class Incremental | Domain Incremental |
|---|---|---|
| BBC News | 78.49±3.92 | 79.71±3.14 |
| Newsgroups | 47.27±3.63 | 44.55±1.40 |
| Complaints | 44.97±0.86 | 39.23±3.03 |
| Speech Commands | 81.46±0.85 | 79.31±0.49 |
| MNIST | 87.62±0.14 | 85.04±0.39 |
| CIFAR-10 | 56.27±1.21 | 55.66±1.32 |

Since real-world environments require deployed models to quickly adapt to domain shifts, we tested our method in a domain incremental scenario. In such a setup, each data batch can provide examples from multiple classes that can be either known or new (Van de Ven and Tolias, 2019). This way, the model needs to learn incrementally while being prone to frequent domain shifts. As shown in Table 3, the proposed method handles both scenarios with comparable accuracy.
We observed improved accuracy for BBC News, but reduced for the remaining data sets. Such property can be beneficial when there is limited prior knowledge about the data or the stream is imbalanced (Aguiar et al., 2022). We have also investigated the importance of the presented expert selection method. We trained the DE&E method and for each training example, we allowed it to choose random experts (rather than the most relevant ones) with fixed probability p. As shown in Figure 4, the selection method has a strong influence on the model performance. Accuracy decreases proportionally to the p over all data sets studied. The proper expert selection technique is crucial for the presented method. It is worth noting that relatively easier data sets suffer less from loss of accuracy than hard ones because even randomly selected experts can still classify the data by learning simple general patterns. In more difficult cases like Newsgroups and Complaints data sets, model performance is comparable to random guessing when p > 0.5. ![5_image_0.png](5_image_0.png) ## 5 Conclusions In this paper, we proposed a domain-agnostic architecture for continual learning with a training procedure specialized in challenging class incremental problems. The presented architecture is based on the Mixture of Experts technique and handles many practical issues related to the deployment of text classification models in non-trivial real-world systems. As our main contribution, we introduced a fully differentiable *soft KNN* layer and a novel prediction weighting strategy. By conducting exhaustive experiments, we showed improvement in accuracy for all the cases studied and achieved SOTA results without using a memory buffer. This enables an effective and secure training, especially when working with sensitive textual data. The presented architecture is highly flexible, can effectively solve classification problems in many domains, and can be applied to real-world machine learning systems requiring continuous improvement. Such work enables researchers to make further steps toward overrunning many of the current challenges related to language technology applications. ## Limitations The main limitations of the proposed architecture are related to the presence of the frozen feature extractor. The accuracy of the classification module is proportional to the quality of features. Since the ensemble weak learners are single-layer neural networks, the entire feature extraction process relies on a pre-trained model that strongly limits the upper bound of classification accuracy. Such approach reduces the method complexity, but also makes it prone to errors when embeddings have low quality. Achieving accuracy at a satisfactory level, which is crucial in real world systems, requires the use of high quality feature extractors. Currently, plenty of pretrained SOTA models are available for free in domains such as text or image classification, but if such extractor is not available, does not produce reasonable features or is too expensive to use, our architecture may not be the best choice. Another issue is relatively long training time comparing to the reference methods (see A.3). The introduction of a differentiable *soft KNN* layer resulted in additional computational effort that clearly impacted the model complexity. This limits the use in low latency systems with machine learning models trained online. 
## Ethics Statement The authors foresee no ethical concerns with the work presented in this paper, in particular concerning any kind of harm and discrimination. Since the presented architecture can have a wide range of usages, the authors are not responsible for any unethical applications of this work. ## Acknowledgements The research was conducted under the Implementation Doctorate programme of Polish Ministry of Science and Higher Education and also partially funded by Department of Artificial Intelligence, Wroclaw Tech and by the European Union under the Horizon Europe grant OMINO (grant number 101086321). It was also partially co-funded by the European Regional Development Fund within the Priority Axis 1 "Enterprises and innovation", Measure 1.2. "Innovative enterprises, sub-measure 1.2.1. "Innovative enterprises - horizontal competition" as part of ROP WD 2014-2020, support contract no. RPDS.01.02.01-02-0063/20-00. ## References Shazia Afzal, Tejas Dhamecha, Nirmal Mukhi, Renuka Sindhgatta, Smit Marvaniya, Matthew Ventura, and Jessica Yarbro. 2019. Development and deployment of a large-scale dialog-based intelligent tutoring system. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 114–121, Minneapolis, Minnesota. Association for Computational Linguistics. Gabriel Aguiar, Bartosz Krawczyk, and Alberto Cano. 2022. A survey on learning from imbalanced data streams: taxonomy, challenges, empirical study, and reproducible experimental framework. arXiv preprint arXiv:2204.03719. Magdalena Biesialska, Katarzyna Biesialska, and Marta R Costa-Jussa. 2020. Continual lifelong learning in natural language processing: A survey. *arXiv* preprint arXiv:2012.09823. Hervé Bredin, Ruiqing Yin, Juan Manuel Coria, Gregory Gelly, Pavel Korshunov, Marvin Lavechin, Diego Fustes, Hadrien Titeux, Wassim Bouaziz, and Marie-Philippe Gill. 2020. pyannote.audio: neural building blocks for speaker diarization. In ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing, Barcelona, Spain. Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. 2020. Dark experience for general continual learning: a strong, simple baseline. *Advances in neural information processing* systems, 33:15920–15930. Yue Cao, Thomas Andrew Geddes, Jean Yee Hwa Yang, and Pengyi Yang. 2020. Ensemble deep learning in bioinformatics. *Nature Machine Intelligence*, 2(9):500–508. Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. 2019a. Efficient lifelong learning with a-gem. In *ICLR*. Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc'Aurelio Ranzato. 2019b. Continual learning with tiny episodic memories. *arXiv preprint arXiv:1902.10486, 2019*. Cyprien de Masson D'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. Advances in Neural Information Processing Systems, 32. Matthias Delange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Greg Slabaugh, and Tinne Tuytelaars. 2021. A continual learning survey: Defying forgetting in classification tasks. *IEEE* Transactions on Pattern Analysis and Machine Intelligence. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. 
*arXiv preprint arXiv:1810.04805*. Thang Doan, Seyed Iman Mirzadeh, Joelle Pineau, and Mehrdad Farajtabar. 2022. Efficient continual learning ensembles in neural network subspaces. *arXiv* preprint arXiv:2202.09826. Robert M French. 1999. Catastrophic forgetting in connectionist networks. *Trends in cognitive sciences*, 3(4):128–135. Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, ZhongYi Lu, and Ji-Rong Wen. 2022. Parameter-efficient mixture-of-experts architecture for pre-trained language models. In *Proceedings of the 29th International Conference on Computational Linguistics*, pages 3263–3273, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Derek Greene and Pádraig Cunningham. 2006. Practical solutions to the problem of diagonal dominance in kernel document clustering. In *Proc. 23rd International Conference on Machine learning (ICML'06)*, pages 377–384. ACM Press. Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 3667– 3684, Florence, Italy. Association for Computational Linguistics. Yipeng Hu, Joseph Jacob, Geoffrey JM Parker, David J Hawkes, John R Hurst, and Danail Stoyanov. 2020. The challenges of deploying artificial intelligence models in a rapidly evolving pandemic. *Nature Machine Intelligence*, 2(6):298–300. Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991. Adaptive mixtures of local experts. *Neural computation*, 3(1):79–87. Alina Karakanta, Sara Papi, Matteo Negri, and Marco Turchi. 2021. Simultaneous speech translation for live subtitling: from delay to display. In Proceedings of the 1st Workshop on Automatic Spoken Language Translation in Real-World Settings (ASLTRW), pages 35–48, Virtual. Association for Machine Translation in the Americas. Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, and Lei Shu. 2021. Achieving forgetting prevention and knowledge transfer in continual learning. Advances in Neural Information Processing Systems, 34:22443– 22456. James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2016. Overcoming catastrophic forgetting in neural networks. *CoRR*, abs/1612.00796. Lang. 2008. 20 newsgroups dataset. Soochan Lee, Junsoo Ha, Dongsu Zhang, and Gunhee Kim. 2020. A neural dirichlet process mixture model for task-free continual learning. *CoRR*, abs/2001.00689. Yang Li and Yi Pan. 2022. A novel ensemble deep learning model for stock prediction based on stock prices and news. *International Journal of Data Science and* Analytics, 13(2):139–149. Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935–2947. Vincenzo Lomonaco and Davide Maltoni. 2017. Core50: a new dataset and benchmark for continuous object recognition. In *Conference on Robot* Learning, pages 17–26. PMLR. David Lopez-Paz and Marc'Aurelio Ranzato. 2017. Gradient episodic memory for continual learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, page 6470–6479, Red Hook, NY, USA. Curran Associates Inc. Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, and Scott Sanner. 2022. 
Online continual learning in image classification: An empirical survey. *Neurocomputing*, 469:28–51. Saeed Masoudnia and Reza Ebrahimpour. 2014. Mixture of experts: a literature survey. The Artificial Intelligence Review, 42(2):275. Artur Nowakowski, Krzysztof Jassem, Maciej Lison, Rafał Jaworski, Tomasz Dwojak, Karolina Wiater, and Olga Posesor. 2022. nEYron: Implementation and deployment of an MT system for a large audit & consulting corporation. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, pages 183–189, Ghent, Belgium. European Association for Machine Translation. Gabriel Peyré, Marco Cuturi, et al. 2019. Computational optimal transport: With applications to data science. Foundations and Trends® *in Machine Learning*, 11(5-6):355–607. Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022. SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4504–4524, Dublin, Ireland. Association for Computational Linguistics. Waseem Rawat and Zenghui Wang. 2017. Deep convolutional neural networks for image classification: A comprehensive review. *Neural computation*, 29(9):2352–2449. Ahmed Salem, Yang Zhang, Mathias Humbert, Mario Fritz, and Michael Backes. 2018. Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models. *CoRR*, abs/1806.01246. Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In *International Conference on Machine* Learning, pages 3987–3995. PMLR. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Iqbal H Sarker. 2022. Ai-based modeling: Techniques, applications and research issues towards automation, intelligent and smart systems. *SN Computer Science*, 3(2):158. Murray Shanahan, Christos Kaplanis, and Jovana Mitrovic. 2021. Encoders and ensembles for task-free continual learning. *CoRR*, abs/2105.13327. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(56):1929–1958. Gido M Van de Ven and Andreas S Tolias. 2019. Three scenarios for continual learning. *arXiv preprint* arXiv:1904.07734. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Pete Warden. 2018. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209. Yeming Wen, Dustin Tran, and Jimmy Ba. 2020. Batchensemble: an alternative approach to efficient ensemble and lifelong learning. *arXiv preprint* arXiv:2002.06715. Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. 2022. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. *arXiv preprint arXiv:2203.05482*. Yujia Xie, Hanjun Dai, Minshuo Chen, Bo Dai, Tuo Zhao, Hongyuan Zha, Wei Wei, and Tomas Pfister. 2020. Differentiable top-k with optimal transport. 
In Advances in Neural Information Processing Systems, volume 33, pages 20520–20531. Curran Associates, Inc.

Yongquan Yang, Haijun Lv, and Ning Chen. 2021. A survey on ensemble learning under the era of deep learning. *arXiv preprint arXiv:2101.08387*.

Cha Zhang and Yunqian Ma. 2012. *Ensemble machine learning: methods and applications*. Springer.

## A Appendix

## A.1 Code

Code is currently available as a Github repository https://github.com/mateusz-wojcik-97/domain-agnostic-architecture.

## A.2 Computing Resources

The machine we used had 128 GB RAM, an Intel Core i9-11900 CPU, and an NVIDIA GeForce RTX 3060 GPU with 12GB VRAM. Every experiment was performed using the GPU.

## A.3 Time Complexity

| Dataset | E&E | Ours |
|--------------|--------|--------|
| Newsgroups | 7.43 | 31.20 |
| BBC News | 14.96 | 151.79 |
| Complaints | 20.33 | 93.63 |
| Sp. Commands | 30.80 | 108.90 |
| MNIST | 28.01 | 270.30 |
| CIFAR-10 | 104.25 | 355.82 |

Table 4: Time (seconds) of training the ensemble models with 128 classifiers on one task.

The comparison in training time between E&E and DE&E models is shown in Table 4. For all evaluated data sets, the training time of our model was higher than the time to train the reference method. The results vary between data sets. The introduction of a differentiable *soft KNN* layer resulted in additional computational effort that clearly impacted the time complexity of the model.

## A.4 Implementation Details

We use PyTorch to both reproduce the E&E results and implement the DE&E method. For text classification we used the pretrained Distilbert model3 and for audio classification we used the pretrained Pyannote model4, both from the Huggingface repository. We used a pre-trained ResNet-50 model as the feature extractor for the CIFAR-10 data set. The model is available in the following GitHub repository, https://github.com/yaox12/BYOL-PyTorch, and is used under MIT Licence. For MNIST, we trained a variational autoencoder on the Omniglot data set and utilized the encoder part as our feature extractor. We based our implementation of the *soft KNN* layer on the code provided with https://proceedings.neurips.cc/paper/2020/hash/ec24a54d62ce57ba93a531b460fa8d18-Abstract.html. All data sets used are public.

3https://huggingface.co/distilbert-base-uncased
4https://huggingface.co/pyannote/embedding

| Dataset | Network layers |
|--------------|------------------------|
| Newsgroups | [1536, 1700, 768, 10] |
| Complaints | [1536, 955, 512, 10] |
| BBC News | [1536, 640, 5] |
| Sp. Commands | [512, 1256, 10] |
| MNIST | [512, 1256, 10] |
| CIFAR-10 | [2048, 1274, 10] |

Baselines We use Naive, LwF (Li and Hoiem, 2017), EWC (Kirkpatrick et al., 2016), SI (Zenke et al., 2017), CWR* (Lomonaco and Maltoni, 2017), GEM (Lopez-Paz and Ranzato, 2017), A-GEM (Chaudhry et al., 2019a) and Replay (Chaudhry et al., 2019b) approaches as baselines to compare with our method. We utilize the implementation from Avalanche (https://avalanche.continualai.org/), a library designed for continual learning tasks. The main purpose of this comparison was to determine how the proposed method performs against classical approaches and, in particular, against the methods with a memory buffer, which gives a significant advantage in class-incremental problems. The recommended hyperparameters for each baseline method vary across usages in the literature, so we chose them based on our own internal experiments. For clarity, we keep the hyperparameter naming nomenclature from the Avalanche library. For EWC we use *lambda* = 10000. The LwF model was trained with *alpha* = 0.15 and *temperature* = 1.5. For the SI strategy, we use *lambda* = 5e7 and *eps* = 1e-7.
The hyperparameters of the memory based approach GEM were set as follows: memory_*strength* = 0.5, patterns_per_exp = 5, which implies that with every task, 5 examples will be accumulated. This has a particular importance when the number of classes is large. With this setup and 10 classes in data set, memory contains 50 examples after training on all tasks. Having a large memory buffer makes achieving high accuracy much easier. For the A-GEM method, use the same number of examples in memory and sample_*size* = 20. All models were trained using Adam optimizer with a learning_*rate* of 0.0005 and batch_*size* of 60. We chose cross entropy as a loss function and performed one training epoch for each experience. To fairly compare baseline methods with ensembles, as a backbone we use neural network with a similar number of parameters (as in ensemble). Network architectures for each experimental setup are shown in Table 5. All baseline models were trained by providing embeddings produced by feature extractor as an input. Ensembles. We used E&E (Shanahan et al., 2021) as the main reference method. It uses an architecture similar to that of a classifier ensemble, however the nearest neighbor selection mechanism itself is not a differentiable component and the weighting strategy is different. In order to reliably compare the performance, the experimental results of the reference method were fully reproduced. Both the reference method and the proposed method used exactly the same feature extractors. Thus, we ensured that the final performance is not affected by the varying quality of the extractor, but only depends on the solutions used in the model architecture and learning method. Both E&E and our DE&E were trained with the same set of hyperparameters (excluding hyperparameters in the *soft KNN* layer for the DE&E). We use ensembles of sizes 64, 128 and 1024. Based on the data set, we used different hyperparameter sets for the ensembles (Table 6). The keys for classifiers in ensembles were randomly chosen from the standard normal distribution and normalized using the L2 norm. The same normalization was applied to encoded inputs during lookup for matching keys. Soft KNN. We use the Sinkhorn algorithm to perform the forward inference in *soft KNN*. The Sinkhorn algorithm is useful in entropy-regularized optimal transport problems thanks to its computational effort reduction. The *soft KNN* has O(n) complexity, making it scalable and allows us to safely apply it to more computationally expensive problems. The values of *soft KNN* hyperparameters were | Dataset | Classifiers | Neighbors | Batch size | Learning rate | Weight Decay | |--------------|---------------|-------------|--------------|-----------------|----------------| | 64 | 16 | | | | | | Newsgroups | 8 | 0.0001 | | | | | 128 | 32 | | | | | | 1024 | 64 | | | | | | 64 | 8 | | | | | | BBC News | 128 | 16 | 1 | 0.01 | | | 1024 | 32 | | | | | | 64 | 16 | | | | | | Complaints | 128 | 32 | 8 | 0.0001 | | | 1024 | 64 | 0.0001 | | | | | 64 | 16 | | | | | | Sp. Commands | 128 | 32 | 8 | 0.001 | | | 1024 | 64 | | | | | | MNIST | 128 | 16 | 60 | 0.0001 | | | CIFAR-10 | 128 | 16 | 60 | 0.0001 | | σ = 0.0005 and L = 400. We utilize the continuous character of an output vector to weight the ensemble predictions. It is worth noting that we additionally set the threshold of the minimum allowed soft KNN score to 0.3. It means every element in γ lower than 0.3 is reduced to 0. 
We reject such elements because they are mostly the result of non-converged optimization and do not carry important information. In this way, we additionally ensure that the optimization result is as representative as possible.
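For illustration, below is a minimal PyTorch-style sketch of the thresholding and prediction-weighting step described above. The threshold of 0.3 and the use of the soft KNN scores γ as prediction weights follow the text; the function name, tensor shapes and ensemble interface are our own assumptions rather than the authors' implementation.

```python
import torch

def weight_predictions(gamma: torch.Tensor,
                       expert_logits: torch.Tensor,
                       threshold: float = 0.3) -> torch.Tensor:
    """Weight expert predictions by soft KNN scores.

    gamma:         (num_experts,) soft KNN scores produced by the
                   Sinkhorn-based soft top-k (values in [0, 1]).
    expert_logits: (num_experts, num_classes) class logits of the
                   individual weak learners in the ensemble.
    Returns the weighted ensemble logits of shape (num_classes,).
    """
    # Zero out scores below the threshold: such elements are treated as
    # the result of non-converged optimization (see discussion above).
    gamma = torch.where(gamma >= threshold, gamma, torch.zeros_like(gamma))
    # Use the continuous soft KNN scores as prediction weights.
    return (gamma.unsqueeze(-1) * expert_logits).sum(dim=0)
```

Thresholding before weighting keeps only experts whose soft top-k assignment actually converged, which is the behaviour described in the paragraph above.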
caciolai-etal-2023-regression
Regression-Free Model Updates for Spoken Language Understanding
https://aclanthology.org/2023.acl-industry.52
In real-world systems, an important requirement for model updates is to avoid regressions in user experience caused by flips of previously correct classifications to incorrect ones. Multiple techniques for that have been proposed in the recent literature. In this paper, we apply one such technique, focal distillation, to model updates in a goal-oriented dialog system and assess its usefulness in practice. In particular, we evaluate its effectiveness for key language understanding tasks, including sentence classification and sequence labeling tasks, we further assess its effect when applied to repeated model updates over time, and test its compatibility with mislabeled data. Our experiments on a public benchmark and data from a deployed dialog system demonstrate that focal distillation can substantially reduce regressions, at only minor drops in accuracy, and that it further outperforms naive supervised training in challenging mislabeled data and label expansion settings.
# Regression-Free Model Updates For Spoken Language Understanding Andrea Caciolai Amazon Alexa AI andccl@amazon.it Verena Weber Amazon Alexa AI wverena@amazon.de Alessandro Pedrani Amazon Alexa AI pedrana@amazon.it ## Abstract In real-world systems, an important requirement for model updates is to avoid regressions in user experience caused by flips of previously correct classifications to incorrect ones. Multiple techniques for that have been proposed in the recent literature. In this paper, we apply one such technique, focal distillation, to model updates in a goal-oriented dialog system and assess its usefulness in practice. In particular, we evaluate its effectiveness for key language understanding tasks, including sentence classification and sequence labeling tasks, we further assess its effect when applied to repeated model updates over time, and test its compatibility with mislabeled data. Our experiments on a public benchmark and data from a deployed dialog system demonstrate that focal distillation can substantially reduce regressions, at only minor drops in accuracy, and that it further outperforms naive supervised training in challenging mislabeled data and label expansion settings. ## 1 Introduction Machine learning models that are deployed in realworld applications typically require regular updates to accommodate data distribution shifts or changes to the output label space. The retraining process, even if it leads to stable or improved overall accuracy, can result in different sample-level predictions due to its stochastic nature. In an application setting, that in turn can change (or even break) specific functionalities. A key requirement for model updates in real-world applications is therefore to minimize regressions in user experience. For classification models, Yan et al. (2021) formalized this requirement as minimizing the number of *negative flips* of a model, defined as the number of previously correct classifications that turn incorrect for a new model. Previous work proposed several methods towards that goal (Shen et al., 2020; Yan et al., 2021; Zhao et al., 2022; Träuble et al., Tobias Falke Amazon Alexa AI falket@amazon.de Davide Bernardi Amazon Alexa AI dvdbe@amazon.it 2021) that rely on knowledge distillation, model ensembling or Bayesian learning. In this work, we focus on model update-caused regressions in goal-oriented dialog systems, and in particular on updates of spoken language understanding models. In real-world dialog systems, a negative flip would mean that a request that was previously correctly understood is now interpreted as a different intent (or with different slots) and therefore leads to a regression in user experience. While previous work explored the reduction of negative flips on various tasks, spoken language understanding remains unexplored (see section 2). We therefore apply *focal distillation* (Yan et al., 2021), the most applicable existing technique, to this use case. Moreover, the use in a real-world goal-oriented dialog system raises additional questions that we address. Specifically, we study the following: Effectiveness for DC and IC: We test focal distillation on domain classification (DC) and intent classification (IC), two key tasks in spoken language understanding, using public data as well as internal datasets from a real-world dialog system. 
Applicability to SL: We further test the effectiveness for slot labeling (SL), a sequence labeling task that requires an extension of focal distillation to handle tasks with token-level supervision. Repeated Model Updates: We simulate multiple iterations of retraining with focal distillation to study its long-term effect, in particular, whether the coupling of new and old model via distillation restricts the model's ability to learn new features. Noisy Labels: Finally, we also study the effect of mislabeled data. In the presence of annotation errors, focal distillation bears the risk that it enforces prediction consistency on samples that have supposedly correct classifications in the old model, but are actually mislabeled, preventing the new model to predict the true correct label. We run extensive experiments for DC, IC and 538 SL tasks on SLURP (Bastianelli et al., 2020), a public benchmark, and internal datasets from our real-world goal-oriented dialog system. We find that focal distillation is effective for DC and IC and reduces negative flips by up to 30% relative at no or only marginal decreases in accuracy. For SL, a naive application as a token-level loss is effective as well and brings 8% relative reduction on average. When simulating repeated retraining over time, focal distillation can restrict the model's ability to learn new labels, but this can be remedied by warm-starting the model with the previous model's weights. Finally, we also show that focal distillation is beneficial even under annotation errors, and can be made even more robust by adding noise-awareness to the loss. ## 2 Related Work Enabling regression-free model updates is a relatively recent line of research. Shen et al. (2020) first studied it for computer vision problems with the goal of learning backwards-compatible image representations. Yan et al. (2021) introduced the notion of negative flips for classification tasks and coined the minimization of them as positive-congruent training. They proposed focal knowledge distillation, a variant of traditional teacher-student distillation (Hinton et al., 2015), and model ensembling as techniques to achieve positive-congruent training. Zhao et al. (2022) continued this line of work by extending and combining the distillation and ensembling ideas into a single method called ELODI. With a slightly different focus, namely accepting or rejecting the predictions of a new model rather than training it, Träuble et al. (2021) proposed a Bayesian approach to reduce negative flips. In our work, we focus on Yan et al.'s (2021) focal distillation method as it is most applicable to our real-world use case where we cannot afford the use of model ensembles because of their computation, storage and latency overhead. Xie et al. (2021) first applied the methods to NLP tasks. They found that negative flips are also prevalent during model updates for NLP tasks and demonstrated mitigations with distillation and ensembling methods in line with the earlier work. Concurrent to our work, Cai et al. (2022) extended positive-congruent training ideas to structured prediction tasks like parsing, which require extensions such as sequence distillation (Kim and Rush, 2016) or reranking. Also concurrent to our work, Schumann et al. (2023) introduced an importanceweighted interpolation method that they find to outperform focal distillation on intent classification benchmarks. We plan to incorporate their findings in our future work. 
Continual learning (also known as incremental learning, sequential learning or lifelong learning) is closely related to our work (McCloskey and Cohen, 1989; Silver and Mercer, 2002; Biesialska et al., 2020). While we focus specifically on avoiding negative flips, continual learning is more general and studies continuous training of models on evolving data and tasks, with a particular focus on avoiding catastrophic forgetting. The latter is a challenge if data for previously learned features is no longer available; it is however less relevant for our application scenario, a real-world goal-oriented dialog system, with ongoing user interactions covering all features. ## 3 Methods Application Scenario Spoken language understanding models are a core component in many goal-oriented dialog systems. They map a natural language request to a machine-readable meaning representation that the system can act upon to fulfill the request. In our experiment setup, this is modelled as a combination of domain classification (DC), intent classification (IC) and slot labeling (SL). Consider the example *Play Michael Jackson*. DC recognizes this request as a *Music* request, IC detects a *PlayMusic* intent and SL identifies Michael Jackson as *Artist* slot, whereas *Play* does not represent a slot in this case. Negative Flips Let x ∈ X be a model input (e.g. an utterance), y ∈ Y its ground truth label (e.g. an intent label) and p(y|x) a model that can be used to predict yˆi = argmaxy p(y|xi). A negative flip occurs if a new model incorrectly predicts a sample that the previous model predicted correctly, i.e. if yˆ new i̸= yi and yˆ old i = yi. The negative flip rate (NFR) measures the fraction of samples where a correct prediction turns incorrect between two models in a dataset with size N. Yan et al. (2021) define it as ${\qquad NFR=\frac{1}{N}\sum_{i=1}^{N}1(\hat{y}_i^{new}\neq y_i\wedge\hat{y}_i^{old}=y_i)}$ (1) ##### Focal Distillation (FD) Focal distillation, as in ... Focal Distillation (FD) Focal distillation, as introduced by Yan et al. (2021) and illustrated in Figure 1, aims to reduce negative flips by minimizing ![2_image_0.png](2_image_0.png) the loss $$\mathcal{L}_{CE}(\hat{y}^{new},y)+\lambda\mathcal{L}_{FD}(p^{new}(y|x),p^{old}(y|x))\tag{2}$$ where LCE denotes the standard cross entropy (CE) loss between new model and ground truth and LF D is the additional focal distillation (FD) loss term that discourages negative flips, with a trade-off parameter λ. This loss term is formally defined as $$\begin{array}{c}{{{\mathcal{L}}_{F D}=-{\mathcal{F}}(x,y)\cdot{\mathcal{D}}(p^{n e w},p^{o l d})}}\\ {{{\mathcal{F}}(x,y)=\alpha+\beta\cdot1({\hat{y}}^{o l d}=y),}}\end{array}\quad(3)$$ where D is a distance between the output distributions of the new and old model, and F(*x, y*) is a "filtering" function. It applies a weight α to all samples in the training set and an additional weight β to the samples correctly predicted by the old model. When α = 1 and β = 0, focal distillation reduces to ordinary distillation. When α = 0 and β > 0, we are only applying the distillation objective to the training samples predicted correctly by the old model. In their work, Yan et al. (2021) experiment with two choices for D: Kullback-Leibler (KL) divergence between temperature-scaled p(y|x) and mean-squared error (MSE) between pre-softmax logits. Since the latter performed better in their experiments, we adopt it. 
Hence, D is defined as $${\mathcal{D}}(p^{n e w},p^{o l d})={\frac{1}{K}}\sum_{j=1}^{K}(z_{j}^{n e w}(x)-z_{j}^{o l d}(x))^{2}\ (4)$$ where zj (x) is element j of the K-dimensional pre-softmax logit vector for x. FD for Slot Labeling In slot labeling, a sequence of labels has to be predicted instead of just a single label as in DC or IC. Naturally, there are two options to apply distillation in that case: either apply the loss independently to each token or use the reference model's sequence-level decision for supervision. For the latter, Wang et al. (2020) proposed multiple techniques. In this work, we resort to the simpler token-level distillation for now and leave sequence-level distillation for future work. We compute the FD loss for each token j in the sequence i with length M as $${\mathcal{L}}_{T o k}^{F D}=-\sum_{j=1}^{M}{\mathcal{F}}(x_{j},y_{j}){\mathcal{D}}\left(p_{j}^{n e w},p_{j}^{o l d}\right)\qquad\mathrm{(5)}$$ Notice that this formulation works both if the models perform token-level decisions, and if they perform sequence-level decisions. In the latter case, when training the new model, the token-level FD loss is summed to the sequence-level loss. FD with Noisy Labels FD biases the new model towards the old model's predictions when those predictions are correct. To discern correct predictions we rely on accurate labels. However, real-world data is often noisy. Therefore we investigate the combination of FD with label noise detection. We experiment with Area Under the Margin (AUM), a method suggested by Pleiss et al. (2020). The method leverages the observation that mislabeled data hurts generalization, and thus monitors the training dynamics to define the *margin* of a sample. The margin M at epoch t of sample (*x, y*) measures how much larger the assigned logit is than the largest other logit. Let z t(x) ∈ R c be the logit vector of sample (*x, y*) at epoch t. Then M at t is $$M^{(t)}(x,y)=z_{y}^{(t)}(x)-m a x_{k\neq y}\,z_{k}^{(t)}(x)\quad\quad(6)$$ where logit z (t) kcorresponds to class k. The first term corresponds to the assigned logit, while the second is the largest other logit. If a sample is mislabelled, the assigned logit tends to receive weaker gradient updates due to the tension between generalization from similar, correctly labeled samples and memorization of the sample itself. For instance, an utterance that is semantically similar to others labelled as *PlayMusic*, but is incorrectly labelled as *GetWeather*, results in the model predicting the true class with more confidence (higher logit) and assigning lower logit (confidence) to the incorrect label. As a consequence, a correctly labeled sample will have a larger margin than a mislabeled sample in expectation. Each sample's margin is measured during training and averaged over all epochs T: $$A U M(x,y)={\frac{1}{T}}\sum_{t=1}^{T}M^{t}(x,y)$$ We then use this measure as an additional term in the FD objective to re-weight the FD loss contributions of mislabeled samples: $${\mathcal{L}}_{A U M}^{F D}=-\underbrace{g(A U M(x,y))}_{\mathrm{noise-aware~weight}}\underbrace{F(x,y){\mathcal{D}}(p^{n e w},p)}_{\mathrm{standard~FD~loss}}$$ new, pold) where $g(\cdot)$ simply rescales $AUM$ into $[0,1]$. (8) ## 4 Experimental Setup We run experiments on both public and internal data. For our experiments on public data, we use SLURP (Bastianelli et al., 2020), an English multi-domain dataset for NLU spanning across 18 domains, 60 intents and 55 slot types (ca. 16,000 utterances). 
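For concreteness, the following is a minimal PyTorch-style sketch of how the combined objective of Equations (2)–(4) could be implemented, using the hyperparameters α = 1, β = 5 and λ = 1 reported in Appendix A. The function and variable names are our own, the FD term is written directly as a penalty on the logit distance (the effect intended by Equations (2)–(4)), and the snippet is an illustrative simplification rather than the exact implementation used in our experiments.

```python
import torch
import torch.nn.functional as F

def focal_distillation_loss(new_logits, old_logits, labels,
                            alpha=1.0, beta=5.0, lam=1.0):
    """Cross entropy plus focal distillation for a single-label task.

    new_logits, old_logits: (batch, K) pre-softmax logits of the new
    and old model; labels: (batch,) ground-truth class ids.
    """
    # Standard supervised term.
    ce = F.cross_entropy(new_logits, labels)

    # The old model is frozen; no gradients flow through its logits.
    old_logits = old_logits.detach()

    # Filtering function F(x, y): weight alpha on every sample plus an
    # extra beta on samples the old model classified correctly (Eq. 3).
    old_correct = (old_logits.argmax(dim=-1) == labels).float()
    filt = alpha + beta * old_correct                      # (batch,)

    # D: mean squared error between pre-softmax logits (Eq. 4).
    logit_mse = ((new_logits - old_logits) ** 2).mean(dim=-1)

    fd = (filt * logit_mse).mean()
    return ce + lam * fd
```

For slot labeling, the same term can be applied to each token and summed over the sequence, as in Equation (5).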
In addition, we present results on our internal datasets for English and German. These datasets comprise live traffic utterances, deidentified and anonymized for privacy reasons, then annotated to enable supervised training. For the internal datasets, the number of slot types is domainspecific. In the experiments for SL we employ three domain-specifc internal datasets, referred to as INTG, *INT-M* and *INT-S*, that have 88, 101 and 35 slot types, respectively. Results on public data are averaged across 5 seeds, while we only train once on | Accuracy ↑ | NFR ↓ | | | | | |--------------|------------|------------|-----------|-----------|------| | Task | Method | abs. | rel. | abs. | rel. | | DC | Baseline | 91.36±0.35 | - | 2.17±0.16 | - | | FD | 90.67±0.59 | -0.75 | 1.57±0.59 | -27.64 | | | IC | Baseline | 88.63±0.45 | - | 2.47±0.41 | - | | FD | 88.24±0.51 | -0.44 | 1.63±0.53 | -33.79 | | Table 1: Test results for applying FD to DC and IC on SLURP under data update. | Task | Dataset | Accuracy ↑ | NFR ↓ | |--------------|----------------------|--------------|---------| | DC | English | +0.12 | -54.94 | | German | -0.07 | -9.31 | | | IC | English Cross-Domain | 0.39 | -35.59 | | German INT-M | -0.02 | -3.31 | | $$\mathbf{\Pi}(7)$$ internal data. In our experiments we examine two settings: (i) A *data update* scenario, in which we only update the training data leaving the model architecture unchanged. In this scenario, 50% of the samples are left out when training the old model, while the complete dataset is used when training the new model, either with the baseline approach or FD. (ii) A *label introduction* scenario, in which we gradually introduce a new label in the dataset, training n models in sequence on datasets in which we uniformly increase the support for that label. For implementation details, we refer the reader to Appendix C. Across all experiments, we compare FD with the *baseline* approach of simply retraining the model on the whole training data, without any additional signal from the previous model. All models employ a BERT-based (Devlin et al., 2018) architecture: a pre-trained encoder extracts contextualized semantic word embeddings, then fed either to a Multi-Layer Perceptron (MLP) in case of DC and IC, or to a Conditional Random Field (CRF) (Lafferty et al., 2001; Lample et al., 2016) in case of SL, to obtain either sequence-level or word-level predictions. We experiment also with the introduction of *warm start* for the new model, i.e. the model's weights are initialized with those of the previous model. See Appendix A for more details. ## 5 Experimental Results We present results for each of the research questions raised in the introduction. ![4_image_0.png](4_image_0.png) | Task | Method | Accuracy ↑ | NFR ↓ | | | |--------|----------|--------------|---------|---------|----| | abs. | rel. | abs. | rel. | | | | SL | Baseline | 92.9±0.2 | - | 1.6±0.1 | - | | FD | 92.4±0.1 | -0.46 | 1.5±0.1 | -7.93 | | Table 3: Test results for applying FD to slot labeling on SLURP under data update. | Task | Dataset | Accuracy ↑ | NFR ↓ | |---------------|---------------|--------------|---------| | English INT-G | +0.09 | -15.94 | | | SL | English INT-M | -0.01 | -3.66 | | English INT-S | -0.01 | -2.86 | | Is FD effective for DC and IC? Results on public data are displayed in Table 1. FD reduces NFR for both DC and IC in similar magnitude, by 27.64% for DC and by 33.79% for IC, while only decreasing accuracy slightly by 0.75% for DC and 0.44% for IC. 
Results on internal data are shown in Table 2. On internal data, we can only disclose relative changes to baseline, no absolute metrics. For experiments on German data we use the full dataset and the production model. While we consider all domains for DC, we only consider intents within a single domain for IC since training is expensive and time-consuming. For English we only use 10% of the full training set and a surrogate model from Huggingface to speed up the experiments (see Appendix A). Also on internal data, FD reduces NFR for both DC and IC. Again accuracy is only slightly reduced, for DC on English data we even see a slight increase in accuracy. NFR reduces less significantly on German data, which can be explained by the fact that the training set for the old and the new model are the same and negative flips only stem from randomness in training. For the English dataset, we simulate an increase in training data, as explained in Section 4. Can FD be used for token-level SL? As mentioned above, we employ models with a CRF layer for SL, able to make structured predictions. However, we experiment with token-level FD (see Equation (5)) that takes as input the token logits directly, instead of the top-scoring label path from the CRF. Therefore, the CRF layer of the student model is not affected by the additional distillation objective. Results on public data are shown in Table 3. With only a slight decrease in accuracy of -0.46%, NFR can be reduced by 7.93%. Results on internal data are reported in Table 4. Here we see an even larger reduction of 15.94% in NFR on the INT-G dataset, while a less significant reduction is observed on the other datasets: -3.66% on INT-M and -2.86% on INT-S. For all datasets, changes in accuracy are negligible. We conclude that FD on token-level SL reduces NFR without harming accuracy. Does repeated FD restrict learning? Figure 2 reports the comparison of the baseline approach with FD in the label introduction scenario (spanning 5 iterations). We can observe how FD does not seem to negatively influence the overall accuracy over time of the model; on the contrary, the additional loss term seems to be moderately beneficial in helping the model learn the task compared to the baseline. The approach also behaves well in reducing the overall NFR of the new model. Interestingly, standard CE is slightly superior in absolute terms with respect to FD in learning the new label distribution. However, the gap remains fixed over time, therefore FD is not hindering the ability of | Task | Approach | Original | 20% Noise | 40% Noise | 60% Noise | | | | | |------------|------------|------------|-------------|-------------|-------------|------------|---------|---------|---------| | Accuracy ↑ | NFR ↓ | Accuracy ↑ | NFR ↓ | Accuracy ↑ | NFR ↓ | Accuracy ↑ | NFR ↓ | | | | Baseline | - | - | - | - | - | - | - | - | | | DC | FD | 0.34% | -53.02% | 0.46% | -43.07% | 0.10% | -32.84% | 0.75% | -47.81% | | FD+AUM | 0.11% | -24.01% | 0.46% | -43.46% | -0.21% | 6.13% | 0.72% | -31.53% | | | Baseline | - | - | - | - | - | - | - | - | | | IC | FD | 0.09% | -30.05% | -0.04% | 30.56% | 0.05% | -37.49% | 0.12% | -35.68% | | FD+AUM | 0.20% | -34.33% | 0.09% | -8.37%. | 0.02% | -18% | 0.12% | -32.88% | | Table 5: Test results (rel. change to baseline) for applying FD with AUM both on original internal dataset and on internal dataset with artificially added noise. 
| Task | Approach | Original | 20% Noise | 40% Noise | 60% Noise | | | | | |------------|------------|------------|-------------|-------------|-------------|------------|---------|---------|---------| | Accuracy ↑ | NFR ↓ | Accuracy ↑ | NFR ↓ | Accuracy ↑ | NFR ↓ | Accuracy ↑ | NFR ↓ | | | | Baseline | 0.90742 | 2.2416 | 0.7215 | 2.5331 | 0.505 | 5.9067 | 0.3692 | 3.5194 | | | DC | FD | -0.06% | -52.50% | -0.08% | -21.24% | 6.91% | -77.42% | 3.79% | -63.38% | | FD+AUM | 0.03% | -52.50% | 0.29% | -29.65% | 6.24% | -63.19% | 2.65% | -42.36% | | | Baseline | 0.8807 | 2.5106 | 0.6518 | 4.2319 | 0.4913 | 4.1913 | 0.2887 | 4.1034 | | | IC | FD | -0.01% | -25.89% | -6.15% | 4% | -3.26% | 22.97% | -3.05% | 31.91% | | FD+AUM | -0.28% | -12.59% | -1.58% | 2.06% | 0.98% | -14.15% | -6.44% | 12.48% | | Table 6: Test results (rel. change to baseline) for applying FD with AUM both on SLURP original dataset and on SLURP with artificially added noise. the model to learn, but only introducing an initial delay. Remarkably, FD is able to reduce regression on the newly introduced label already with a handful of samples, and consistently remains lower than the baseline on the NFR metric. Interestingly but not surprisingly, warm start helps both approaches in both metrics, with respect to the non-warm start alternative. This suggests that, in general, warm start is a useful strategy for retaining model performance during an update. However, it is clear from the results how FD benefits more from warmstart than the baseline, in terms of both accuracy improvement and NFR reduction. Further results for this setting (and the specular one of gradual removal of a label) are reported in Appendix E. Can FD cope with noisy labels? In order to verify the extent to which FD coupled with AUM is capable of dealing with increasing level of noise we experiment both on the public SLURP dataset as well as on the internal English dataset, and we also test the approach on specific versions of those datasets manipulated to artificially introduce varying levels of noise: 20%, 40%, 60%. The algorithm used to generate noise, together with a study of how AUM is able to detect it, is reported in Appendix D. Results on the internal dataset and SLURP are reported in tables 5 and 6, respectively. Overall we observe that integrating AUM into FD does not lead to significant improvement over vanilla FD. We believe the reason behind the lack of improvement is twofold: first, there might be a more effective way to integrate the AUM signal into the FD objective; secondly, the models trained with the baseline, especially on internal data, already exhibit low NFR, therefore there is little margin for improvement. On the other hand, FD with AUM is not detrimental, neither on original nor the noisy datasets: when the level of label noise is significant, AUM helps FD recovering its performance; when the label noise if less present (if at all), AUM does not significantly decrease FD performance. ## 6 Conclusions In this paper, we presented an extensive set of experiments to evaluate the effectiveness of focal distillation to reduce negative flips in a real-world goal-oriented dialog system. We found the technique to be effective in DC, IC and SL with only minor accuracy drops. When used repeatedly over multiple updates, the effect remains while still allowing the model to learn new labels. In addition, the method is also robust to labeling errors in the training data. 
As future work, we plan to extend our experiments to alternative techniques for negative flip reduction, in particular those proposed concurrent to our work, and to experiment with potentially more powerful sequence-level distillation for slot labeling. ## Limitations A first limitation of our contribution stems from the fact that to compute the focal distillation term in the loss, predictions from the old model are required. This additional stream of information will therefore cause a slight increase in the required computational power. In this work, we only experimented with FD based on mean-squared error between pre-softmax logits as that approach yielded best results in the paper our experiments are based on, leaving experiments using FD with Kullback-Leibler divergence between temperature-scaled softmax outputs for future research. Due to inference time limitations in a production setting, we did not investigate the reduction of negative flips with ensembles either. Finally, we have not tested more principled approaches for NER distillation and focused on tokenlevel distillation leaving sequence-level distillation for future work. ## References Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser. 2020. SLURP: A spoken language understanding resource package. In *Proceedings of the 2020 Conference on Empirical Methods* in Natural Language Processing (EMNLP), pages 7252–7262, Online. Association for Computational Linguistics. Magdalena Biesialska, Katarzyna Biesialska, and Marta R. Costa-jussà. 2020. Continual lifelong learning in natural language processing: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6523–6541, Barcelona, Spain (Online). International Committee on Computational Linguistics. Deng Cai, Elman Mansimov, Yi-An Lai, Yixuan Su, Lei Shu, and Yi Zhang. 2022. Measuring and reducing model update regression in structured prediction for nlp. In *NeurIPS 2022*. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina N. Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. WA Falcon. 2019. Pytorch lightning. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 4163– 4174, Online. Association for Computational Linguistics. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327, Austin, Texas. Association for Computational Linguistics. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. 
In *Psychology of learning and motivation*, volume 24, pages 109–165. Elsevier. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. 2020. Identifying mislabeled data using the area under the margin ranking. *Advances in* Neural Information Processing Systems, 33:17044– 17056. Raphael Schumann, Elman Mansimov, Yi-An Lai, Nikolaos Pappas, Xibin Gao, and Yi Zhang. 2023. Backward compatibility during data updates by weight interpolation. Yantao Shen, Yuanjun Xiong, Wei Xia, and Stefano Soatto. 2020. Towards backward-compatible representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6368–6377. Daniel L. Silver and Robert E. Mercer. 2002. The task rehearsal method of life-long learning: Overcoming impoverished data. In *Advances in Artificial Intelligence*, pages 90–101, Berlin, Heidelberg. Springer Berlin Heidelberg. F. Träuble, J. von Kügelgen, M. Kleindessner, F. Locatello, B. Schölkopf, and P. Gehler. 2021. Backward-compatible prediction updates: A probabilistic approach. In *Advances in Neural Information Processing Systems 34 (NeurIPS 2021)*, pages 116–128. Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Fei Huang, and Kewei Tu. 2020. Structure-level knowledge distillation for multilingual sequence labeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3317–3330, Online. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. Yuqing Xie, Yi-An Lai, Yuanjun Xiong, Yi Zhang, and Stefano Soatto. 2021. Regression bugs are in your model! measuring, reducing and analyzing regressions in NLP model updates. In *Proceedings of the* 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6589–6602, Online. Association for Computational Linguistics. Sijie Yan, Yuanjun Xiong, Kaustav Kundu, Shuo Yang, Siqi Deng, Meng Wang, Wei Xia, and Stefano Soatto. 2021. Positive-congruent training: Towards regression-free model updates. In *Proceedings of the* 2021 Conference on Computer Vision and Pattern Recognition, pages 14299–14308. Yue Zhao, Yantao Shen, Yuanjun Xiong, Shuo Yang, Wei Xia, Zhuowen Tu, Bernt Schiele, and Stefano Soatto. 2022. Elodi: Ensemble logit difference inhibition for positive-congruent training. ## A Experiment Details The experiments have been run on p3.16xlarge EC2 instances1, equipped with eight NVIDIA Tesla V100 GPUs2. As optimization framework, PyTorch (Paszke et al., 2019) (version 1.10.0) has been used, along with PyTorch Lightning (Falcon, 2019) (version 1.8.6) for easier development and faster experimental iterations. 
Across all the experiments on English corpora, the encoder is based on pre-trained BERT (Devlin et al., 2018) models from HuggingFace (Wolf et al., 2019), with their weights unfrozen during training, hence allowing their fine-tuning. For DC and IC experiments on German corpora, a custom pretrained tiny-bert (Jiao et al., 2020) is used. All models feature a two-layer, fully-connected MLP mapping word or sentence embeddings into labelspace. Additionally, the model for SL employs a CRF layer to make structured predictions about the label sequence, taking estimated label-label transition probabilities into account. For the SL experiments, we obtain subword token-level embeddings on the English corpora by summing the hidden states of the last 3 layers of the encoder. On the German corpora, the model considers the last hidden states. In the latter, word-level embeddings (aligned with slot labels) are obtained performing an average subword pooling, i.e. for each input text token we take the average embedding of all its corresponding subword tokens. In the former, the last subword token embedding is considered. For models trained with Focal Distillation, we follow the authors' suggestion in Yan et al. (2021) and set α = 1, β = 5 and λ = 1 for all experiments. Table 7 reports the hyperparameters used to train the models across all the experiments. All models are trained to convergence using early stopping, monitoring model performance on a held-out validation set as convergence condition. ## B Data Update Scenario Over time, the data available to train a predictive model can change for various reasons. In a supervised learning setting, one simple reason may be the acquisition of more labeled data: human annotators review existing unlabeled instances and assign labels to them, enlarging the training corpus. 1https://aws.amazon.com/ec2/instance-types/ p3/ 2https://www.nvidia.com/en-us/data-center/ v100/ | Parameter | Value | |-------------------------|---------------------| | Learning Rate | 5e-5 | | Optimizer | Adam | | Max epochs | 20 | | Embedding size | 768 | | Hidden size | 256 | | Dropout | 0.1 | | Activation | ReLU | | Validation split | 0.1 | | Early stopping metric | Validation F1 score | | Early stopping delta | 1e-3 | | Early stopping patience | 5 epochs | | Focal Distillation α | 1 | | Focal Distillation β | 5 | | FD trade-off λ | 1 | Table 7: Hyperparameter values for models used in the experiments. In this work, we refer to this event as a *data update*, and study the impact of applying FD in this scenario. The results presented in section 5 examine in particular a scenario in which the amount of available training data is doubled for the new model. This is realized by simply training the old model on 50% of the overall training dataset, then training the new model (with either the baseline or FD) on 100% of the training samples. ## C New Label Scenario Another possible reason for a change in the training data is the addition of data supporting new classes. New classes appearing in the training dataset of an already deployed model may be the result of the definition of a new downstream feature that the model has to support. In this work, we refer to this event as a *label introduction*, and study the impact of applying FD in this scenario. Usually, data supporting a new feature is not readily available, but rather comes in batches as human annotators work to provide new labeled data based on the feature definition. 
For this reason, in this work we study the impact of FD on a gradual introduction of a new label. In particular, the scenario is implemented as follows: (i) a label is chosen for scenario simulation and completely removed from the dataset, i.e. all the samples belonging to that class are removed; (ii) a *schedule* for introducing the label in the following n "releases" of the dataset is stated. For simplicity, we assume the rate at which newly labelled data becomes available is constant over time, and therefore the schedule simply dictates that a fixed amount of labelled data is reintroduced at each iteration. To do so, data pertaining to the removed labels is evenly partitioned in n batches, and the i-th dataset is simply the union of the previous dataset in the sequence and the i-th batch. In this work we set the number of releases to n = 5. We run experiments on the SLURP dataset for Domain Classification using the qa domain and for Intent Classification using the news_query intent. This choice reflects two competing needs: on one hand, we want to reflect the observed reality of new features not becoming the predominant classes in the dataset in terms of data, even after a long time; on the other hand, to report statistically significant results we need more than a handful of samples to be removed. As a result, we choose labels that are neither the prevalent classes nor the scarcest, but are averagely represented. When a label is introduced for the first time, we set the MSE loss in FD to zero for samples with the new label as the previous model cannot provide useful information for those. That means we zero out the logits for the new label coming from the old model by concatenating a zero tensor to the logits coming from the old model. As a result, the contribution to the MSE loss in FD is zero for the new label, falling back to only Cross Entropy loss for samples with the new label. ## D Area Under The Margin And Noise Generation Procedure Pleiss et al. (2020) introduce the concept of Area Under the Margin (AUM), and demonstrate its ability to identify mislabelled samples in syntheticallymislabeled versions of popular Computer Vision datasets, such as CIFAR10. Their approach makes no assumption about the specific task under consideration, but only draws on the insight that a neural network's training dynamics contain salient signals about noisy data and generalization. In this work, however, before testing the interaction of AUM with FD in a noisy data setting we test the ability of AUM of spotting noise in our Natural Language Understanding (NLU) setting to begin with. To do so, we repeat the synthetically-mislabeled experiment on our datasets. ## Algorithm 1 Label Noise Generation Input: true labels Y , noise level nl ∈ [0, 1] Output: assigned labels Y˜ 1: N ← | Y | 2: L ← {y | y ∈ Y } 3: Nf lip ← ⌈nl· N⌉ 4: for i = 0 → N*f lip* do 5: yi ← sample an item uniformly at random from Y without replacement 6: L˜ ← {y | y ∈ L ∧ y ̸= yi} 7: y˜i ← sample a label uniformly at random from L˜ 8: change yiinto y˜iin Y 9: **end for** Table 8 reports the noise levels estimated in the synthetically-mislabeled datasets. Pleiss et al. (2020) introduce *threshold samples*, purposefully mislabeled samples belonging to an extra class, to identify a AUM upper bound that isolates mislabeled data (see algorithm 2). In particular, they establish that the 99th percentile of threshold AUM values separates correctly- and mislabeled data. 
Notice that this mechanism would introduce additional complexity for coupling the AUM approach with FD, since we do not wish for the extra class to be present in the output distribution of the new model trained with FD. Therefore, beside testing vanilla AUM in the NLU setting, we test whether simply observing the *sign* of the AUM values is a satisfying proxy metric of the true AUM metric. Synthetic noise is injected using algorithm 2 for the former, and algorithm 1 for the latter. We can see how standard AUM is able to estimate the noise level quite accurately, with an average (absolute) estimation error of 1.71%. The simpler variant is less competitive in estimating noise levels, reporting an average estimation error of 3.70%. Interestingly, the variant consistently overestimates noise levels for the SLURP dataset in the IC setting, exhibiting a sensitivity to the ratio between label space dimension and dataset size. Indeed, moving from the DC task to the IC task, the number of samples remains constant but the label space nearly triples in dimension. While the same holds roughly true also for the INT-G dataset, its size is considerably larger than SLURP. We hypothesize this is due to the approach having to rely on fewer samples to observe training dynamics, leading to a less informative metric computation. Algorithm 2 Label noise generation when using threshold samples Input: true labels Y , noise level nl ∈ [0, 1] Output: assigned labels Y˜ , threshold samples ¯I 1: N ← | Y | 2: L ← {y | y ∈ Y } 3: Nf lip ← ⌈nl· N⌉ 4: Nthreshold ← ⌈N · (| L | + 1)⌉ 5: y¯ ← (| L | + 1) ▷ new class for threshold samples 6: ¯I *← { }* 7: for i = 0 → N*f lip* do 8: yi ← sample an item uniformly at random from Y without replacement 9: L˜ ← {y | y ∈ L ∧ y ̸= yi} 10: y˜i ← sample a label uniformly at random from L˜ 11: change yiinto y˜iin Y 12: **end for** 13: for i = 0 → N*threshold* do 14: yi ← sample an item uniformly at random from Y without replacement 15: change yiinto y¯iin Y 16: ¯I ← ¯I ∪ {i} 17: **end for** ## E Additional Results Figure 3 reports results for repeated application of FD to the IC task, in contrast with the DC task reported in fig. 2. Figures 4 and 5 present instead results for a specular setting to the "label introduction" one, in which we gradually remove data supporting a label. In figs. 6 to 9 we investigate the influence of model size on repeatedly applying FD. In particular, a tiny-bert encoder is used. Results suggest that FD becomes detrimental when the older model does not have sufficient "capacity" to accurately provide a distillation signal for the new model | Task | Dataset | 20% Noise | 40% Noise | 60% Noise | | | | |---------------|-----------|-------------|-------------|-------------|---------|---------|---------| | AUM < 99p | AUM < 0 | AUM < 99p | AUM < 0 | AUM < 99p | AUM < 0 | | | | DC | SLURP | 23.80 % | 26.42 % | 39.53 % | 42.14 % | 58.36 % | 61.89 % | | English INT-G | 23.32 % | 25.02 % | 40.25 % | 40.59 % | 57.24 % | 58.60 % | | | IC | SLURP | 21.84 % | 32.08 % | 36.87 % | 47.56 % | 57.22 % | 67.32 % | | English INT-G | 21.27 % | 21.81 % | 40.10 % | 39.77 % | 60.17 % | 61.23 % | | Table 8: Datasets noise estimation on synthetically-mislabelled dataset. In the first column, we consider standard AUM, in which we estimate that a sample is noisy when its AUM value is lower than than the 99-th percentile of the AUM values of the threshold samples. 
In the second column of each noise level we consider the simpler variant, in which we estimate that a sample is noisy when its AUM value is negative.

| Task | Approach | Original Accuracy ↑ | Original NFR ↓ | 20% Noise Accuracy ↑ | 20% Noise NFR ↓ | 40% Noise Accuracy ↑ | 40% Noise NFR ↓ | 60% Noise Accuracy ↑ | 60% Noise NFR ↓ |
|------|----------|---------------------|----------------|----------------------|-----------------|----------------------|-----------------|----------------------|-----------------|
| DC | Baseline | 0.9074 | 2.2416 | 0.7215 | 2.5331 | 0.505 | 5.9067 | 0.3692 | 3.5194 |
| DC | FD | 0.9069 | 1.0647 | 0.7209 | 1.9951 | 0.5399 | 1.3338 | 0.3832 | 1.2889 |
| DC | FD+AUM | 0.9077 | 1.0647 | 0.7236 | 1.7821 | 0.5365 | 2.1744 | 0.379 | 2.0287 |
| IC | Baseline | 0.8807 | 2.5106 | 0.6518 | 4.2319 | 0.4913 | 4.1913 | 0.2887 | 4.1034 |
| IC | FD | 0.8806 | 1.8605 | 0.6117 | 4.4013 | 0.4753 | 5.154 | 0.2799 | 5.4127 |
| IC | FD+AUM | 0.8782 | 2.1968 | 0.6415 | 4.3192 | 0.4961 | 3.5982 | 0.2701 | 4.6157 |

Table 9: Absolute results for the experiment on applying FD with AUM in the noisy labels setting on SLURP.
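For reference, the AUM statistic used above can be computed from per-epoch logits as in Pleiss et al. (2020): for each sample, the margin at an epoch is the logit of the assigned label minus the largest other logit, and AUM is the average of these margins over training. The sketch below, including the function names and the NumPy-based layout of the logit snapshots, is our own illustration of both the percentile-based and the sign-based noise estimates, not the implementation used in the paper.

```python
import numpy as np

def area_under_margin(logits_per_epoch, assigned_labels):
    """logits_per_epoch: array [epochs, samples, classes]; returns AUM per sample."""
    margins = []
    for logits in logits_per_epoch:                    # one snapshot per epoch
        idx = np.arange(len(assigned_labels))
        assigned = logits[idx, assigned_labels]        # logit of the assigned label
        others = logits.copy()
        others[idx, assigned_labels] = -np.inf
        margins.append(assigned - others.max(axis=1))  # margin at this epoch
    return np.mean(margins, axis=0)                    # average over epochs = AUM

def estimate_noise(aum, threshold_aum=None):
    """Fraction of samples flagged as noisy.
    If threshold_aum (AUM of the threshold samples) is given, use its 99th
    percentile as the cut-off (standard AUM); otherwise flag negative AUM."""
    cutoff = np.percentile(threshold_aum, 99) if threshold_aum is not None else 0.0
    return float(np.mean(aum < cutoff))
```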
le-etal-2023-reducing
Reducing cohort bias in natural language understanding systems with targeted self-training scheme
https://aclanthology.org/2023.acl-industry.53
Bias in machine learning models can be an issue when the models are trained on particular types of data that do not generalize well, causing underperformance for certain groups of users. In this work, we focus on reducing the bias related to new customers in a digital voice assistant system. It is observed that natural language understanding models often have lower performance when dealing with requests coming from new users rather than experienced users. To mitigate this problem, we propose a framework that consists of two phases: (1) a fixing phase with four active learning strategies used to identify important samples coming from new users, and (2) a self-training phase where a teacher model trained in the first phase is used to annotate semi-supervised samples, expanding the training data with relevant cohort utterances. We describe practical strategies that involve identifying representative cohort-based samples through density clustering as well as employing implicit customer feedback to improve new customers' experience. We demonstrate the effectiveness of our approach in a real-world, large-scale voice assistant system for two languages, German and French, through both offline experiments and A/B testing.
# Reducing Cohort Bias In Natural Language Understanding Systems With Targeted Self-Training Scheme Dieu-Thu Le Amazon Alexa AI deule@amazon.com Gabriela Cortes Amazon Alexa AI cortgab@amazon.com Bei Chen Amazon Alexa AI chenbe@amazon.com Melanie Bradford Amazon Alexa AI neunerm@amazon.com ## Abstract Bias in machine learning models can be an issue when the models are trained on particular types of data that do not generalize well, causing under performance in certain groups of users. In this work, we focus on reducing the bias related to new customers in a digital voice assistant system. It is observed that natural language understanding models often have lower performance when dealing with requests coming from new users rather than experienced users. To mitigate this problem, we propose a framework that consists of two phases (1) a fixing phase with four active learning strategies used to identify important samples coming from new users, and (2) a self training phase where a teacher model trained from the first phase is used to annotate semi-supervised samples to expand the training data with relevant cohort utterances. We explain practical strategies that involve an identification of representative cohort-based samples through density clustering as well as employing implicit customer feedbacks to improve new customers' experience. We demonstrate the effectiveness of our approach in a real world large scale voice assistant system for two languages, German and French through a number of experiments. ## 1 Introduction Deep machine learning models tend to inherit the bias existing in the datasets used for training (Manzini et al., 2019) (Zhao et al., 2017). For example, GPT-3 a state of the art in contextual language model, showed bias regarding religion, race and gender (Brown et al., 2020). Even though deep learning models are trained on large amounts of data, it is hard to capture all the variations of the language that different users can use. Even within the same language people talk differently, depending on the age group, part of the country, background, etc (Kern et al., 2016) (Eisenstein et al., 2010) (Hovy and Søgaard, 2015). If the training data is skewed towards a certain demographic group, this can cause models to pick up on patterns that do not generalize and underperform on certain user groups. Bias on predictive models is an issue that has been studied for some time. Most of the related literature is focused on social bias, specially gender and race (Zhao et al., 2017) (Manzini et al., 2019) and centered on measuring an specific type of bias and providing contra measures for it, which usually do not generalize to other types of bias (Zhao et al., 2018) (Goldfarb-Tarrant et al., 2020) (Garrido-Muñoz et al., 2021) (Dixon et al., 2018) (Shah et al., 2020). For example, on digital assistants, we identify other types of group bias, like customer tenure. Everyday, new customers join services like Amazon Alexa, Siri or Google Home. These new customers experience digital assistants for the first time and interact with it differently than mature cohort. New customers tend to try out more different functionalities, while mature customers often use utterances that work for them and settle down in daily-related domains. The experience of new customers is a closer reflection of how natural communication looks like as they are not yet "taught" how to communicate with the devices. 
Learning from new customers therefore might be one of the best ways to learn natural interactions with digital assistants. Contrary to most studies that focus on using semi-supervised learning for general accuracy (Chapelle et al., 2009a) (Clark et al., 2018) (Ding et al., 2018) (Hinton et al., 2015), we focus on improving the accuracy of the new customer (early cohort) natural language understanding task (McClosky et al., 2006) and show that our framework could target a strategic customer cohort to improve their experiences, thus improve the overall accuracy in all customers in an industry scale experiment. Even so, our approach can be easily applied to any customer cohort to mitigate other types of bias. Our proposal consists in a method to identify important utterances coming from the early cohort that need to be fixed, then em552 ploy self training techniques to mitigate them. The main idea is to automatically expand the training data to increase the representativeness of utterances that characterize early cohort customers. ## 2 Related Work Detecting and mitigating bias in model predictions have attracted a lot of studies recently. For example, (Zhao and Chang, 2020) proposed a bias detection technique based on clustering. Their approach focuses in local bias detection, which refers to bias exhibited in a neighborhood of instances rather than on the entire data. (Garrido-Muñoz et al., 2021) did a survey on bias in deep NLP, where they present a review of the state-of-the-art in bias detection, evaluation and correction, where they used vector space manipulation (Bolukbasi et al., 2016), data augmentation, data manipulation or attribute protection for dealing with the bias. (Shah et al., 2020) proposed a predictive bias framework for NLP and identified four potential origins of biases: label bias, selection bias, model over-amplification, and semantic bias. To mitigate model bias, common methods such as adversarial learning (Li et al., 2018; Le et al., 2022b), data augmentation with synthetic data generation using back translation (Sennrich et al., 2016), pretrained language model (Sahu et al., 2022; Wang et al., 2021; Kobayashi, 2018; Kumar et al., 2019; Le et al., 2022a) and semi-supervised learning (Cho et al., 2019; Zhu, 2005; Zhu and Goldberg, 2009; Chapelle et al., 2009b) have shown to be effective, especially when there is a lack of labeled data. While most of these studies focus on general biases in training models, we specifically aim at new customer cohort in a real world large scale voice assistant system. We employ both active and semisupervised learning approaches that take customer feedback into consideration to improve the model prediction on this specific cohort. ## 3 **Natural Language Understanding Task In** Early Cohort Early cohort is defined as a group of new customers who have started to use the voice assistant device within the last 7 days. In contrast, **mature** cohort refers to the group of customers that have used the device for at least more than 30 days. Typically, a voice assistant consists of different components, starting from WakeWord detection, to Automatic Speech Recognition (ASR) that converts voice signals to texts, which will be used by the Natural Language Understand (NLU) component. In this work, we focus on how improving the NLU part could help to improve the end to end experience of new customers. In this study, we use devices' response results and weak signals as a way to improve the system over time. 
In particular, friction is defined as commands from customers that the system failed to provide an answer to (e.g., when the system gives responses such that "sorry I do not understand"). We also consider **negative** feedback from customers as a signal that the system did not response well to their previous requests. Finally, in order to measure the impact of our approach, we carried out offline NLU experiments (testing on annotated data). The only change is the implemented early cohort self training scheme. ## 4 Our Approach We propose an end to end framework (Figure 1) to identify cohort representativeness and effective data selection and augmentation to improve the model performance on a specific cohort without degrading the overall performance. It is composed of two phases, with the first phase looks for utterances from early cohort that need to be fixed using active learning using different strategies. After these utterances are annotated with human annotators, they are included in the training data to train a new NLU teacher model. We then employ a self learning phase to further extend similar utterances using semi-supervised learning to have more representatives of samples coming from early cohort. ## 4.1 Active Learning Strategies We define phase I with active learning strategies to fix important utterances from early cohort that the model might struggle with. The aim is to select all utterances with the highest values to be annotated to improve the performance of the NLU system with a given budget of ζ annotated utterances. Let L = {(xi, yi)} |L| i=1 be a set of labeled training data that is currently used for the NLU model F with xi ∈ XL, a set of all labeled utterances including customer and/or synthesis utterances. We have U = {(xi, y′i )} |U| i=1 as a set of unlabelled data, which contains xi ∈ XU , a set of all unlabelled utterances. yi denotes labels from human annotators while y′i denotes the annotation coming from NLU model F. y′i contains the first hypothesis from F model and additional information about the ![2_image_0.png](2_image_0.png) corresponding utterance such as whether it belongs to the early cohort EARLY (i.e., utterances that occur during the 30 first days of the customers), the frictional group FRICTION or the LOW bin (i.e., utterances that have low confidence scores during NLU prediction). To select utterances that are relevant and important for new customers, we employ four different sampling strategies with a set of acquisition functions a = ⟨a (1), a(2), a(3), a(4)⟩ to select a set of T utterances to be annotated with T = {(xi, yi)|xi ∈ X} |ζ| i=1. The goal is to find the set of all X = {xi|xi *∈ U}* that provides the best model's performance P(F′) of model F′that is trained on L′ = *L ∪ T* . We cover (1) difficult utterances (uncertainty sampling), (2) wide coverage (diversity sampling) as well as (3) utterances that are representative of new customers (cohort-representativeness sampling) and finally (4) using customer feedbacks as an additional signal to trace back problematic utterances. The selection function gives us a set of classified utterances focusing on early cohort. The final set is the union of all four strategies (Algorithm 1). ## 4.1.1 Uncertainty And Diversity Sampling As a common approach in active learning, the first sampling strategy is to query for utterances that have low NLU confidence scores and utterances where texts are similar, but NLU hypotheses are different. 
Those are utterances that the model are unsure about its predictions. For diversity sampling, we select representative frictional utterances using k-means clustering, extracting the centroid of each cluster to get a set of representative broken utterances from early cohort. ## 4.1.2 Identification Of Cohort Representativeness To get a visualization on a target cohort friction data, we propose the following approach that takes into account contextual information embedded in BERT representations together with hidden topic modeling (Blei et al., 2003). While BERT embedding provides contextual information about how words are interacting and accompanying each other, topics project utterances to a hidden topical space that is easy for interpretation. We then compare the density area for each cohort with topic guidance to identify the areas that are representative of friction utterances from early cohort. The process is composed of two main steps (1) Step 1: inspired from BERT topic combination (Bianchi et al., 2021), we perform parameter estimation and data fitting, where the LDA (Latent Dirichlet Allocation) (Blei et al., 2003) topic estimation, Auto Encoder (Liou et al., 2014) and Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) are learned from generic frictional data. Note that this training phase is completely unsupervised, only the first NLU hypothesis domain labels are integrated into the training data for a better domain focus representation. (2) Step 2: Topic inference with BERT representation and transforming friction data from different cohorts (e.g. early and mature cohort) separately through Autoencoder and UMAP, we use density clustering with topic guidance to identify the areas that are representative of friction utterances from the early cohort. Training with the original data gave rather poor results since the friction data is very unbalanced with main focus on the bigger domains (e.g Music and Knowledge). LDA is not able to capture correctly other domains and classes when training on original data, but gave a much better results after upsampling minority classes. Furthermore, integrating NLU domain label hypotheses gives another dimension of information, hence improve domain focus and give a better labelling for interpreting topics. In the inference phase (Figure 2), friction data is included in its original distribution (e.g., no resampling is used). Early and mature cohort friction data are fed separately into the models. Before doing UMAP transformation, we employ density clustering with topic guidance to extract utterances that most characterize early cohort (i.e., are often asked by the early cohort and gave them frictions in compared to mature cohort). The visualization of early and mature cohort give insights into which topics are mostly asked, identifying domains that are usually confused to each other, top words that are used in each domain/topic that lead to friction. This helps to understand which types of requests from new customers need to be fixed. ## 4.1.3 Using Customer Feedback Inputs In this sampling approach, we look at utterances from new customers that might contain negative feedbacks (NF). To this end, we employ a binary classifier that predicts whether an utterance contains a negative feedback (e.g., "this is not what I meant", "you did not understand it"). If it is likely that an utterance contains a negative feedback, we trace back to the previous de-identified utterance that might have led to the negative feedback. 
This is the fourth sampling strategy used for querying utterances for active learning. ## 4.2 Data Augmentation With Self-Training Scheme Since the budget of ζ annotated utterances is limited, we want to combine SSL together with data augmentation as the second step for enriching the training data with utterances that best solve the problems of young cohort. Many recent studies have shown that augmented data with semisupervised learning (Chapelle et al., 2006) can boost the performance of text classification tasks with reduced number of annotated data. We integrate them together with the utterances selected for annotation in Phase I in a self-training scheme to select best utterances that can be augmented into the training set. In particular, the process consist of the following steps: 1. Take all data coming from T and with the output NLP model F′, retrained in Phase I. 2. F′runs on a new unlabelled set of utterances to achieve H1 and their scores. 3. Construct the set Tssl that contains all selected utterances that are most similar to those coming from T with the highest confidence to be added to the training data with a data retrieval module based on similarity search. Figure 3 shows how the SSL data selection works, where we search for early cohort most relevant utterances from the live traffic. Due to the large scale of the data, it is prohibitively expensive to search for relevant utterances from the deidentified live traffic data using pair wise similarity search. Therefore, we encoded and indexed all utterances once, clustering the data where each cluster is represented by their centroids, which are used as inverted file and indices (Johnson et al., 2019). For each of the selected early cohort utterances (that were annotated in Phase I), we find those that are most semantically similar (but are not identical). When a query vector comes in, a most suitable cluster found based on its similarity with the centroids is returned together with the top K-nearest utterances coming from the live traffic data. ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) ## 5 Experiment Setup 5.1 Models And Dataset We took aggregated and de-identified data for evaluating our framework for both German (DE) and French (FR) languages. The offline results are evaluated on human annotated test set that comes from the live traffic distribution. For the first phase we got 8K annotated utterances. For the second phase, we further enriched with ∼13K utterances using semantic retrieval for SSL. The offline results are reported with a sample test set containing 1M samples for DE and 800K samples for FR. ## 5.2 Metrics We report our offline results in semantic error rate (SEMER), which is calculated by the number of errors (at slot and intent level) divided by the total number of reference slots and intent classification error rate (ICER), which takes only intent classification error into consideration (see A.2 for more information). ## 6 Results We report offline results testing on annotated test data. Table 1 shows the relative changes of each phase in compared to a baseline model for both ICER and SEMER metrics. We observe a constant improvement across domains for both phases in German (DE) and French (FR) languages. In particular, the biggest gain (6.99% intent error and 6.1% semantic error reduction) is observed in German second phase, where we include semi-supervised learning with focus on early cohort. 
Among all domains, we see especially good improvements in Help, Notifications and Knowledge domains. These domains are also popular domains among new customers, who tend to try out different functionalities and require support (Help) from the devices to understand how to use them. We see also some small degradation (0.29% in ICER for French phase II), but no degradation with SEMER metric, when we take also slot information into consideration. Overall, the offline results show that both active and semisupervised learning are effective in improving the performance of the model. ## 7 Conclusions In this work, we provide an end to end framework for bias mitigation with a focus on early cohort. This framework is also general enough to apply to other customer cohorts and other types of bias. Our approach uses a combination of active and semisupervised learning techniques in a self-learning scheme for effective data selection and augmentation. Our main contribution is the identification | DE Phase I | FR Phase I | DE Phase II | FR Phase II | | | | | | |----------------|--------------|---------------|---------------|--------|---------|---------|--------|--------| | Domain | ICER | SEMER | ICER | SEMER | ICER | SEMER | ICER | SEMER | | Music | -1.48% | -0.99% | 1.29% | 1.36% | -2.74% | -1.82% | -0.59% | +2.13% | | Global | +2.31% | +1.63% | -1.08% | -1.78% | -2.23% | -1.95% | 0.96% | +0.15% | | HomeAutomation | -0.61% | -0.20% | -1.13% | -0.4% | -0.79% | -1.27% | 0.97% | +1.64% | | Knowledge | -0.13% | -0.56% | -1.49% | -3.41% | -5.44% | -5.16% | 2.89% | -1.99% | | Notifications | -1.16% | -0.20% | -1.8% | -1.99% | -41.11% | -32.79% | -1.23% | -0.8% | | Communication | +0.00% | -2.44% | -3.5% | -4.01% | -3.51% | -1.48% | 0% | -3.69% | | LocalSearch | +1.59% | +1.52% | 1.46% | -1.65% | -1.17% | -0.71% | -0.35% | +0.69% | | Help | -1.46% | -1.93% | -5.21% | -5.31% | +1.08% | -3.62% | -0.33% | +1.12% | | Overall | -0.12% | -0.40% | -1.43% | -2.15% | -6.99% | -6.10% | 0.29% | -0.09% | ![5_image_0.png](5_image_0.png) of the cohort representativeness where we use a combination of BERT topic embeddings with Autoencoder and density clustering to create a better representation of each cohort data and identify the contrastive area, where the new customers' data is missing. Furthermore, we applied SSL using a data retrieval module based on similarity search to augment the training data relevant to the early cohort. We compared a model that was trained on a random set of data with a model that was selected based on the active semi supervised learning approach. The proposed approach improves overall semantic and intent error rate for both German and French languages during offline testing. ## Limitations In this work, we have employed different strategies to identify the important utterances from early cohort. However, since a voice assistant system consists of many components, such as Wakeword, automatic speech recognition, NLU, dialogue manager, where errors occurring in one step might result to the final overall incorrect response. We have not discussed or considered the interaction among these components in this study. 
Last but not least, weak signal learning using users' feedbacks has shown to be beneficial in many studies, it is important to classify and identify the types of feedbacks that are relevant and those that are not relevant to NLU improvement (e.g., a negative feedback might not be caused by an immediate previous request, but be caused by other factors such as unsupported features, ASR incorrect recognition, device technical problems). ## Ethics Statement In the self learning phase, we have increased the representativeness of early cohort utterances in the training data. While it helps to improve the end to end experience of new customers, the method described in this work focuses on improving common customers and potentially introduces bias into the training data as well as the model. For example, minor customers that are not well represented in the live traffic will have lower chances of having their types of requests fulfilled through the active semi-supervised learning phased. Similarly, certain types of customers (e.g., those who use the device frequently) may have better chances of having correct NLU predictions overtime, while the system might still struggle dealing with rare requests in some specific domains. ## References Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, and Elisabetta Fersini. 2021. Cross-lingual contextualized topic models with zero-shot learning. In *Proceedings of the 16th Conference of the European Chapter of the Association for Computational* Linguistics: Main Volume, pages 1676–1683, Online. Association for Computational Linguistics. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. *Journal of machine* Learning research, 3(Jan):993–1022. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. O. Chapelle, B. Scholkopf, and A. Zien, Eds. 2009a. Semi-supervised learning (chapelle, o. et al., eds.; 2006) [book reviews]. *IEEE Transactions on Neural* Networks, 20(3):542–542. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. *IEEE* Transactions on Big Data, 7(3):535–547. Margaret L. Kern, Gregory J. Park, Johannes C. Eichstaedt, H. A. Schwartz, Maarten Sap, Laura K Smith, and Lyle H. Ungar. 2016. Gaining insights from social media language: Methodologies and challenges. Psychological methods, 21 4:507–525. Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. 2009b. Semi-supervised learning. *IEEE Transactions on Neural Networks*, 20(3):542–542. Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, editors. 2006. *Semi-Supervised Learning*. The MIT Press. Eunah Cho, He Xie, and William M Campbell. 2019. Paraphrase generation for semi-supervised learning in nlu. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 45–54. 
Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. Cheng-Yuan Liou, Wei-Chen Cheng, Jiun-Wei Liou, and Daw-Ran Liou. 2014. Autoencoder for words. Neurocomputing, 139:84–96. Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sanchez, Mugdha Pandya, and Adam Lopez. 2020. Intrinsic bias metrics do not correlate with application bias. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In *Proceedings of the Human Language Technology Conference* of the NAACL, Main Conference, pages 152–159, New York City, USA. Association for Computational Linguistics. Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. Dirk Hovy and Anders Søgaard. 2015. Tagging performance correlates with author age. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 483–488, Beijing, China. Association for Computational Linguistics. Gaurav Sahu, Pau Rodriguez, Issam Laradji, Parmida Atighehchian, David Vazquez, and Dzmitry Bahdanau. 2022. Data augmentation for intent classification with off-the-shelf large language models. In Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In *Proceedings of the 2018 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–457. Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3609–3619. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In *Proceedings* of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914–1925, Brussels, Belgium. Association for Computational Linguistics. Dieu-Thu Le, Jose Garrido Ramas, Yulia Grishina, and Kay Rottmann. 2022a. De-biasing training data distribution using targeted data enrichment techniques. In *KDD 2022 Workshop on Deep Learning Practice* and Theory for High-Dimensional Sparse and Imbalanced Data (DLP). Yifan Ding, Liqiang Wang, Deliang Fan, and Boqing Gong. 2018. A semi-supervised two-stage approach to learning from noisy labels. Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In *Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics,* and Society, AIES '18, page 67–73, New York, NY, USA. Association for Computing Machinery. Hieu Le, Dieu-Thu Le, Verena Weber, Chris Church, Melanie Bradford Kay Rottmann, and Peter Chin. 2022b. Semi-supervised adversarial text generation based on seq2seq models. In *EMNLP 2022*. Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. 
In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1277–1287, Cambridge, MA. Association for Computational Linguistics. Ismael Garrido-Muñoz, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L. Alfonso UreñaLópez. 2021. A survey on bias in deep nlp. Applied Sciences (Switzerland), 11. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Proceedings of the 4th Workshop on NLP for Conversational AI, pages 47–57, Dublin, Ireland. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96. Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*. Association for Computational Linguistics. Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. Jieyu Zhao and Kai-Wei Chang. 2020. Logan: Local group bias detection by clustering. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. Xiaojin Zhu and Andrew B Goldberg. 2009. Introduction to semi-supervised learning. *Synthesis lectures on artificial intelligence and machine learning*, 3(1):1–130. Xiaojin Jerry Zhu. 2005. Semi-supervised learning literature survey. ## A Appendix A.1 Algorithm For Sampling Strategies Algorithm 1 Sampling strategies for early cohort Input: current NLU model release F set of recent live traffic data U = {(xi, y′i)} |U| i=1 K: the number of clusters used for diversity sampling strategy Output: A boosting model F ′built on top of F 1) Initialize T = ∅ 2) From a set of live traffic data U, select utterances to be annotated with acquisition functions a = ⟨a (1), a(2), a(3), a(4)⟩ 3) **Uncertainty sampling** Select X (1) using a (1) that selects utterances with (1) low confidence from early cohort that causes friction or (2) have different annotations while the utterance texts are the same. 
a (1)(xi) = 1, (xi, y′i) ∈ EARLY, FRICTION, (xi, y′i) ∈ LOW or ∃(xk, y′k) where xi = xk, y′i ̸= y ′k 0*, otherwise* Send X (1) for human annotation to get T (1) = {(xi, yi)|xi ∈ X (1)} |X(1)| i=1 4) **Diversity sampling** Using k-means algorithm on a set of U YF = {(x_i, y_i') | (x_i, y_i') ∈ EARLY,(xi, y′i) ∈ FRICTION} |U i=1 YF|K*isthenumberofclustersandζ* (2) is the utterance budget for annotation Assign initial values for E(x1), E(x2), ..., E(xUY F ) repeat assign each item E(xi) to the cluster with the closest centroid; calculate new centroid for each cluster until converge For each cluster, select ζ (2)/K representative utterances to be added to X (2) Send X (2)for human annotation to get T (2) = {(xi, yi)|xi ∈ X (2)} |X(2)| i=1 5) **Using customer feedback inputs** Let UNF = {(xi, y′i)|(xi, y′i) ∈ EARLY,(xi, y′i) ∈ PNF_i = 1|UPNF| where PNF = {(xi, y′i)|(x next i, y′i next) ∈ EARLY,(x next i, y′i next) ∈ NF} |UNF| i=1 defines a group of all previous utterances of those that are classified as containing negative feedbacks. Send X (3) for human annotation to get T (∋) = {(xi, yi)|xi ∈ X (3)} |X(3)| i=1 6) **Using cohort representative data with density clustering** Using BERT and LDA to define an embedding function of each utterances coming from both early and mature cohort Use density clustering to define clusters where early cohort out-populates mature cohort in density to get X (4) Send X (4) for human annotation to get T (4) = {(xi, yi)|xi ∈ X (4)} |X(4)| i=1 7) **Train a new model** F ′on L ′ = *L ∪ T* **on top of** F The algorithm for sampling strategies in the first phase is given in Table 1, where the aim is to select utterances with highest values for early cohort. The final set of the utterances is the union of the four sampling strategies. ## A.2 Metrics Semantic error rate (SEMER), is a metric used in offline evaluation where model prediction on domain/intent/slots is compared to human annotations. SEMER considers substitution error (S), insertion error (I), deletion error (D) at intent and slot level, and number of correct intents/slots classification (C) (see Equation 1). $$\begin{array}{l}{SEMER=\frac{\#errors}{\#referenceslots}}\\ {=\frac{(S+I+D)}{(C+I+S+D)}}\end{array}\tag{1}$$ ICER stands for Intent classification error, a metric calculated as the percentage of utterances containing an error intent classification error divided by total number of samples in this intent. BPS represent the percent increase/decrease from the current value. Friction refers to instances where a model does not understand the user or can not action the user's request.
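As a worked illustration of the metrics defined in this appendix, the small sketch below (our own, not production code) computes SEMER from the substitution, insertion, deletion, and correct counts of Equation 1, and ICER as the share of utterances with a misclassified intent.

```python
def semer(substitutions, insertions, deletions, correct):
    """Semantic error rate: (S + I + D) / (C + I + S + D), cf. Equation 1."""
    errors = substitutions + insertions + deletions
    return errors / (correct + errors)

def icer(num_intent_errors, num_utterances):
    """Intent classification error rate: fraction of utterances with a wrong intent."""
    return num_intent_errors / num_utterances

# Hypothetical counts: 30 substitutions, 10 insertions, 5 deletions, 455 correct
print(semer(30, 10, 5, 455))   # 0.09 -> 9% semantic error rate
print(icer(12, 400))           # 0.03 -> 3% intent classification error rate
```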
mullick-etal-2023-content
Content Moderation for Evolving Policies using Binary Question Answering
https://aclanthology.org/2023.acl-industry.54
Content moderation on social media is governed by policies that are intricate and frequently updated with evolving world events. However, automated content moderation systems often restrict easy adaptation to policy changes and are expected to learn policy intricacies from limited amounts of labeled data, which makes effective policy compliance challenging. We propose to model content moderation as a binary question answering problem where the questions validate the loosely coupled themes constituting a policy. A decision logic is applied on top to aggregate the theme-specific validations. This way the questions pass theme information to a transformer network as explicit policy prompts, which in turn enables explainability. This setting further allows for faster adaptation to policy updates by leveraging zero-shot capabilities of pre-trained transformers. We showcase improved recall for our proposed method at 95% precision on two proprietary datasets of social media posts and comments respectively annotated under curated Hate Speech and Commercial Spam policies.
# Content Moderation For Evolving Policies Using Binary Question Answering Sankha Subhra Mullick, Mohan Premchand Bhambhani, Suhit Sinha, Akshat Mathur, Somya Gupta, Jidnya Shah LinkedIn, India {smullick, mbhambha, ssinha3, amathur, sgupta9, jidshah}@linkedin.com ## Abstract Content moderation on social media is governed by policies that are intricate and frequently updated with evolving world events. However, automated content moderation systems often restrict easy adaptation to policy changes and are expected to learn policy intricacies from limited amounts of labeled data, which make effective policy compliance challenging. We propose to model content moderation as a binary question answering problem where the questions validate the loosely coupled themes constituting a policy. A decision logic is applied on top to aggregate the themespecific validations. This way the questions pass theme information to a transformer network as explicit policy prompts, that in turn enables explainability. This setting further allows for faster adaptation to policy updates by leveraging zero-shot capabilities of pre-trained transformers. We showcase improved recall for our proposed method at 95% precision on two proprietary datasets of social media posts and comments respectively annotated under curated Hate Speech and Commercial Spam policies. ## 1 Introduction Social media platforms use content moderation to safeguard users from abuse, harassment, malicious attacks, spam, etc. This moderation process is governed by a set of community policies1. For example, to shield the users from undesired spammy advertising of illegal products/services, social media platforms generally maintain a Commercial Spam (CS) policy2. The large volume of content 1The professional community policy maintained by LinkedIn https://www.linkedin.com/legal/ professional-community-policies or Facebook Community standards https://transparency.fb.com/en-gb/ policies/community-standards/ 2 LinkedIn's illegal, dangerous, and inappropriate commercial policy: https://www.linkedin.com/help/ linkedin/answer/137373 and similarly Facebook's commerce policy: https://www.facebook.com/policies_ center/commerce/ generated on social media platforms necessitates building automated systems for content moderation to scale policy-specific validations. (Fortuna and Nunes, 2018; MacAvaney et al., 2019). Traditionally, automated content moderation systems are mostly binary classifiers (Hovold, 2006; Sakkis et al., 2001) often aided by pre-processing (Naseem et al., 2021) and additional tasks such as intent identification (Agarwal and Sureka, 2017). Recent approaches involve fine-tuned Large Language Models (LLMs) (Caselli et al., 2020; Tan et al., 2020), putting attention on suspicious pieces of text (Pavlopoulos et al., 2017), or reformulating the problem as multi-task learning (Kapil and Ekbal, 2020) or natural language inference (Yin et al., 2019; Goldzycher and Schneider, 2022). However, policy compliance in automated content moderation still remains a challenge due to two primary reasons: (1) The governing policies are likely to contain intricacies arising from various aspects like content-specific edge cases, context driven interpretations, and exceptions. For example, Table 1 documents a typical Hate Speech policy (prohibiting hateful contents targeting inherent traits such as gender, race etc.) that can have complicated samples where decision making is difficult. 
(2) To keep up with world events and their direct impact on content distribution, policies may need to be updated somewhat frequently. The common industry practice considers policy as a single atomic concept and formulates content moderation as binary classification problem. Here the policy appears to the classifier as a black-box abstract concept yet it is expected to learn even the minute intricacies of the policy only through the labeled data. This leads to three major production challenges: (1) Labeled data are limited in quantity. (2) The non-stationary distribution of content on social media continuously evolves in response to world events resulting in label and concept drift (Gama et al., 2014; Yamazaki et al., 2007). (3) Table 1: The intricacies inherent to an example content moderation policy such as Hate Speech. ![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png) Table 2: Example of update in commercial spam policy. There is no direct way to reuse an existing model following policy update. Instead, one has to reannotate data for the updated policy and develop a fresh model. From industry perspective, this incurs additional labeling and development cost leading to compliance delays that leaves the user on the platform less protected for a prolonged period. To get a better understanding of content policies we take an example CS policy that prohibits advertising/selling of illegal products from any of the three categories (hereafter called themes), namely Human Body Parts, Recreational Drugs, and Cryptocurrency (see Table 2). Thus, a policy can be seen as a collection of loosely coupled themes (the smallest, logically coherent, and well defined granularity of a policy) threaded together by a decision logic (here if the content violates any of the themes it will be marked as commercial spam). Breaking down a policy into themes has two benefits. (1) Themes are independent and focused thus they tend to be less ambiguous. (2) A policy update boils down to addition of new themes or removal of old themes with changes in the decision aggregation logic. For example, an updated CS policy may clear Cryptocurrency and introduce Pharmaceuticals as a new prohibited item (see Table 2). When we consider the policy as a set of themes combined by a decision logic, it enables us to formulate the task of policy compliance as a binary Question Answering problem (Clark et al., 2019) that leverages a pre-trained Large Language Model (LLM) as described in Figure 1. This formulation has four advantages. (1) The theme information can be passed to the LLM in the form of explicit prompts, in this case, binary questions (answered Yes or No). This is similar to a prompt-based learning (Liu et al., 2023) approach that enables a better understanding of the policy. (2) Prompting enables leveraging zero-shot capabilities of LLMs for understanding the question-content relation to validate less prevalent or newly added themes with no ![1_image_2.png](1_image_2.png) or few data samples. (3) The decision logic gets decoupled from the model. This simplifies learning and enables fast adaptation to policy changes. (4) The individual theme validations provide explainability useful for fine-grained monitoring and performance tuning (can be used for transparency and fairness requirements for social media). The key highlights of this paper are as follows:. (1) In Section 3, we propose a binary Question Answering based Content Moderation (QnA-CM) system. 
Here, we leverage the policy structure that allows reformulating the problem of content moderation as a generic task of binary QnA. Going beyond Clark et al. (2019); Saeidi et al. (2021a) that deal with more syntactical and factual questions, with QnA-CM we aim to answer semantically involved theme-validations. (2) Contrary to BoolQ (Clark et al., 2019) in QnA-CM to maintain diversity and limit class imbalance in the training set, we undertake a sampling strategy detailed in Section 3. (3) We further propose a scalable multi-level inference strategy in Section 3 that enables QnACM to perform at near computational cost of binary classifiers while offering greater explainability. (4) Using questions QnA-CM leverages explicit policy knowledge and consequently gains agility to policy changes in a zero-shot manner, as demonstrated in a simulation study in Section 4. ## 2 Related Works Content moderation systems usually employ binary (Sakkis et al., 2001) or multi-class classifiers (Founta et al., 2018) to label content. A binary classifier ignores themes altogether by treating policy as a black box seen through the lens of spam ![2_image_0.png](2_image_0.png) and clear labels. Even though a multi-class classifier may consider themes it is not agile to policy changes. With the advent of LLMs (Devlin et al., 2018), models like TextCNN (Kim, 2014), XGBoost (Chen and Guestrin, 2016) using external word embeddings like Glove (Pennington et al., 2014) have been outperformed. Notable LLMbased content moderation systems, primarily designed for Hate Speech detection, usually fail in production due to language-specific interjections (Nozza, 2021), limited data availability (Uzan and HaCohen-Kerner, 2021), demand for fine grained subjective labels (Mollas et al., 2022), theme imbalance (Plaza-Del-Arco et al., 2021), and high response time (Goldzycher and Schneider, 2022). A pre-trained LLM can be fine-tuned to perform text understanding tasks such as binary question answering as in BoolQ (Clark et al., 2019). However, BoolQ is trained to answer in Yes or No responses to content-specific factual questions thus cannot be directly applied to the task of policy compliance. Even though BoolQ inspired question answering along with rule based deductive reasoning have found some success in content validation (Saeidi et al., 2021b; Saeed et al., 2021), they use simple clearly defined policies and did not investigate the applicability in content moderation that requires answering semantically involved questions. To elaborate, policies governing content moderation often use legal language that are difficult to process by LLMs (Moro and Ragazzi, 2022; Khazaeli et al., 2021; Ravichander et al., 2019). For short, focused, and well defined insurance policies, expressing the rules as decision trees may be useful (Kotonya et al., 2022) but that neither extends to capture the intricacies in social media content moderation policies or formally characterize their updates. ## 3 Methodology Preliminaries: Let us take a set X = {x1, x2, *· · ·* , xn} of n text contents paired with a set of labels YP = {*Spam, Clear*} n where P is the underlying policy. Due to the likely imbalance between s Spam and c Clear contents n = c + s and c = rs where r > 1 is the imbalance ratio. A policy P as mentioned in Section 1 usually consists of a set of themes T and a decision logic (combination of logical operators like AND, OR, etc.) D to combine the theme-specific validations to reach a final Spam or Clear label or P = (*T, D*). 
To elaborate, content policies in social media typically have a primary theme of "common intent" in T. For example, in CS this can be "sale of prohibited items" or in hate speech a "hateful sentiment". Evidently, if this primary theme is violated then only it makes sense to proceed with the checks on the other themes for finding a spam. Each of the other themes individually covers a certain "specific" rule under the policy such as a regulated product like recreational drugs in CS and inherent traits like gender in hate speech. The policy provides the decision tree D that in its commonly preferred form marks a content Spam if "common intent" is violated along with (logical AND) any (i.e. logical OR) other specific theme is contravened. Now P when circulated to the user, should phrase T as a guideline to assist the user in creating good quality content. However, when P is provided to human reviewers, T can be rephrased as questions Q as that is more intuitive for validating a content. For example, a publicly circulated Hate Speech policy may state "Please do not create hateful contents that target the inherent traits of an individual or group." To a reviewer this may be rephrased as a set of binary answerable questions Q along with a decision logic D to aggregate the answers, as shown in Figure 2. Similarly, in QnACM mimicking a human reviewer we rephrase T as Q such that the answers of Q can be logically combined by D. Formally, for a policy with k themes the set Q = {q1, q2, *· · ·* , qm} contains m validatory questions. Here, m ≥ k and equality is achieved when each theme has exactly one validating question. The decision tree D is a boolean function that maps from {0, 1} m to {0, 1} using logical operators. We are representing Yes as 1 and No as 0 to match the implementation while ⊗ and ⊕ respectively denote the logical operators AND and OR. Binary Question Answering Using LLMs: We follow from LLM classifiers (Devlin et al., 2018) and BoolQ (Clark et al., 2019), for validating a content x ∈ X against a question q ∈ Q. The input i concatenates [CLS], q, [SEP], and x in the order, where [CLS] and [SEP] are special tokens. The output f(i) of the LLM f summarizes the input i in the feature space at f i[CLS] i.e. the dimension corresponding to the [CLS] input token (Devlin et al., 2018). This f i[CLS] is sent through fully-connected layers to map to the two classes (Clark et al., 2019). After applying softmax to the logits, the model will output probability scores P r(1|(q, x)) and P r(0|(q, x)) respectively for 1 (Yes) and 0 (No) responses. A threshold θ converts the probability scores to binary labels. This can be trained end-to-end with a loss such as binary cross-entropy. Figure 4 illustrates the architecture. Training of QnA-CM: In the QnA-CM training set each sample is a question-content pair with 1 (Yes) or 0 (No) label. To form such a training data we ask the same set of questions Q to every content x in X. However, this may result in severe class imbalance depending on r and m. For simplicity without loss of generality, let us assume k = m, a spam content violates only one theme, and a positive theme violation corresponds to a Yes answer only. Thus, we have a total of sm(r + 1) questions in the training data, where s of them are answered by Yes and the rest (s(m − 1) from spam and rsm from clear) are answered by No, resulting in an imbalance of (m + mr − 1). A naive solution of sampling random No answering question-content pairs may not provide a quality training set. 
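Before turning to how the No-labelled pairs are sampled, the sketch below makes the scoring and aggregation steps described above concrete: a single (question, content) pair is encoded as [CLS] q [SEP] x, a two-class head yields P(Yes), and the theme answers are combined by a decision logic D. It assumes the Hugging Face transformers API with a generic pre-trained checkpoint and an example decision rule; it is only an illustration of the formulation, not the production QnA-CM model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # placeholder backbone, not the paper's fine-tuned LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def answer(question, content, theta=0.5):
    """Return 1 (Yes) if P(Yes | question, content) exceeds the threshold theta."""
    # Sentence-pair encoding produces [CLS] question [SEP] content, as described above.
    inputs = tokenizer(question, content, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    p_yes = probs[0, 1].item()   # index 1 taken as the "Yes" class by convention
    return int(p_yes >= theta)

def decision_D(answers):
    """Example decision logic: common intent violated AND any specific theme violated."""
    return answers["common_intent"] and any(
        v for k, v in answers.items() if k != "common_intent"
    )
```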
We sample diverse question-content pairs with label 0 (No) in three ways: **(S1)** Pair a Clear sample with a random question with probability νn and assign it a 0 (No) label. **(S2)** Take a Spam x that answers Yes to qj. Pair x with probability νs with any q ∈ Q \ {qj} and label it as 0 (No). **(S3)** Use theme-specific weak classifiers using models like TextCNN or pre-trained natural language inference models like BART (Lewis et al., 2019) to find the qj with highest confidence (above ω) that matches with a Clear sample x. With a probability νh, pair x with qj and label it as 0 (No). The "common intent" of a policy is expressed through the top-level spam and clear labels. To utilize this additional information and learn the commonalities across themes to aid zero-shot generalization, we sample Spam and Clear contents respectively with probability ν+ and ν− and pair them with a "common intent" validating question in the training set. Note that, in the process of building the training set, we introduce five new data dependent hyperparameters in QnA-CM, namely νs, νh, νn, ν+, ν−, and ω. Inference using QnA-CM: We propose a scalable multi-level inference strategy for QnA-CM as described in Figure 3. Here we exploit two peculiarities of the content moderation ecosystem. (1) Spammy content is commonly very less frequent than clear content. (2) In our two-level inference strategy. In the first level L1, we match the content against a single question representing the common intent of the policy. Only if the content is matched in L1, we proceed to the second level of L2 to validate it against all the m theme-specific questions, otherwise we directly mark it as clear. This way only the potential spammy content will be validated against all m theme-validating questions while the rest will be cleared in L1 with a similar computation cost of a binary classifier. In other words, the computational overhead will only be zm/(1+r) times in practice where z is the ratio of potential spam to actual spam content. Typically in production *r >> z* (tuning distinct θs for L1 and L2 offers finer control over z) and m is not large thus zm/(1 + r) remains close to 1, thus asserting ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) ## 4 Experiments Experimental Protocol: We use two proprietary long-tailed text datasets, both sampled from content publicly posted on social media during 2021-2023 (both are sampled from the same social media to maintain consistency). **(D1)** Comments in four languages namely English, Spanish, French, and Portuguese. **(D2)** Text part of Feed Posts made in English. For our experiments, we curate two typical policies by selectively amalgamating the ones used by various social media platforms. **(P1)** A CS policy that contains 17 themes (described in terms of Q and D in Appendix A.2). **(P2)** A Hate Speech policy that employs hateful Q and D as shown in Figure 2. The D1 dataset is validated against P1 while D2 is labeled with P2 (datasets are detailed in Appendix A.3). We take 4 English and 2 multilingual pre-trained LLMs for our experiments (listed in Appendix A.4 with additional network architecture and hyperparamater details). For all the experiments we compare a fine-tuned LLM-based baseline binary classifier that maps the [CLS] token embedding of the backbone to Spam or Clear labels by a multi-layer perceptron network againstst a QnA-CM model with the same backbone. 
Content moderation systems in production aim to achieve better recall with high precision such that the users (and reviewers) are minimally affected by false positives. Thus, we use recall value at 95% precision level for comparing among contenders (note that accuracy is not suitable given the class imbalance while GMean is not informative as content moderators do not need to focus on true negatives (Mullick et al., 2020)). We demonstrate performance of QnA-CM across two dimensions. (1) To train with the full dataset, we retain all the themes in training, validation, and test set mimicking a long standing static policy. Here we aim to evaluate how well QnA-CM employs prompting through questions to handle the long tallied theme distribution. (2) For the CS policy, we simulate a policy change (Hate Speech policy is commonly static as the inherent traits are well defined) where new themes are introduced in the policy (see Appendix A.5). Thus, we train the model only on previously existing themes while inferring on newly introduced ones to evaluate how QnA-CM employs prompting and zero-shot capability of LLM to adapt to policy update. | LLM | Full Data | Policy Change | Full Data | |---------------|-------------|-----------------|-------------| | (CS) | (CS) | (Hate Speech) | | | Baseline | 41.36 | 11.38 | 78.76 | | QnA-CM (Ours) | 48.17 | 33.68 | 85.90 | | Full data (Precision, Recall) Cryptocurrency Occult | Precious Metals | | |----------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|----------------| | 0.9867, 0.6394 | 0.9775, 0.5829 | 0.9762, 0.5351 | | Newly themes after policy change (Precision, Recall) Animal Products Fabricated Items Human Body Parts 0.6675, 0.5218 1.000, 0.2720 0.3334, 0.2113 | | | Table 3: Average performance of QnA-CM in terms of R@95P compared to the baseline binary classifiers. Performance of QnA-CM Compared to Binary Classifiers: We can observe from Table 3 (detailed in Table 12) that for both the policies, QnA-CM is achieving a better performance on average over the four backbone LLMs than the baseline binary classifiers in terms of R@95P. The performance improvement is more apparent in the case of simulated policy changes in CS, indicating better adaptability of QnA-CM to policy updates. Further, in the training with full data, the better performance of QnA-CM attests to the usefulness of theme-specific knowledge prompted through the questions. Table 4: Theme wise Precision and Recall at 95% policylevel Precision for QnA-CM using BERT-Large. Explainability of QnA-CM: In Table 4 we report the performance at 95% policy-level precision for three highly prevalent themes after training QnACM with full data. The Table 4 also lists down the metrics at 95% policy-level precision for three themes newly introduced through policy change. This theme-level performance provided by QnACM allows finer monitoring and tuning. For example, we can prioritize data collection to improve recall for Human Body Parts while increasing the respective threshold can provide better precision. Table 5: Ablation study for the QnA-CM learning strategy using BERT-Large. Results are in terms of R@95P. Strategies S1, S2, and S3 are detailed in in Section 3. Ablation Study: To understand how the proposed training strategy of QnA-CM aids in learning we perform an ablation study using the D1 English Comments datasets labeled with CS policy. 
In Table 5 we see that the performance of QnA-CM improves greatly as hard "No"-answering questions are added on top of the randomly selected ones (i.e., S1+S2+S3 as in Section 3). QnA-CM benefits further, especially in a policy-update situation, when the policy-level question for the "Common Intent" is additionally used during training.

Table 5: Ablation study for the QnA-CM learning strategy using BERT-Large. Results are in terms of R@95P. Strategies S1, S2, and S3 are detailed in Section 3.

| Strategy | Full Data (CS) | Policy Change (CS) |
|-----------------------------|----------------|--------------------|
| With S3 | 19.38 | 8.65 |
| With S1+S2+S3 | 35.47 | 22.96 |
| With S1+S2+S3+Common Intent | 40.98 | 28.77 |

Multilingual Inference on the D1 Comments Dataset: We fine-tune two multilingual LLMs for QnA-CM using the D1 English Comments training dataset. However, we infer on the samples from the three other languages along with the English test set. We keep the questions in English to validate how well QnA-CM can adapt to multilingual content without explicit multilingual fine-tuning. We see from Table 6 that QnA-CM elevates the performance of the models compared to the baselines, indicating the usefulness of prompt-based learning through questions even for bilingual (question, content) inputs.

Table 6: Performance of QnA-CM compared to the baseline on the D1 multilingual Comments dataset.

| Algorithm | R@95P |
|-------------------------------------------|-------|
| Baseline with BERT-Base-Multilingual | 8.71 |
| Baseline with XLM-RoBERTa-Base | 16.95 |
| QnA-CM with BERT-Base-Multilingual (Ours) | 27.62 |
| QnA-CM with XLM-RoBERTa-Base | 36.57 |

Importance of Prompted Learning: We showcase the benefits of our prompt-based learning framework QnA-CM on Hate Speech detection. In real-world industry settings, Hate Speech detection is often plagued with false positives that arise because binary classifiers get confused between "hateful" content and "hate speech". This distinction is important for a fair assessment of content severity and user regulation. Therefore, understanding the inherent traits along with the hateful intent becomes crucial, which can be achieved via the questions in QnA-CM. We demonstrate this setting with two experiments. (1) At the 95% precision level, we compare the average theme-specific recall over the four backbone LLMs for each of the 6 inherent traits on the D2 Posts dataset labeled against the Hate Speech policy. From Table 7 (full results in Table 13) we can see that QnA-CM performs better in all cases irrespective of the imbalance, validating that the questions provide useful information to the model. (2) We take 116 English posts from social media that annotators marked as hateful but that do not target any inherent traits and thus are not Hate Speech. Table 7 (and Table 14 in the Appendix) shows QnA-CM achieving disentanglement between "hateful" and the inherent traits, thus offering a lower false positive rate for Hate Speech.

Table 7: The importance of prompted learning through questions for Hate Speech in QnA-CM. (1) Recall for each inherent trait of Hate Speech at 95% Precision; (2) False Positive Rate for 116 hateful but not Hate Speech posts.

| (1) Inherent Traits | QnA-CM (Ours) | Baseline |
|------------------------------------------------------------------------|---------------|----------|
| Ethnicity, National Origin, Religious Affiliation, Immigration Status | 88.18 | 80.90 |
| Race | 92.29 | 82.69 |
| Sex, Gender Identity | 75.00 | 69.23 |
| Sexual Orientation | 75.00 | 71.65 |
| Caste | 100 | 100 |
| Disability Status | 100 | 25 |

| (2) Performance Metric | QnA-CM (Ours) | Baseline |
|------------------------------------|---------------|----------|
| FPR at 95% Hate Speech Precision | 0.40 | 0.71 |
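The disentanglement experiment above boils down to a simple decision rule at inference time: a post is flagged as Hate Speech only when the "hateful" question and at least one inherent-trait question both answer Yes. A minimal sketch of that rule follows; the question names and thresholds are illustrative placeholders (the paper sets its thresholds for 95% policy-level precision), not the deployed logic.

```python
def classify_hate_speech(scores, hateful_threshold=0.5, trait_threshold=0.5):
    """scores: dict mapping question name -> P(Yes) produced by the QnA model."""
    is_hateful = scores["hateful"] >= hateful_threshold
    targeted_traits = [
        trait for trait, s in scores.items()
        if trait != "hateful" and s >= trait_threshold
    ]
    if is_hateful and targeted_traits:
        return "Hate Speech", targeted_traits
    if is_hateful:
        return "Hateful but not Hate Speech", []
    return "Clear", []

# A post judged hateful but targeting no inherent trait stays out of Hate Speech:
example_scores = {"hateful": 0.95, "race": 0.06, "caste": 0.00, "disability": 0.00}
print(classify_hate_speech(example_scores))
```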
## 5 Conclusion

In this work, we model content moderation as a binary question answering problem where the questions act like prompts validating the various themes belonging to a content policy. This further allows faster adaptation to policy updates by leveraging the zero-shot capabilities of pre-trained transformers. Our experiments show a 7% absolute improvement in recall over the binary classification setting. In the case of policy updates we achieve a 22% absolute recall improvement as well, without any additional training. Furthermore, we show improved recall on multilingual data with QnA-CM fine-tuned only on English. We also show improved recall and reduced false positives for Hate Speech using QnA-CM. All of these facilitate an agile response to policy updates by prompt injection, thus limiting member exposure to spam. In the future, we aim to investigate the applicability of open source datasets with recently developed large generative models that offer high natural language understanding and prompt-driven zero-shot capabilities, such as the GPT series (Brown et al., 2020) or LLaMA (Touvron et al., 2023), in the QnA-CM framework.

## References

Swati Agarwal and Ashish Sureka. 2017. Characterizing linguistic attributes for automatic classification of intent based racist/radicalized posts on tumblr microblogging website. *arXiv preprint arXiv:1701.04931*.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901.

Tommaso Caselli, Valerio Basile, Jelena Mitrović, and Michael Granitzer. 2020. Hatebert: Retraining bert for abusive language detection in english. *arXiv preprint arXiv:2010.12472*.

Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pages 785–794.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. *arXiv preprint arXiv:1905.10044*.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. *arXiv preprint arXiv:1911.02116*.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.

Paula Fortuna and Sérgio Nunes. 2018. A survey on automatic detection of hate speech in text. *ACM Computing Surveys (CSUR)*, 51(4):1–30.
Antigoni Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Twelfth International AAAI Conference on Web and Social Media. language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. João Gama, Indre Žliobait ˙ e, Albert Bifet, Mykola Pech- ˙ enizkiy, and Abdelhamid Bouchachia. 2014. A survey on concept drift adaptation. ACM computing surveys (CSUR), 46(4):1–37. Janis Goldzycher and Gerold Schneider. 2022. Hypothesis engineering for zero-shot hate speech detection. In *Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)*, pages 75– 90, Gyeongju, Republic of Korea. Association for Computational Linguistics. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654. Johan Hovold. 2006. Naive Bayes spam filtering using word-position-based attributes and length-sensitive classification thresholds. In Proceedings of the 15th Nordic Conference of Computational Linguistics (NODALIDA 2005), pages 78–87, Joensuu, Finland. University of Joensuu, Finland. Prashant Kapil and Asif Ekbal. 2020. A deep neural network based multi-task learning approach to hate speech detection. *Knowledge-Based Systems*, 210:106458. Soha Khazaeli, Janardhana Punuru, Chad Morris, Sanjay Sharma, Bert Staub, Michael Cole, Sunny ChiuWebster, and Dhruv Sakalley. 2021. A free format legal question answering system. In *Proceedings of* the Natural Legal Language Processing Workshop 2021, pages 107–113, Punta Cana, Dominican Republic. Association for Computational Linguistics. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In *Proceedings of the* 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Neema Kotonya, Andreas Vlachos, Majid Yazdani, Lambert Mathias, and Marzieh Saeidi. 2022. Policy compliance detection via expression tree inference. arXiv preprint arXiv:2205.12259. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. PloS one, 14(8):e0221152. Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, and Grigorios Tsoumakas. 2022. Ethos: a multi-label hate speech detection dataset. 
Complex & Intelligent Systems, pages 1–16. Gianluca Moro and Luca Ragazzi. 2022. Semantic self-segmentation for abstractive summarization of long legal documents in low-resource regimes. In Proceedings of the Thirty-Six AAAI Conference on Artificial Intelligence, Virtual, volume 22. Sankha Subhra Mullick, Shounak Datta, Sourish Gunesh Dhekane, and Swagatam Das. 2020. Appropriateness of performance indices for imbalanced data classification: An analysis. *Pattern Recognition*, 102:107197. Usman Naseem, Imran Razzak, and Peter W Eklund. 2021. A survey of pre-processing techniques to improve short-text quality: a case study on hate speech detection on twitter. *Multimedia Tools and Applications*, 80(28):35239–35266. Debora Nozza. 2021. Exposing the limits of zero-shot cross-lingual hate speech detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 907–914. John Pavlopoulos, Prodromos Malakasiotis, and Ion Androutsopoulos. 2017. Deeper attention to abusive user content moderation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1125–1135, Copenhagen, Denmark. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference* on empirical methods in natural language processing (EMNLP), pages 1532–1543. Flor Miriam Plaza-Del-Arco, M Dolores MolinaGonzález, L Alfonso Ureña-López, and María Teresa Martín-Valdivia. 2021. A multi-task learning approach to hate speech detection leveraging sentiment analysis. *IEEE Access*, 9:112478–112489. Abhilasha Ravichander, Alan W Black, Shomir Wilson, Thomas Norton, and Norman Sadeh. 2019. Question answering for privacy policies: Combining computational and legal perspectives. *arXiv preprint* arXiv:1911.00841. Mohammed Saeed, Naser Ahmadi, Preslav Nakov, and Paolo Papotti. 2021. Rulebert: Teaching soft rules to pre-trained language models. arXiv preprint arXiv:2109.13006. Marzieh Saeidi, Majid Yazdani, and Andreas Vlachos. 2021a. Cross-policy compliance detection via question answering. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 8622–8632, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Marzieh Saeidi, Majid Yazdani, and Andreas Vlachos. 2021b. Cross-policy compliance detection via question answering. *arXiv preprint arXiv:2109.03731*. Georgios Sakkis, Ion Androutsopoulos, Georgios Paliouras, Vangelis Karkaletsis, Constantine D. Spyropoulos, and Panagiotis Stamatopoulos. 2001. Stacking classifiers for anti-spam filtering of E-mail. In *Proceedings of the 2001 Conference on Empirical* Methods in Natural Language Processing. Ravindra Singh and Naurang Singh Mangat. 1996. Stratified Sampling, pages 102–144. Springer Netherlands, Dordrecht. Fei Tan, Yifan Hu, Changwei Hu, Keqian Li, and Kevin Yen. 2020. TNT: Text normalization based pretraining of transformers for content moderation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4735–4741, Online. Association for Computational Linguistics. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. 
Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Moshe Uzan and Yaakov HaCohen-Kerner. 2021. Detecting hate speech spreaders on twitter using lstm and bert in english and spanish. In *CLEF (Working* Notes), pages 2178–2185. Keisuke Yamazaki, Motoaki Kawanabe, Sumio Watanabe, Masashi Sugiyama, and Klaus-Robert Müller. 2007. Asymptotic bayesian generalization error when training and test distributions are different. In Proceedings of the 24th international conference on Machine learning, pages 1079–1086. ![8_image_0.png](8_image_0.png) ## A Appendix A.1 Model Architecture A.2 Policies Curated For The Experiments Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. arXiv preprint arXiv:1909.00161. The model architecture used for QnA-CM is illustrated in the following Figure 4. Figure 4: The QnA-CM architecture takes a question and content pair as input while using the output corresponding to the [CLS] token to map to the answer. We have generated a CS policy for our experiments that marks a content as Spam if it comes with a primary common intent of "promoting, facilitating access to, distributing, or attempting to sell" any of the 17 types of illegal or regulated products or services (that correspond to 17 themes). Essentially Q contains one question for the common intent and 17 others covering the individual themes. The decision tree D is similar to Figure 2 where the themes are aggregated by logical OR and combined with the policy intent with a logical AND. Moreover, the inference directly follows from Figure 3. We formulate this CS policy by taking inspiration from the community policies publicly circulated by LinkedIn, Facebook (see footnote 2) and Twitter (https://business.twitter.com/en/help/ ads-policies.html). Note that, this CS policy is not an exact copy of any of the three social media but rather a selective amalgamation, with additions such as Cryptocurrency, Occult, etc and removal of themes like unauthorized sale of digital media, adult contents etc. For the Hate Speech policy we have considered the traditional sense i.e. hateful content targeting inherent traits of person or group. Again this is inspired by the LinkedIn Hateful and Derogatory Content Policy (https://www.linkedin. com/help/linkedin/answer/a1339812), Facebook Hate Speech Policy (https: //transparency.fb.com/en-gb/policies/ community-standards/hate-speech/), and Twitter policy on Hateful Conduct (https:// help.twitter.com/en/rules-and-policies/ hateful-conduct-policy). However, like the CS policy the Hate Speech policy curated by us does not directly follow any of the social media in particular rather it combines the essence of the three, focussing on some key inherent traits only. Moreover, we have observed that even highly capable LLMs like BART (Lewis et al., 2019) often get confused with particularly identifying fine grained inherent traits. For example, it is a complicated problem for the LLMs to disambiguate between Ethnicity, National Origin, Religious Affiliation, and Immigration Status as they often occur together and can even be interpreted synonymously in the same content. Similarly it has been found that LLMs are not powerful enough to properly distinguish between gender and sex as different inherent traits. Hence, while creating the question set as described in Figure 2 we joined these fine grained inherent traits together. 
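As a concrete illustration of the decision tree D described above for the CS policy, where theme answers are aggregated by logical OR and then combined with the common-intent answer by logical AND, the following sketch shows the aggregation step. The function and question identifiers are hypothetical placeholders, not the production decision-tree implementation.

```python
def evaluate_policy_decision(answers):
    """Aggregate per-question Yes/No answers into a policy-level verdict.
    `answers` maps question ids to booleans; 'common_intent' is the top-level
    question and every other entry is a theme-specific question."""
    common_intent = answers["common_intent"]
    any_theme = any(v for k, v in answers.items() if k != "common_intent")
    return "Spam" if (common_intent and any_theme) else "Clear"

# e.g. a post promoting cryptocurrency investments:
print(evaluate_policy_decision(
    {"common_intent": True, "cryptocurrency": True, "occult": False}
))  # -> "Spam"
```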
One key challenge for QnA-CM is to formulate questions from the policy definition as the same statement can be rephrased in multiple ways. To remove this possible ambiguity during training we left the question design to the unanimous decision by a team of 5 reviewers who are well acquainted with the two curated policies. We felt this is a reasonable approach as this directly reflects the human reviewers' understanding of a content policy and best aids the mimicking of that in QnA-CM. ## A.3 D1 Comments And D2 Posts Datasets For both the D1 Comments and D2 Posts dataset, each content is labeled against the respective policy by two annotators and conflicts are resolved through a third opinion. We have used a group of 5 annotators who are all trained on the two curated policies. The D1 Comments dataset has three primary features to replicate real world scenarios. (1) The distribution of examples over the languages is long tailed i.e. there is imbalance in the dataset across languages (2) The distribution of the samples over the themes is also long tailed for each of the four languages. (3) There is an imbalance between the number of Spam and Clear instances. We achieve this by using a couple of multinomial distributions respectively with distinct language and ![9_image_0.png](9_image_0.png) theme selection probabilities along with a biased Bernoulli distribution. The final data distribution over the language, content labels, and themes is documented in Figure 5. For the D2 Posts dataset we have applied a strategy similar to the one used for Comments. Here the dataset contains only the text part of the English posts. Similar to comments here also we aim to maintain two key features to mimic real-life scenarios. (1) The distribution of the samples over the 6 inherent themes is long tailed. (2) The number of Spam examples is less than that of the Clear ones. Thus, we use a multinomial distribution with different probabilities for each theme and a Bernoulli distribution biased to the Clear contents to sample our Posts dataset. We partition the D1 English Comments and D2 Posts datasets intro training, validation, and test sets by theme level stratified sampling (Singh and Mangat, 1996) The distributions of samples over the three sets for these two datasets are described in the following Table 10. ## A.4 Model Details For Qna-Cm In this study we employ the general purpose BERT-Large (Devlin et al., 2018), RoBERTa-Large (Liu et al., 2019), Albert-Large-V2 (Lan et al., Table 8: The CS policy curated for our experiments has a total of 17 themes along with a top-level policy-specific question such that common knowledge can be shared across intra-policy themes. | Top-level policy concept | Question | |----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Commercial Spam policy | Does the text mention promoting, facilitating access to, distributing, or attempting to sell, illegal or regulated goods or | | Common Intent | services? | | Theme | Question | | Cryptocurrency | Does the text mention about promotion or investment of cryptocurrencies, NFTs, or forex trading? | | Occult | Does the text mention dream interpretation, individual horoscope, spell craft, black magic, love spells or witchcraft? 
| | Pharmaceuticals | Does the text mention about prescription drugs, ingestible supplyments, weight loss products, vitamins, sexual enhancement drugs, herbal medication, steroids, face creams, medical devices to diagnose, cure or treat a disease? | | Precious Metals | Does the text mention the purchase or sale of gold, diamond, platinum or fuel? | | Recreational Drugs | Does the text mention Cannabis and its components such as CBD? | | Gambling | Does the text mention betting, online real money, casinos, poker, bingo, gambling or promotes gambling? | | Lottery | Does the text mention about lotteries, sweepstakes, and surveys for free goods? | | Circumvention | Does the text mention hacking resources or circumventing to get free access to video games, software, websites, bots to scrape data or artificially inflate data? | | Alcohol and Tobacco | Does the text mention alcohol, tobacco, rolling paper, hookah or electronic cigarettes? | | Animal Products | Does the text mention fur, skin, ivory, bones, horns, carcasses, and the sale of raw meat for consumption? | | Fabricated Items | Does the text mention about fabricated educational certificated, scraped data, proxy test taking or instructions to create forged documents? | | Illegal Drugs | Does the text mention illegal drugs like opioid, cocaine, meth, heroin, opium, MDMA, GHB, LSD or amphetamines? | | Human Body Parts | Does the text mention organs, blood, urine or for any organ donors? | | Mail Order Brides | Does the text mention a catalog of women for men to select for marriage? | | Weapons | Does the text mention weapons, firearms, or violent products or services? | | Counterfeit Items | Does the text advertise non genuine items as genuine or replica of real items such as rolex watches, pirated software? | | Human Exploitations | Does the text mention about extortion, sextortion, sex trafficking or human trafficking? | Table 9: Distribution of themes over the D2 English Posts dataset. | Inherent Traits | Percentage of Samples | |----------------------|-------------------------| | Ethnicity, National Origin, Religious Affiliation, Immigration Status | 62.60 | | Race | 12.19 | | Sex, Gender Identity | 15.40 | | Sexual Orientation | 9.30 | | Caste | 0.23 | | Disability Status | 0.17 | | Dataset | Split | Spam | Clear | |-----------|------------|--------|---------| | D1 | Training | 5828 | 39871 | | English | Validation | 698 | 4603 | | Comments | Test | 815 | 5316 | | D2 | Training | 1376 | 7870 | | English | Validation | 182 | 974 | | Posts | Test | 181 | 975 | Table 10: Distribution of samples over training, validation, and test sets for the two LinkedIn datasets. 2019), and DeBERTa-Large (He et al., 2020), along with multilingual models such as BERTBase-Multilingual (Devlin et al., 2018) and XLMRoBERTa-Base (Conneau et al., 2019), as the backbone LLM for both binary classifier and QnA-CM. All the LLM backbones used in QnA-CM are finetuned using the dataset under concern for a maximum number of 10 epochs with a learning rate of 1.00e-05 for the Adam (Kingma and Ba, 2014) optimizer. We measure the performance on the validation set after every 100 steps. An early termination criterion is used to check if the performance evaluation metrics such as Accuracy, precision, recall, and F1 score have not improved on the validation set for the last e consecutive evaluation steps. For the CS policy e is set to 10 and the same for Hate Speech is kept to 5, as those choices found to be performing well on average. 
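The early-termination rule described above can be captured in a few lines. This is a simplified sketch that monitors a single validation metric, with the patience e set per policy as stated (10 for CS, 5 for Hate Speech); it is not the exact training harness used in the paper.

```python
class EarlyStopper:
    """Stop fine-tuning when the monitored validation metric has not improved
    for `patience` consecutive evaluation steps (e = 10 for CS, 5 for Hate Speech)."""
    def __init__(self, patience):
        self.patience = patience
        self.best = float("-inf")
        self.bad_steps = 0

    def should_stop(self, metric_value):
        if metric_value > self.best:
            self.best = metric_value
            self.bad_steps = 0
        else:
            self.bad_steps += 1
        return self.bad_steps >= self.patience

# evaluated every 100 training steps, as in Appendix A.4
stopper = EarlyStopper(patience=10)
```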
The hyperparameters introduced in QnA-CM are tuned using grid search. The search space and the final choices for these hyperparameters for the D1 Comments and the D2 Posts datasets are detailed in Table 11. Table 11: Hyperparameter tuning in QnA-CM. ## A.5 Policy Update Simulation For Cs Policy To simulate an update for the CS policy, the low prevalent eight themes in the English dataset, namely Animals, Fabricated Items, Illegal Drug, Human Body Parts, Mail Order Bride, Weapons, Counterfeit Items, and Human Exploitation are removed from the training and validation set and added to the test set. Further, the corresponding validating questions for these eight themes are only used during inference. This is a viable strategy for simulating policy update as it is likely that the ex- | Name | D1 | D2 | Search | |----------------------------------------------------------------------------------------------------------------------------|-------|-------|------------------------------| | Comments | Posts | Space | | | νs | 1 | 0.5 | {0.5, 1}. | | ω | 0.2 | 0.3 | {0.2, 0.3} 1 . | | νh | 0.6 | 0.2 | {0.2, 0.4, 0.6, 0.8}. | | νn | 0.3 | 1 | {0.1, 0.3, 0.6, 0.9, 1}. | | ν+ | 0.3 | 0.3 | {0.1, 0.3, 0.5}. | | ν− | 0.1 | 0.1 | {0.05, 0.1, 0.15, 0.2}. | | θ1, θ2 | - | - | {0.501, 0.502, · · · 0.99} 2 | | 1 The threshold is set to be the same for all themes. 2 Model dependent, varied to optimize metric as per common practice. | | | | isting themes will have enough annotated samples while the same for the newly introduced themes will be less in number. | LLM | Full Data | Policy Change | Full Data | |--------------------------|-------------|-----------------|-------------| | (CS) | (CS) | (Hate Speech) | | | Baseline BERT-Large | 40.12 | 9.71 | 81.00 | | RoBERTa-Large | 43.25 | 13.52 | 75.41 | | ALBERT-Large-V2 | 34.38 | 10.75 | 77.09 | | DeBERTa-Large | 47.72 | 11.57 | 81.56 | | QnA-CM (Ours) BERT-Large | 40.98 | 28.77 | 87.15 | | RoBERTa-Large | 45.83 | 23.34 | 87.15 | | ALBERT-Large-V2 | 50.42 | 12.97 | 81.05 | | DeBERTa-Large | 55.48 | 69.66 | 88.26 | Table 12: Performance of QnA-CM in terms of R@95P compared to the baseline binary classifiers. ## A.6 Full Results The complete results for Table 3 is available in Table 12 while the same for Table 7 is reported in Table 13. Moreover, the scores of QnA-CM and Baseline for two examples that are hateful but not Hate Speech are detailed in Table 14. Table 13: Theme-specific Recall comparison of QnA-CM and Baseline on the D2 Posts dataset at 95% Policy-level Precision for each of the 6 inherent traits in Hate Speech policy. 
| LLM | Inherent Traits | QnA-CM (Ours) | Baseline | |-----------------------------------------------------------------------|-----------------------------------------------------------------------|-----------------|------------| | Ethnicity, National Origin, Religious Affiliation, Immigration Status | 90.90 | 81.81 | | | Race | 96.10 | 88.46 | | | Sex, Gender Identity | 69.23 | 69.23 | | | Sexual Orientation | 73.33 | 86.66 | | | Caste | 100 | 100 | | | Disability Status | 100 | 25 | | | BERT-Large | Ethnicity, National Origin, Religious Affiliation, Immigration Status | 89.09 | 76.36 | | Race | 80.76 | 76.92 | | | Sex, Gender Identity | 88.46 | 76.92 | | | Sexual Orientation | 80.00 | 66.60 | | | Caste | 100 | 100 | | | Disability Status | 100 | 0 | | | RoBERTa-Large | Ethnicity, National Origin, Religious Affiliation, Immigration Status | 84.54 | 81.81 | | Race | 92.30 | 80.76 | | | Sex, Gender Identity | 65.38 | 65.38 | | | Sexual Orientation | 60.00 | 53.33 | | | Caste | 100 | 100 | | | Disability Status | 100 | 0 | | | ALBERT-Large-V2 | Ethnicity, National Origin, Religious Affiliation, Immigration Status | 88.18 | 83.63 | | Race | 100 | 84.61 | | | Sex, Gender Identity | 76.92 | 65.38 | | | Sexual Orientation | 86.66 | 80.00 | | | Caste | 100 | 100 | | | Disability Status | 100 | 100 | | | DeBERTa-Large | | | | Table 14: Scores and decisions of BERT-Large based QnA-CM and Baseline for two hateful but not Hate Speech examples (thresholds are set to 95% policy-level Precision performance on Hate Speech). **Trigger Warning:** The examples contain abusive language and hateful sentiment. | Example 1: he's also a fraud don't believe them they are begger's they are doing from of such amount and employee are paying from ther own such a <slur> begger's. Algorithm Scores Decision Hateful 0.95 Ethnicity, National Origin, Religious Affiliation, Immigration Status 0.76 Race 0.06 Sex, Gender Identity 0.01 Non-Hate Speech Sexual Orientation 0.02 Caste 0.00 Disability Status 0.00 Baseline 0.97 Hate Speech Example 2: This guy is a swindler and takes advantage of ur daughter doesn't care abt her age .. QnA-CM (Ours) Hateful 0.92 Ethnicity, National Origin, Religious Affiliation, Immigration Status 0.17 Race 0.03 Sex, Gender Identity 0.21 Non-Hate Speech Sexual Orientation 0.05 Caste 0.02 Disability Status 0.01 Baseline 0.98 Hate Speech Comment: The Baseline classifier is not being able to distinguish between "hateful" and Hate Speech. In case of QnA-CM, we can explicitly question the learner about the inherent traits and check if any of them responses Yes alongside "hateful" to mark as Hate Speech. As we can see from the scores for QnA-CM that none of the inherent traits response Yes thus we are being able to correctly classify the contents as Non-hate Speech (even if it is "hateful" as per the high score for that question only). This way the prompts aid QnA-CM to effectively learn the individual themes and achieve disentanglement between the distinct concepts. 
wang-etal-2023-weighted
Weighted Contrastive Learning With False Negative Control to Help Long-tailed Product Classification
https://aclanthology.org/2023.acl-industry.55
Item categorization (IC) aims to classify product descriptions into leaf nodes in a categorical taxonomy, which is a key technology used in a wide range of applications. Since most datasets have a long-tailed distribution, classification performance on tail labels tends to be poor due to scarce supervision, causing many issues in real-life applications. To address the IC task's long-tail issue, the K-positive contrastive loss (KCL) was proposed for image classification and can be applied to the IC task when using text-based contrastive learning, e.g., SimCSE. However, one shortcoming of using KCL has been neglected in previous research: false negative (FN) instances may harm the KCL's representation learning. To address the FN issue in the KCL, we propose to re-weight the positive pairs in the KCL loss with a regularization that constrains the sum of the weights to be as close to K+1 as possible. After controlling FN instances with the proposed method, IC performance is further improved and is superior to other LT-addressing methods.
# Weighted Contrastive Learning With False Negative Control To Help Long-Tailed Product Classification Tianqi Wang University of Buffalo Buffalo, NY twang47@buffalo.edu Lei Chen Rakuten Institute of Technology Boston, MA lei.a.chen@rakuten.com Xiaodan Zhu Queen's University Kingston, ON, Canada xiaodan.zhu@queensu.edu Younghun Lee and **Jing Gao** Purdue University, West Lafayette, IN {younghun,jinggao}@purdue.edu ## Abstract Item categorization (IC) aims to classify product descriptions into leaf nodes in a categorical taxonomy, which is a key technology used in a wide range of applications. Along with the fact that most datasets often has a long-tailed distribution, classification performances on tail labels tend to be poor due to scarce supervision, causing many issues in real-life applications. To address IC task's long-tail issue, K-positive contrastive loss (KCL) is proposed on image classification task and can be applied on the IC task when using text-based contrastive learning, e.g., SimCSE. However, one shortcoming of using KCL has been neglected in previous research: false negative (FN) instances may harm the KCL's representation learning. To address the FN issue in the KCL, we proposed to reweight the positive pairs in the KCL loss with a regularization that the sum of weights should be constrained to K+1 as close as possible. After controlling FN instances with the proposed method, IC performance has been further improved and is superior to other LT-addressing methods. ## 1 Introduction Item categorization (IC) aims to classify a product into a node of a taxonomy hierarchy. The textual descriptions of the products are used as the input and thus the task can be formulated as a text classification problem. IC is a fundamental task in e-commerce and the base for many applications such as personal recommendation and query understanding. One of the major challenges in building a highly effective real-life IC system is the serious long-tailed (LT) problem—A few head classes have the majority of the product items, while each of the remaining (large number of) tail classes contains only a few items. Consequently, the scarce supervision available for these tail classes tends to cause unsatisfactory classification performance. In the most recent years, several novel LT-addressing methods, e.g., methods utilizing selfsupervision (Yang and Xu, 2020) and contrastive learning (CL) (Kang et al., 2021), have emerged in computer vision. However, the related research on natural language processing (NLP) tasks is still limited. However, when utilizing unsupervised contrastive learning, e.g., *K-positive contrastive loss* (KCL) (Kang et al., 2021), the issue of appearing False Negative (FN) samples hurts model learning. Figure 1 shows an example of the FN issue and the performance impact reported in (Huynh et al., 2020). In this paper, to build IC models and use KCL to solve the LT issue, we propose a novel method to control the FN issue, which entails assigning different weights for each positive sample in the KCL loss and tries to keep the sum of these weights equal to a predefined value. The experimental results on the three Amazon product category datasets show that the proposed contrastive learning methods help on improving the model performance on tail classes and the FN controlling can further improve CL-based LT-addressing method. Our main contributions can be summarized as: Figure 1: False Negative (FN) samples may appear ![0_image_0.png](0_image_0.png) when applying contrastive learning. 
Addressing the FN issue can improve the learned model's downstream performance. The result is from Table 2 in (Huynh et al., 2020), regarding image classification on the ImageNet data using the model learned by SimCLR (Chen et al., 2020).

- We recognize the false negative sample issue in the K-positive contrastive loss.
- We propose a novel false negative controlling method to mitigate its negative impact and show the effectiveness of the proposed model.
- To the best of our knowledge, we are the first to apply contrastive learning to address the LT challenge in IC text classification.

## 2 Related Work

Many methods have been proposed to address the LT issue. One category of those methods resamples the data to balance the label distribution, e.g., SMOTE (Chawla et al., 2002). Another category of methods assigns different weights to samples based on their label frequencies, e.g., Focal loss (Lin et al., 2017), Class-balanced loss (Cui et al., 2019), Label-Distribution-Aware Margin loss (LDAM) (Cao et al., 2019), and so on. Some of those loss-balancing methods have also been applied to the NLP domain in (Huang et al., 2021). In addition, few-shot learning (Liu et al., 2019) and transfer learning (Xiao et al., 2021) methods have also been proposed for long-tail classification. Recently, a *two-stage* training strategy (exemplified in (Kang et al., 2019; Zhou et al., 2020)), which decouples the learning of a feature encoder from the learning of a classifier, has become influential in computer vision and shows superior performance in addressing the LT issue.

Contrastive learning (CL) has been found to be effective in providing high-quality encoders in a simple self-supervised fashion. CL-based text representation learning has become a hot research topic in NLP. SimCSE (Gao et al., 2021) uses dropout operations as an effective text augmentation and can learn effective text representations. In the two-stage LT-addressing method, self-supervision, which discards the influence of the label distribution, has been used in the representation learning stage, e.g., (Yang and Xu, 2020; Kang et al., 2021). Beyond pure self-supervision, including the supervision signal from existing labels can improve representation learning (Khosla et al., 2020). However, introducing such semantic information may suffer from the LT issue and hurt the performance of tail classes. To address this issue, the K-positive contrastive loss (Kang et al., 2021) was proposed to learn balanced feature representations. An in-batch negative instance is called a *false negative* (FN) if it shares the label carried by the anchor sample. FN samples have been found to be harmful to CL methods, and corresponding mitigation methods have been proposed (Huynh et al., 2020; Chen et al., 2021).

## 3 Methodology

Let x denote the title of a product and y its category label. Then IC can be formulated as a text classification problem: given a product title x, we train a model to predict the class label y. In the contrastive learning formulation used below, h and h+ are the representations of the anchor sample x and its corresponding positive sample x+, respectively; h− ∈ H− is the representation of a negative sample x− in the negative sample set X− given the anchor sample x and its positive sample x+. τ is a temperature hyper-parameter and sim(·, ·) denotes the cosine similarity of two vectors.
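To ground this notation, here is a minimal PyTorch sketch of the standard single-anchor contrastive (InfoNCE-style) loss that the symbols above describe: it pulls h toward h+ while pushing it away from the negatives, with cosine similarity scaled by τ. It is an illustrative implementation, not the authors' code, and the temperature value is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F

def info_nce(h, h_pos, h_neg, tau=0.05):
    """Single-anchor contrastive loss.
    h: (d,) anchor, h_pos: (d,) positive, h_neg: (n_neg, d) negatives."""
    pos = F.cosine_similarity(h, h_pos, dim=0) / tau                              # scalar
    neg = F.cosine_similarity(h.unsqueeze(0).expand_as(h_neg), h_neg, dim=1) / tau  # (n_neg,)
    logits = torch.cat([pos.unsqueeze(0), neg])                                   # (1 + n_neg,)
    # -log( exp(pos) / sum over all candidates of exp(sim / tau) )
    return -F.log_softmax(logits, dim=0)[0]
```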
## 3.1 Recap Of KCL

The KCL is a state-of-the-art method that learns balanced feature representations under a long-tailed label distribution. It defines a positive sample set by sampling K samples belonging to the same class as the anchor if more than K such samples exist in the current mini-batch. The KCL can be calculated by Eq. 1.

$$\mathcal{L}_{KCL}=\frac{1}{(K+1)}\sum_{h\in H}\sum_{h^{+}\in\{h^{\prime}\}\cup H_{K}^{+}}\mathcal{L}(h,h^{+})$$
$$\mathcal{L}(h,h^{+})=-\log\frac{e^{sim(h,h^{+})/\tau}}{\sum_{h_{i}\in H-\{h\}}e^{sim(h,h_{i})/\tau}}\tag{1}$$

where h is the anchor sample representation and h′ the self-augmented representation of h. H+_K represents the representation set of the K positive samples sampled from the batch. H denotes the samples in the same batch. K is the hyper-parameter specifying the number of positive pairs. In NLP, to generate h′, we propose to use the SimCSE (Gao et al., 2021) method. In particular, an anchor sample x is encoded using a BERT (Devlin et al., 2018) model with varying dropout masks. The encoded representations can be written as:

$$\begin{aligned}h&=\tanh(\mathrm{MLP}(\mathrm{BERT}(x,z)))\\ h^{\prime}&=\tanh(\mathrm{MLP}(\mathrm{BERT}(x,z^{\prime})))\end{aligned}\tag{2}$$

where BERT(x, z) denotes the BERT encoder using a random dropout mask. For the MLP, we use one fully connected layer with the *tanh* activation, while z and z′ are two different random dropout masks in BERT at a rate of 0.1.

Fig. 2 illustrates how the KCL in Eq. 1 is applied to a mini-batch with five samples. Since P2 and P0 share the same label, their encoded representations are expected to be close. However, when running KCL with K = 1, P2 will be wrongly pushed away from P0 as a FN sample.

## 3.2 False Negative Control

As shown in Fig. 2, a significant drawback of KCL is that some positive samples can be treated as negative if there are more than K + 1 samples belonging to the same class in a batch. The occurrence of these false negative instances degrades the quality of the learned representations and further hurts the classification performance. To alleviate the false negatives, we propose a novel method named false negative control (FNC), which assigns varying weights to positive samples in the KCL loss based on the embeddings of the anchor sample and the positive samples:

$$w_{h,h^{+}}=\mathrm{ReLU}(\mathrm{MLP}(h\oplus h^{+}))\tag{3}$$

where w_{h,h+} is the weight for the positive sample h+ with respect to the anchor sample h and ⊕ is the vector concatenation operation. With the learned weights, we propose the weighted contrastive loss by re-weighting the positive samples in the original InfoNCE loss, defined as:

$$L_{w}(h,h^{+})=-\log\frac{e^{sim(h,h^{+})/\tau}}{\sum_{h_{i}^{+}\in H^{+}}w_{i}\,e^{sim(h,h_{i}^{+})/\tau}+\sum_{h_{i}^{-}\in H^{-}}e^{sim(h,h_{i}^{-})/\tau}}\tag{4}$$

To satisfy the property of balancing the feature space for classes with different frequencies in KCL while controlling the FN issue at the same time, we propose the KCL-FNC loss, built on the weighted contrastive loss above, with the constraint that the summation of these weights is as close to a predefined value (K + 1) as possible.
The KCL-FNC loss is defined as: $$\begin{array}{c}{{{\mathcal L}_{K C L-F N C}=\sum_{h\in H\ h^{+}\in\{h^{\prime}\}\cup H_{K}^{+}}}}\\ {{{\mathcal L}_{w}(h,h^{+})+\lambda{\mathcal L}_{r e g}(h,H^{+})}}\\ {{{(K+1)}}}\end{array}$$ $$\mathbf{(5)}$$ where λ is the balanced parameter and Lreg(*h, H*+) is the regularization loss denoted as: $${\mathcal{L}}_{r e g}(h,H^{+})=|\sum_{h^{+}\in H^{+}}w_{h,h^{+}}-K-1|\quad(6)$$ The advantages of the proposed KCL-FNC loss over exiting FN controlling methods are two-folds: (1) learning balanced feature representations and (2) applying as much available information as possible. The attraction strategy (FNA) in (Huynh et al., 2020; Chen et al., 2021), which include all positive samples rather that K sampled positive samples, makes the KCL roll back to the supervised contrastive loss when the ground-truth labels are known, and therefore destroys the KCL's property of learning balanced representations. The elimination (FNE) strategy, which ignores the FN samples in calculating the contrastive loss, loses valuable information and further degrades the representationlearning performance, especially the number of such instances are large, such as for head labels in a imbalanced data set. ## 4 Experiments Datasets. The experiments are performed on the public Amazon dataset (McAuley et al., 2015; He and McAuley, 2016) which is a widely used benchmark. Following (Tayal et al., 2020), we use three categories of Amazon product datasets: Automotive, Beauty, and Electronics. Each sample in the | Automotive | Electronics | Beauty | AutoH | AutoM | AutoT | | | | | |--------------|---------------|----------|---------|---------|---------|-------|-------|-------|-------| | F1w | F1m↑ | F1w | F1m↑ | F1w | F1m ↑ | F1m↑ | F1m↑ | F1m↑ | | | BERT-CE | 78.03 | 63.95 | 67.68 | 52.94 | 71.44 | 56.64 | 75.42 | 64.51 | 51.78 | | cRT | 77.85 | 63.72 | 67.54 | 52.99 | 71.55 | 55.88 | 75.20 | 63.99 | 51.78 | | SimCSEus | 76.36 | 64.25 | 65.82 | 53.30 | 70.99 | 58.06 | 74.16 | 64.92 | 54.65 | | KCL | 76.87 | 65.17 | 65.18 | 53.39 | 71.44 | 58.26 | 74.99 | 65.06 | 55.36 | | KCL-FNA | 76.54 | 64.65 | 66.08 | 53.69 | 71.65 | 58.31 | 74.46 | 64.88 | 54.53 | | KCL-FNE | 77.96 | 65.82 | 65.73 | 53.67 | 71.43 | 57.95 | 75.97 | 65.78 | 55.61 | | KCL-FNC | 78.05 | 66.20 | 66.24 | 54.02 | 71.84 | 58.56 | 76.02 | 66.50 | 56.07 | Overall Performance. We compare the proposed method with the following models: BERT with cross-entropy loss (BERT-CE), cRT (Kang et al., 2019), unsupervised SimCSE (SimCSEus), and controlling false negative instances with attraction (KCL-FNA) and elimination (KCL-FNE). Note that except for BERT-CE, all other baseline models use the two-stage approach to address the long tail issue, in which the classifier is trained using a balanced data set. The experimental results are shown in Table 1. We can observe that all contrastive learning-based models outperform BERT-CE and cRT in terms of macro-F1, which demonstrates the effectiveness of contrastive learning in addressing the long tail issue in IC. The calculated FN sample rates are 0.036 (Automotive), 0.068 (Electronics) and 0.102 (Beauty), showing that there are significant number of FN samples when using KCL1. When comparing the false negative controlling methods with the KCL, we observed that those false negative controlling methods achieved better performance in terms of macro-F1. The results demonstrate the necessity of controlling the FN issue in KCL. 
Among those false negative controlling methods, the proposed method outperforms all other methods, showing its advantage over existing methods. Performance on Subsets. To investigate the performance of the models on the classes with different label frequencies, we split the Automotive dataset into three subsets according to the label frequency: Head, Median and Tail and evaluate the models by macro-F1 on the subsets1. The model performance on the subsets are included in Table 1. We can see that the performance decreases along with the decrease of the label frequencies for each single model, illustrating the lacking of samples limits the model performance. Moreover, the methods based on KCL outperforms all other baselines. The proposed FN controlling method achieves the best performance on all subsets which demonstrates the false negative controlling method can help address the long tail issue in IC task without hurting the overall performance. The details can be found in the Appendix. ## 5 Conclusion In large-scaled item categorization tasks, category labels are naturally distributed in a long tail pattern, which challenges the performance on tail classes due to severe supervision missing. To address this challenge, we adopt a two-stage LT-addressing method that was originally proposed in the image classification task. To make this method work on our text classification task, we use the recently proposed SimCSE (Gao et al., 2021) to do an effective text transformation and KCL loss in the representation learning stage. Furthermore, we recognize there are false negative samples caused by using the KCL loss and propose a novel controlling method to reduce the corresponding negative influences. The experimental results prove that the proposed method helps improve the performance on long-tailed data and the false negative controlling can further help boost the performance when using KCL. While we worked on item classification in this paper, we will extend the model to other problems. ## 6 Limitations A major limitation of this research work is only item classification, one specific type of NLU tasks, is used in our experiments. To better evaluate our proposed KCL-FNC method, an expanded testing task set will provide more convincing power. In addition, we only used cross-entropy (CE) loss when training models, in both representation and classifier learning stages. It will be interesting to see the compound effect when applying our proposed method together with some advanced loss types, such as LDAM (Cao et al., 2019). ## References Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. 2019. Learning imbalanced datasets with label-distribution-aware margin loss. arXiv preprint arXiv:1906.07413. Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. *Journal of artificial intelligence research*, 16:321–357. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pages 1597–1607. PMLR. Tsai-Shien Chen, Wei-Chih Hung, Hung-Yu Tseng, Shao-Yi Chien, and Ming-Hsuan Yang. 2021. Incremental false negative detection for contrastive learning. *arXiv preprint arXiv:2106.03719*. Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. 2019. Class-balanced loss based on effective number of samples. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9268–9277. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *Empirical Methods in Natural Language Processing (EMNLP)*. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pages 507–517. Yi Huang, Buse Giledereli, Abdullatif Köksal, Arzucan Özgür, and Elif Ozkirimli. 2021. Balancing methods for multi-label text classification with long-tailed class distribution. *arXiv preprint arXiv:2109.04712*. Tri Huynh, Simon Kornblith, Matthew R Walter, Michael Maire, and Maryam Khademi. 2020. Boosting contrastive self-supervised learning with false negative cancellation. arXiv preprint arXiv:2011.11765. Bingyi Kang, Yu Li, Sa Xie, Zehuan Yuan, and Jiashi Feng. 2021. Exploring balanced feature spaces for representation learning. In International Conference on Learning Representations. Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. 2019. Decoupling representation and classifier for long-tailed recognition. arXiv preprint arXiv:1910.09217. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. arXiv preprint arXiv:2004.11362. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988. Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X. Yu. 2019. Large-scale long-tailed recognition in an open world. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 43–52. Kshitij Tayal, Rahul Ghosh, and Vipin Kumar. 2020. Model-agnostic methods for text classification with inherent noise. In *Proceedings of the 28th International Conference on Computational Linguistics:* Industry Track, pages 202–213. Lin Xiao, Xiangliang Zhang, Liping Jing, Chi Huang, and Mingyang Song. 2021. Does head label help for long-tailed multi-label text classification. *arXiv* preprint arXiv:2101.09704. Yuzhe Yang and Zhi Xu. 2020. Rethinking the value of labels for improving class-imbalanced learning. arXiv preprint arXiv:2006.07529. Boyan Zhou, Quan Cui, Xiu-Shen Wei, and Zhao-Min Chen. 2020. Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9719–9728. ## A Dataset Statistics The statistics of the three data sets: Automotive, Beauty and Electronics are shown in Table A. 1 and the histogram of the label frequencies of the three data sets are shown in Figure A. 1. 
| Labels | Samples | Title Length | | |-------------|-----------|----------------|--------------| | Automotive | 953 | 160,725 | 9.90 ± 5.51 | | Beauty | 229 | 159,805 | 10.26 ± 5.61 | | Electronics | 500 | 86,357 | 14.90 ± 9.56 | Table A. 1: Statistics of Datasets ## B Implementation Details For all the models except for the BERT, we follow the two-stage training protocol in (Kang et al., 2019). The batch size is set to 32 and initial learning rate is 1e − 5 with a linear decay. The datasets are preprocessed following (Tayal et al., 2020). We split the training datasets into two subsets: *train* vs. dev that is used to select hyperparameters and validate the performance 2. The models are evaluated using two metrics: macro-F1 (F1m) and weighted F1 (F1w). Note that macro-F1 is frequently used in evaluating LT-addressing methods. Since it calculates the F1 for each class and averages them, it is significantly influenced by the performance of tail classes. We report the results on the test set using the best models on the dev set measured by macro-F1. ## C False Negative Rate Calculation To calculate the false negative rate, we use the obtained embeddings of SimCSE-KCL in the first stage after 10 epoch and report the average of five runs. We calculate the false negative rate of those three datasets where the batch size is set to 32 and K is set to 1. Following (Chen et al., 2021), we calculate the false negative rate in SimCSE-KCL for the three datasets. The false negative rate fnr is the number of false negative samples among top 25% the most similar samples of the anchor in a batch, which can be represented as: $$f n r={\frac{\sum_{i=1}^{N}\sum_{x_{j}\in B_{i}}m a x(0,|B_{i}^{j}|-(K+1))}{\sum_{i=1}^{N}(0.25\times|B_{i}|\times(|B_{i}|-1))}}$$ N is the number of batches. Biis the set of samples in batch i and |Bi| is the number of samples in 2The code will be available after acceptance. batch i. |B j i| is the number of samples belonging to the same class as xj in the 25% most similar samples with the sample xj . ## D Data Statistics Of The Subsets The classes are sorted based on their frequencies. The Head subset consists of the samples in the most frequent 13 classes and the Tail subset includes the samples belonging to the least frequent 13 classes. The rest samples belonging to the 13 median frequent classes consists of the Median subset. The statistics of the subsets are shown in the Table D. 1. | Automotive | | | | |--------------|----------|---------|--------| | Head | Median | Tail | | | Label | 318 | 318 | 317 | | Sample | 132, 590 | 20, 318 | 7, 817 | ![6_image_0.png](6_image_0.png)
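For reference, the false negative rate defined in Appendix C can be computed directly from per-batch embeddings and labels. The sketch below is a generic re-implementation of that formula under our own variable names; it assumes cosine similarity over L2-normalized embeddings and is not the authors' evaluation script.

```python
import numpy as np

def false_negative_rate(embeddings_per_batch, labels_per_batch, K=1, top_frac=0.25):
    """Count same-class samples among each anchor's top-25% most similar in-batch
    neighbours beyond the K+1 that KCL treats as positives, following Appendix C."""
    numerator, denominator = 0.0, 0.0
    for H, y in zip(embeddings_per_batch, labels_per_batch):
        H = np.asarray(H, dtype=float)
        y = np.asarray(y)
        H = H / np.linalg.norm(H, axis=1, keepdims=True)   # cosine similarity
        sims = H @ H.T
        np.fill_diagonal(sims, -np.inf)                     # exclude the anchor itself
        n = len(y)
        top_k = int(top_frac * (n - 1))
        for j in range(n):
            neighbours = np.argsort(-sims[j])[:top_k]       # most similar first
            same_class = int(np.sum(y[neighbours] == y[j]))
            numerator += max(0, same_class - (K + 1))
        denominator += top_frac * n * (n - 1)
    return numerator / denominator
```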
bespalov-etal-2023-towards
Towards Building a Robust Toxicity Predictor
https://aclanthology.org/2023.acl-industry.56
Recent NLP literature pays little attention to the robustness of toxicity language predictors, while these systems are most likely to be used in adversarial contexts. This paper presents a novel adversarial attack, ToxicTrap, introducing small word-level perturbations to fool SOTA text classifiers into predicting toxic text samples as benign. ToxicTrap exploits greedy-based search strategies to enable fast and effective generation of toxic adversarial examples. Two novel goal function designs allow ToxicTrap to identify weaknesses in both multiclass and multilabel toxic language detectors. Our empirical results show that SOTA toxicity text classifiers are indeed vulnerable to the proposed attacks, attaining over 98% attack success rates in multilabel cases. We also show how a vanilla adversarial training and its improved version can help increase robustness of a toxicity detector even against unseen attacks.
# Towards Building A Robust Toxicity Predictor Dmitriy Bespalov, Sourav Bhabesh, Yi Xiang, Liutong Zhou, Yanjun Qi Amazon Web Services AI ML { dbespal, sbhabesh, yxxan, yanjunqi }@amazon.com ## Abstract Recent NLP literature pays little attention to the robustness of toxicity language predictors, while these systems are most likely to be used in adversarial contexts. This paper presents a novel adversarial attack, ToxicTrap, introducing small word-level perturbations to fool SOTA text classifiers to predict toxic text samples as benign. ToxicTrap exploits greedy based search strategies to enable fast and effective generation of toxic adversarial examples. Two novel goal function designs allow ToxicTrap to identify weaknesses in both multiclass and multilabel toxic language detectors. Our empirical results show that SOTA toxicity text classifiers are indeed vulnerable to the proposed attacks, attaining over 98% attack success rates in multilabel cases. We also show how a vanilla adversarial training and its improved version can help increase robustness of a toxicity detector even against unseen attacks. ## 1 **Introduction** Deep learning-based natural language processing (NLP) plays a crucial role in detecting toxic language content (Ibrahim et al., 2018; Zhao et al., 2019; Djuric et al., 2015; Nobata et al., 2016; MacAvaney et al., 2019). Toxic content often includes abusive language, hate speech, profanity or sexual content. Recent methods have mostly leveraged transformer-based pre-trained language models (Devlin et al., 2019; Liu et al., 2019a) and achieved high performance in detecting toxicity (Zampieri et al., 2020). However, directly deploying NLP models could be problematic for real-world toxicity detection. This is because toxicity filtering is mostly needed in security-relevant industries like gaming or social networks where models are constantly being challenged by social engineering and adversarial attacks. In this paper, we study the adversarial robustness of toxicity language predictors1and propose a new 1We use "toxicity detection", "toxicity language detection" 581 ![0_image_0.png](0_image_0.png) Figure 1: ToxicTrap successfully fooled a SOTA toxicity predictor by perturbing one word in the original text using word synonym perturbation. After adversarial training (AT), the improved toxicity predictor can correctly flag the perturbed text into the toxicity class. set of attacks, we call "ToxicTrap ". ToxicTrap generates targeted adversarial examples that fool a target model towards the benign predictions. Our design is motivated by the fact that most toxicity classifiers are being deployed as API services and used for flagging out toxic samples. Figure 1 shows one ToxicTrap adversarial example. The perturbed text replaces one word with its synonym and the resulting phrase fooled the transformer based detector into a failed detection (as "benign"). We propose novel goal functions to guide greedy word importance ranking to iteratively replace each word with small perturbations. Samples generated by ToxicTrap are toxic, and can fool a victim toxicity predictor model to classify them as "benign" and not as any toxicity classes or labels. The proposed ToxicTrap attacks can pinpoint the robustness of both multiclass and multilabel toxicity NLP models. To the authors' best knowledge, this paper is the first work that introduces adversarial attacks 2to fool multilabel NLP tasks. 
Design multilabel ToxicTrap is challenging since coordinating multiple labels all at once and quantifying the attacking goals is tricky when aiming to multiple targeted ## Labels. Empirically, we use ToxicTrap to evaluate BERT (Devlin et al., 2018) and DistillBERT (Liu et al., 2019b) based modern toxicity text classifiers on the Jigasw (Jig, 2018), and Offensive tweet (Bowman et al., 2015) datasets. We then use adversarial training to make these models more resistant to ToxicTrap adversarial attacks. In *adversarial* training a target model is trained on both original examples and adversarial examples (Goodfellow et al., 2014). We improve the vanilla adversarial training with an ensemble strategy to train with toxic adversarial examples generated from multiple attacks. Our contributions are as follows: - ToxicTrap reveals that SOTA toxicity classifiers are not robust to small adversarial perturbations. - Conduct a thorough set of analysis comparing variations of ToxicTrap designs. - Empirically show that greedy unk search with composite transformation is preferred. - Adversarial training can improve robustness of toxicity detector. ## 2 **Method** Methods generating text adversarial examples introduce small perturbations in the input data checking if a target model's output changes significantly. These adversarial attacks help to identify if an NLP model is susceptible to word replacements, misspellings, or other variations that are commonly found in real-world data. For a given NLP classifier F : *X → Y* and a seed input x, searching for an *adversarial* example x′from x is: $$\begin{array}{c}{{{\bf x}^{\prime}={\cal T}({\bf x},\Delta{\bf x}),\ {\bf x}^{\prime}\in{\cal X}}}\\ {{\mathrm{s.t.}\ \ {\cal G}({\cal F},{\bf x}^{\prime}),\ \ \mathrm{and}\ \ \{C_{i}({\bf x},{\bf x}^{\prime})\}}}\end{array}\qquad(1)$$ Here T (x, ∆x) denotes the transformations that perturb text x to x′. G(F, x′) represents a goal function that defines the purpose of an attack, for instance like flipping the output. {Ci(x, x′)} denotes a set of constraints that filters out undesirable x′to ensure that perturbed x′ preserves the semantics and fluency of the original x. To solve Equation (1), adversarial attack methods design search strategies 3to transform x to x′ via transformation T (x, ∆x), so that x′fools F by achieving the fooling goal G(F, x′), and at the same time fulfilling a set of constraints {Ci(x, x′)}. Therefore designing adversarial attacks focus on 3Because brute force is not feasible considering the length and dictionary size of natural language text. designing four components: (1) goal function, (2) transformation, (3) search strategy, and (4) constraints between seed and its adversarial examples (Morris et al. (2020b)). We propose a suite of ToxicTrap attacks to identify vulnerabilities in SOTA toxicity NLP detectors. ToxicTrap attacks focus on intentionally generating perturbed texts that contain the same highly abusive language as original toxic text, yet receive significantly lower toxicity scores and get predicted as "benign" by a target model. ## 2.1 **Word Transformations In** Toxictrap For T (x, ∆x), many possible word perturbations exist, including embedding based word swap, thesaurus based synonym substitutions, or character substitution (Morris et al., 2020a). Attacks by Synonym Substitution: Our design focuses on transformations that replace words from an input with its synonyms. 
- (1) Swap words with their N nearest neighbors in the counter-fitted GloVe word embedding space; where N ∈ {5, 20, 50}. - (2) Swap words with those predicted by BERT Masked Language Model (MLM) model. - (3) Swap words with their nearest neighbors in the wordNet. The goal of word synonym replacement is to create examples that can preserve semantics, grammaticality, and non-suspicion. Attacks by Character Transformation: Another group of word transformations is to generate perturbed word via character manipulations. This includes character insertion, deletion, neighboring character swap and/or character substitution by Homoglyph. These transformations change a word into one that a target toxicity detection model doesn't recognize. These character changes are designed to generate character sequences that a human reader could easily correct into those original words. Language semantics are preserved since human readers can easily correct the misspellings. Composite Transformation: We also propose to combine the above transformations to create new composite transformations. For instance, one composite transformation can include both perturbed words from character substitution by Homoglyph and from word swaps using nearest neighbors from GloVe embedding (Pennington et al., 2014). ## 2.2 **Novel Goal Functions For** Toxictrap A goal function G(F, x′) module defines the purpose of an attack. Different kinds of goal functions exist in the literature to define whether an attack is successful in terms of a victim model's outputs. Toxicity language detection has centered on a supervised classification formulation, including binary toxicity detector, multiclass toxicity classification and multilabel toxicity detection which assigns a set of target labels for x. See Section A.2 for more details. As aforementioned, toxicity classifiers are used mostly as API services to filter out toxic text. Therefore, its main vulnerability are those samples that should get detected as toxic, however, fooling detectors into a wrong prediction as "benign". Now let us define G(F, x′) for ToxicTrap attacks. We propose two choices of designs regarding the target model types. Multiclass or Binary Toxicity: When toxicity detector F handles binary or multiclass outputs, we define ToxicTrap attacks' goal function as: $$\mathcal{G}(\mathcal{F},\mathbf{x}^{\prime}):=\{\mathcal{F}(\mathbf{x}^{\prime})=b;\mathcal{F}(\mathbf{x})\neq b\}\tag{2}$$ Here $b$ denotes the "0:benign" class. Multilabel Toxicity: Next, we study how to fool multilabel toxicity predictors. In real-world applications of toxicity identification, an input text may associate with multiple labels, like identity attack, profanity, and hate speech at the same time. The existence of multiple toxic labels at the same time provides more opportunities for attackers, but also poses design challenges 4. Multilabel toxicity detection assigns a set of target labels for x. The output y = {y1, y2*, ..., y*L} is a vector of L binary labels and each yi ∈ {0, 1} (Zhao et al., 2019). For example in the Jigsaw dataset (Jig, 2018), each text sample associates with six binary labels per sample, namely {benign, obscene, identity attack, insult, threat, and sexual explicit}. 
We introduce a novel goal function for attacking such models as follows:

$$\mathcal{G}(\mathcal{F},\mathbf{x}^{\prime}):=\{\mathcal{F}_{b}(\mathbf{x}^{\prime})=1;\ \mathcal{F}_{b}(\mathbf{x})=0;\ \mathcal{F}_{t}(\mathbf{x}^{\prime})=0,\ \forall t\in\mathbf{T}\}\tag{3}$$

Here, T = {y2, ..., yL} denotes the set of toxic labels, and b = {y1: Benign} is the non-toxic or benign label. (To the authors' best knowledge, no multilabel adversarial attacks exist in the NLP literature.) {Fb(x′) = 1; Fb(x) = 0} denotes that "x′ gets predicted as Benign, though x is toxic", and {Ft(x′) = 0, ∀t ∈ T} denotes that x′ is not predicted as any of the toxicity types. In summary, our ToxicTrap attacks focus on perturbing correctly predicted toxic samples.

## 2.3 **Language Constraints In ToxicTrap**

In Equation (1), we use a set of language constraints to filter out undesirable x′ and ensure that the perturbed x′ preserves the semantics and fluency of the original x, with as few perturbations as possible. There exists a variety of possible constraints in the NLP literature (Morris et al., 2020b). In ToxicTrap, we use the following list:

- Limit the ratio of perturbed words to 10%
- Minimum angular similarity from the universal sentence encoder (USE) of 0.84
- Part-of-speech match.

## 2.4 **Greedy Search Strategies In ToxicTrap**

Solving Equation (1) is a combinatorial search task: searching over all potential transformations to find those that result in a successful adversarial example, i.e., achieving the fooling goal function while satisfying the constraints. Due to the exponential nature of the search space, many heuristic search algorithms exist in the recent literature, including greedy search, beam search, and population-based search (Zhang et al., 2019). For a seed text x = (w1, . . . , wi, . . . , wn), a perturbed text x′ can be generated by swapping wi with an altered w′i. The main role of a search strategy is to decide which word wi from x to perturb next.

We propose to center the search design of ToxicTrap on greedy search with word importance ranking, which iteratively replaces one word at a time to generate adversarial examples. The main idea is that the words of x are first ranked according to an importance function. Four possible choices:

- (1) "unk"-based: a word's importance is determined by how much a heuristic score (details below) changes when the word is substituted with an UNK token.
- (2) "delete": a word's importance is determined by how much the heuristic score changes when the word is deleted from the original input.
- (3) "weighted saliency" or wt-saliency: words are ordered by the change in score when the word is substituted with an UNK token, multiplied by the maximum score gained by perturbing the word.
- (4) "gradient": each word's importance is calculated from the gradient of the victim's loss with respect to the word, taking its L1 norm as the importance.

After the words in x are sorted in order of descending importance, each word wi is substituted with w′i using an allowed transformation, until the fooling goal is achieved or the number of perturbed words reaches the upper bound. The heuristic scoring function used in the word importance ranking depends on the victim model and the fooling goal. For instance, when we work with a binary toxicity model, the heuristic score in ToxicTrap equals the model's output score for the target "0:benign" class.
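To make the greedy search concrete, below is a minimal sketch (not the paper's implementation) of unk-based word importance ranking for a binary detector. `benign_score` is a hypothetical callable standing in for the victim model's probability of the "0:benign" class; for the multilabel case, the score of Algorithm 1 (next section) would replace it.

```python
from typing import Callable, List, Tuple

def unk_word_importance(words: List[str],
                        benign_score: Callable[[str], float]) -> List[Tuple[int, float]]:
    """Rank words of a toxic input by how much masking each one with an UNK
    token raises the victim model's benign score (higher = perturb first)."""
    base = benign_score(" ".join(words))
    importance = []
    for i in range(len(words)):
        # `[UNK]` plays the role of the UNK token; benign_score is a stand-in,
        # not a real library call.
        masked = words[:i] + ["[UNK]"] + words[i + 1:]
        importance.append((i, benign_score(" ".join(masked)) - base))
    return sorted(importance, key=lambda t: t[1], reverse=True)

# A greedy attack loop would then try synonym swaps for words in this order
# until the detector flips to "benign" or the perturbation budget
# (e.g., 10% of the words) is exhausted.
```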
Algorithm 1 shows our design of the score function for the multilabel ToxicTrap attacks. In experiments, we also evaluate two other search strategies: "beam search" and "genetic search". Our empirical results indicate that greedy search with word importance ranking performs strongly relative to the other strategies. Algorithm 1 provides the pseudo code of how we implement Equation (3) as a new goal function in the TextAttack python library (Morris et al., 2020a). The implemented *MultilabelClassificationGoalFunction* extends the TextAttack library to multilabel tasks.

Algorithm 1 Attack with *MultilabelClassificationGoalFunction*

Input: An original text x, a multilabel classifier F, a set of targeted labels T (for which scores are to be maximized), a set of other labels N (for which scores are to be minimized); maximization threshold ϵmaximize = 0.5; search method S() := Greedy-WIR, transformations T , a set of constraints C, and the max number of trials I.
Output: adversarial example x′, Attack Status = {Fail, Success}

1: Initialize x′ ← None, goal ← −∞, ϵminimize = 1 − ϵmaximize
2: **for** trial i = 1, . . . , I **do**
3: x̃ ← T (x, S(goal, x, i))
4: **if** not all C ∈ C hold for (x, x̃) **then**
5: Continue
6: **end if**
7: scores ← sigmoid(F(x̃))
8: goal′ ← Σ_{l∈T} scores[l] + Σ_{l∈N} (1 − scores[l])
9: **if** goal′ > goal **then**
10: goal ← goal′ # search S() will use this goal value
11: **if** scores[l] > ϵmaximize for ∀l ∈ T and scores[l] < ϵminimize for ∀l ∈ N **then**
12: x′ ← x̃
13: **return** x′, Attack Succeeded
14: **end if**
15: **end if**
16: **end for**
17: **return** x′, Attack Failed

## 2.5 **Harden With Adversarial Training**

Our ultimate goal in designing ToxicTrap attacks is to improve toxicity NLP models' adversarial robustness. A simple strategy called Adversarial Training (AT) has been a major defense strategy for improving adversarial robustness (Madry et al., 2018). The vanilla adversarial training process involves augmenting the training data with adversarial examples generated by perturbing the training data in the input space. Two variations of adversarial training exist. (1) If the augmented adversarial examples are generated from a single attack approach, we call this process AT1. (2) If the augmented examples are generated from multiple attack methods, we call the training AT2.

## 2.6 **ToxicTrap Recipes And Extensions**

The modular design of ToxicTrap allows us to implement many different ToxicTrap attack recipes in a shared framework, combining different goal functions, constraints, transformations and search strategies. In Section 3, we conduct a thorough empirical analysis to compare possible ToxicTrap recipes and recommend the unk greedy search and the composite transformation for most use cases. Table 1 lists our recommended recipe. Besides, we adapt another five SOTA combinations of transformations and constraints from popular general NLP adversarial example recipes in the literature to create the ToxicTraps Extend attacks in Table 14, covering a good range of transformations and constraints. Table 15 shows generated adversarial examples using these attacks.
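As a rough illustration of how the recommended recipe in Table 1 could be assembled from off-the-shelf TextAttack components, consider the sketch below. It is not the paper's exact implementation: the checkpoint name is a placeholder, and for multilabel victims the custom *MultilabelClassificationGoalFunction* of Algorithm 1 would replace `TargetedClassification`.

```python
import transformers
from textattack import Attack
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.goal_functions import TargetedClassification
from textattack.search_methods import GreedyWordSwapWIR
from textattack.transformations import (
    CompositeTransformation, WordSwapEmbedding, WordSwapHomoglyphSwap,
    WordSwapNeighboringCharacterSwap, WordSwapRandomCharacterDeletion,
    WordSwapRandomCharacterInsertion,
)
from textattack.constraints.grammaticality import PartOfSpeech
from textattack.constraints.overlap import MaxWordsPerturbed
from textattack.constraints.pre_transformation import (
    RepeatModification, StopwordModification,
)
from textattack.constraints.semantics.sentence_encoders import UniversalSentenceEncoder

# Victim model: any HuggingFace toxicity classifier whose class 0 is "benign".
# "distilbert-base-uncased" is only a placeholder, not the paper's trained model.
model = transformers.AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
tokenizer = transformers.AutoTokenizer.from_pretrained("distilbert-base-uncased")
victim = HuggingFaceModelWrapper(model, tokenizer)

# Goal (Eq. 2): push a correctly predicted toxic sample into the "0:benign" class.
goal_function = TargetedClassification(victim, target_class=0)

# Composite transformation: GloVe synonym swaps (N=20) plus character edits.
transformation = CompositeTransformation([
    WordSwapEmbedding(max_candidates=20),
    WordSwapRandomCharacterInsertion(),
    WordSwapRandomCharacterDeletion(),
    WordSwapNeighboringCharacterSwap(),
    WordSwapHomoglyphSwap(),
])

# Constraints from Table 1: <=10% of words perturbed, POS match, USE similarity > 0.84.
constraints = [
    RepeatModification(),
    StopwordModification(),
    MaxWordsPerturbed(max_percent=0.1),
    PartOfSpeech(),
    UniversalSentenceEncoder(threshold=0.84, metric="angular"),
]

# Greedy search with unk-based word importance ranking.
search_method = GreedyWordSwapWIR(wir_method="unk")

attack = Attack(goal_function, constraints, transformation, search_method)
```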
## 3 **Experiments** We conducted a series of experiments covering three different toxicity classification tasks: binary, multilabel, and multiclass; over two different transformer architectures: DistillBERT and BERT; and across two datasets: the large-scale Wikipedia Talk Page dataset- Jigsaw (Jig, 2018) and the Offensive Tweet for hate speech detection dataset (Davidson | Attack Recipe | Constraints | Transformation | Search Method | |----------------------------------------------|------------------------------------------------------------------------------------------|-----------------------|-----------------| | ToxicTrap | general | USE sentence encoding | | | recommendation | angular similarity > 0.84, Part-of-speech match, Ratio of number of words modified < 0.1 | {Random Character Insertion, Random Character Deletion, Neighboring Character Swap, Character Substitution by Homoglyph, Word Synonym Replacement with 20 nearest neighbors in the counter-fitted GLOVE word embedding space} | Greedy word important ranking ( unk - based) | | Table 1: Recommended ToxicTrap Attack Recipe | | | | | Search | unk | unk | unk | unk | unk | |----------------------------------------------------------------------------|----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|----------------------------------------------------------------------------|---------------------------------------------------------------------|-------------------------------------------------------------------------------| | POS | x | x | x | x | | | Transformation (N) | glove (20) + chars | glove (20) + chars | wordnet (-) | mlm (20) | glove (50) | | Seed Text thank god FOB didnt come back doing some shitty arena tour... | (0.910) thank god FOB didnt come back doing some foul arena tour... | | | | | | (0.588) Jews and | | | | | | | Jews and white race | white race traitors | | | | | | traitors have to be... | have to be ... (0.911) thank god FOB didnt come back doing some bollocks arena tour... | (0.906) thank god FOB didnt come back doing some lousy arena tour... | (0.917) thank god FOB didnt come back doing some stinky arena tour... | (0.908) thank god FOB didnt come back doing some wild arena tour... | (0.477) Jews and white race betrayers have to be... | | Fluffy is a retard homeboy is calling a baby... | (0.588) Jews and white race traitors have to be... | (0.507) Jews and white race betrayer have to be... | (0.557) Jews and white race members have to be... | (0.520) Fluffy is a moron homeboy is calling a baby... | | | ugh your with Tebow? You're officially a fag now and in followed. | (0.566) Fluffy is a retarded homeboy is calling a baby... | (0.520) Fluffy is a moron homeboy is calling a baby... | (0.520) Fluffy is a moron homeboy is calling a baby... | (0.915) Fluffy is a child homeboy is calling a baby... | (0.907) ugh your with Tebow? You're officially a poofter now and in followed. | | (0.897) ugh your with Tebow? You're officially a poof now and in followed. | (0.897) ugh your with Tebow? You're officially a poof now and in followed. | (0.908) ugh your with Tebow? You're officially a fatigue now and in followed. | (0.907) ugh your with Tebow? You're officially a wolf now and in followed. | | | (0.907) ugh your with Tebow? You're officially a **poofter** now and in followed. Table 2: Selected toxic adversarial examples generated attacking HTweet-MC model. Perturbed scores are reported in parenthesis. 
Adversarial examples were generated using unk search method; with and without POS constraint; and using three word synonym substitution transformations with number of nearest neighbors specified in parenthesis; chars indicates that character transformations were applied. More in Table 9. et al., 2017). Table 3 lists two datasets' statistics. Section A.4 provides more details. Base Toxicity Models: Our experiments work on three base models, including {Jigsaw-BL , Jigsaw-ML , HTweet-MC } to cover three types of toxicity prediction tasks. See details in Section A.5. Implementation: We implement all of our ToxicTrap and ToxicTraps Extend attacks using the NLP attack package TextAttack5(Morris et al., 2020a). When generating adversarial examples, we only attack seed samples that are correctly predicted by a victim model. (Adversarial attack does not make sense if the target model could not predict the seed sample correctly!). In our setup, this means we only use toxic seed samples when attacking three base models. This set up simulates real-world situations in which people intentionally create creative toxic examples to circumvent toxicity detection systems. Evaluation Metrics: We use attack success rate (ASR = \# of successful attacks \# of total attacks ) to measure how suc-5TextAttack https://github.com/QData/TextAttack. cessful each ToxicTrap attacking a victim model. To measure the runtime of each algorithm, we use the average number of queries to the victim model as a proxy. We also report the average percentage of words perturbed from an attack. In addition, for models trained with adversarial training, we report both, the model prediction performance and model robustness (by attacking robust model again). ## 3.1 **Results On Attacking 3 Toxicity Predictors** Table 2 provides a few selected adversarial examples generated by attacking HTweet-MC model with five variations of ToxicTrap . The first column provides seed text that was used to generate adversarial examples. Several observations can be made when comparing these examples. It is important to use POS constraint to generate syntactically correct examples. For the first seed text, a recipe without POS constraint (second column) produced replacement word "bullocks", while including POS constraint in the recipe (third column) produced syntactically correct example with replacement word "lousy". We also see that using glove for word synonym | Dataset | Train | Dev | Test | Test Toxic Samples | Train Toxic Samples | |-----------------------------------------|---------|-------|--------|----------------------|-----------------------| | Jigsaw (Jig, 2018) | 1.48MM | 185k | 185k | 8,909 | 71,273 | | Offensive Tweet (Davidson et al., 2017) | 20k | 2.2k | 2.5k | 1897 (offensive) + | 15510 + 1142 | | 145 (hateful) | | | | | | Table 3: Overview of the data statistics. gradient 67.16 63.51 24.13 gradient x 67.56 49.67 24.08 delete 71.46 58.78 18.96 delete x 71.46 48.17 19.37 unk 72.38 58.99 18.80 unk x 72.23 48.04 19.43 wt-saliency x 74.71 178.91 18.81 genetic x 80.49 1025.66 21.86 beam x 90.07 442.18 18.76 Table 5: Effect of different search strategies on attack performance. Search column identifies type of search method. POS column identifies if part-of-speech matching constraint is used. The composite transformation is used: glove with N = 20 plus the character transformations. Results from rows on "unk + POS" can compare with "unk " rows in Table 6. substitution is a better choice than mlm or wordnet . 
For the second and third seed text, mlm did not generate toxic phrases. In addition, we see that the recipe using glove (50) (last column) often generates similar examples as the glove (20) (third column). Finally, we observe that using character manipulation can generate adversarial examples with the same toxic meaning that fool the classifier. | Base Model | Jigsaw-BL | Jigsaw-ML | HTweet-MC | |----------------|-------------|-------------|-------------| | −→ Dataset | Jigsaw | Jigsaw | HateTweet | | Classification | Binary | Multilabel | Multiclass | | Architecture | DistillBERT | DistillBERT | BERT | | LearningRate | 2.06e-05 | 3.80e-05 | 2.66e-05 | | Epochs | 5 | 10 | 10 | wordnet - 56.66 32.08 17.94 glove 5 33.65 21.82 23.06 glove 20 66.70 41.06 18.85 glove 50 69.99 81.14 18.18 mlm 20 65.23 33.66 21.30 Table 6: Comparing synonym transformations only. No character transformations used. Reporting attack performance when using unk greedy search. The same constraints as in Table 5 with POS (part-of-speech) match. For the second seed text, character transformation (second and third columns) generates replacement word "traitors" where the second "t" is replaced with a monospace Unicode character "t". | Attack | | | | | |--------------------------------------------------------------------|--------------------------|--------|--------------|----------------| | Task | Search | POS | Success Rate | Average Number | | of Queries | Average Perturbed Word % | | | | | gradient | 98.74 | 34.68 | 7.58 | | | gradient | x | 98.72 | 26.78 | 7.06 | | delete | 99.42 | 55.38 | 7.38 | | | delete | x | 99.21 | 48.03 | 6.80 | | unk | 99.32 | 55.11 | 7.12 | | | unk | x | 99.27 | 47.62 | 6.76 | | wt-saliency x | 99.19 | 407.43 | 6.71 | | | genetic | x | 92.67 | 846.41 | 8.73 | | beam | x | 99.68 | 658.55 | 6.95 | | Jigsaw-BL | gradient | 97.62 | 38.45 | 8.36 | | gradient | x | 97.72 | 29.78 | 7.56 | | delete | 98.71 | 57.60 | 7.54 | | | delete | x | 98.63 | 49.93 | 6.99 | | unk | 98.75 | 57.08 | 7.70 | | | unk | x | 98.75 | 49.38 | 6.96 | | wt-saliency x | 98.51 | 419.58 | 6.91 | | | genetic | x | 88.91 | 876.81 | 8.82 | | beam | x | 99.54 | 756.05 | 7.19 | | Jigsaw-ML | gradient | 67.16 | 63.51 | 24.13 | | gradient | x | 67.56 | 49.67 | 24.08 | | delete | 71.46 | 58.78 | 18.96 | | | delete | x | 71.46 | 48.17 | 19.37 | | unk | 72.38 | 58.99 | 18.80 | | | unk | x | 72.23 | 48.04 | 19.43 | | wt-saliency x | 74.71 | 178.91 | 18.81 | | | genetic | x | 80.49 | 1025.66 | 21.86 | | beam | x | 90.07 | 442.18 | 18.76 | | Table 5: Effect of different search strategies on attack HTweet-MC | | | | | | Training | AUC | AP | F1 | Recall | |------------|-------|-------|-------|----------| | No AT | 0.935 | 0.786 | 0.73 | 0.71 | | AT1-delete | 0.936 | 0.792 | 0.74 | 0.719 | | AT1-unk | 0.938 | 0.785 | 0.738 | 0.723 | | AT2 | 0.932 | 0.778 | 0.685 | 0.641 | ## 3.2 **Comparing Constraints In** Toxictrap Then we study the effect of the part-of-speech match (POS) constraint on the attack performance. Table 5 shows that the use of POS constraint lowers average number of queries sent to the victim model. We observe this phenomena across all three tasks and all three search methods (gradient , delete , unk ). For example, when attacking Jigsaw-ML model using unk with and without POS constraint, average number of queries are 49.38 and 57.08, respectively. We also observe that for most of the recipes, using POS constraint slightly decreases attack success rate (ASR). 
Considering the empirical results in Table 5 and the anecdotal examples in Table 2, we conclude POS constraint is preferred. | Attack | | | | | | |---------------------------------------------------------------|--------------------------|-------|--------------|----------------|-------| | Task | Transformation | N | Success Rate | Average Number | | | of Queries | Average Perturbed Word % | | | | | | wordnet | - | 89.59 | 34.87 | 6.66 | | | glove | 5 | 86.04 | 30.55 | 7.06 | | | glove | 20 | 96.75 | 41.87 | 6.68 | | | glove | 50 | 98.38 | 64.17 | 6.55 | | | mlm | 20 | 93.29 | 39.57 | 6.55 | | | Jigsaw-BL | wordnet | - | 87.72 | 37.01 | 6.81 | | glove | 5 | 84.33 | 32.15 | 7.47 | | | glove | 20 | 95.92 | 43.84 | 6.89 | | | glove | 50 | 97.91 | 67.09 | 6.68 | | | mlm | 20 | 92.73 | 41.33 | 6.62 | | | Jigsaw-ML | wordnet | - | 56.66 | 32.08 | 17.94 | | glove | 5 | 33.65 | 21.82 | 23.06 | | | glove | 20 | 66.70 | 41.06 | 18.85 | | | glove | 50 | 69.99 | 81.14 | 18.18 | | | mlm | 20 | 65.23 | 33.66 | 21.30 | | | Table 6: Comparing synonym transformations only. No HTweet-MC | | | | | | Attack Average Average Training Search Success Number of Perturbed Rate Queries Word % No AT delete 71.46 48.17 19.37 No AT unk 72.23 48.04 19.43 AT1-delete delete 21.97 69.68 28.83 AT1-unk unk 11.17 74.71 33.07 AT2 delete 8.28 75.71 27.89 AT2 unk 6.08 77.91 33.24 Table 8: Effect of adversarial training on attack performance. Results for HTweet-MC model are reported. When attacking, ToxicTrap uses glove with N = 20 plus character transformations; constraint with POS; and search with two different greedy search methods. ## 3.3 **Comparing Search In** Toxictrap Table 5 also compares the effect of using different search methods on the attack performance. It shows that a greedy search method is preferred over genetic and beam . For example, when compared to unk , genetic and beam require almost 10x as many queries on average for all three tasks. The beam search results in higher ASR values on all three tasks, while genetic only outperforms greedy methods when attacking HTweet-MC . In addition, attacking Jigsaw-BL and Jigsaw-ML , beam only slightly outperforms greedy methods. Among the greedy search methods, unk is a good choice, as it provides consistently good ASR performance on all three tasks. It is worth noting that unk outperforms other three greedy search methods, except for wt-saliency when attacking HTweet-MC . However, attacking HTweet-MC model with wt-saliency requires more than 3x as many queries as the unk method. ## 3.4 **Comparing Synonym Transformations** Now we compare word synonym substitutions, when the unk search method is used and the character manipulation are not (to single out the effect). Table 6 shows that glove with N = 20 nearest neighbors is an optimal choice for all three tasks. We observe that the wordnet and mlm transformations result in lower ASRs than glove . Also, glove with N = 50 only slightly lifts ASRs when compared to glove with N = 20. At the same time, using N = 50 nearest neighbors sends over 50% more queries to the victim models. We include the analysis of using different transformations with three different search methods (delete , unk , wt-saliency ) in the Appendix in Table 10. These results also confirm that the choice of glove with N = 20 is preferred. ## 3.5 **Results From Adversarial Training** Empirically, we explore how AT1 and AT2 impact both prediction performance and adversarial robustness. 
Table 7 and Table 8 present AT results on the HTweet-MC task (Table 12 and Table 13 on two other tasks). In Table 7, we observe that AT1-delete and AT1-unk both maintain the regular prediction performance as the base model. Table 8 shows the attack success metrics when we use ToxicTrap to attack the retrained robust HTweet-MC models. The AT1 models trained from using AT1-delete and AT1-unk attacks show significant improvements in robustness after AT1 adversarial training. We recommend readers to use AT1-unk as their default choice for hardening general toxic language predictors, since in both tables, AT1-unk outperforms AT1-delete slightly. For the AT2 robust model, ToxicTrap attacks are "unseen" (we used the five ToxicTraps Extend attacks from Section B to create the AT2 model in our experiments). Our results show AT2 can harden HTweet-MC model not only against attacks it is trained on (ToxicTraps Extend ) but also against attacks it has not seen before (ToxicTrap ). This could be attributed to the hypothesis that an unseen attack may share similar underlying patterns with the attack ensemble that AT2 model has used. In Table 7, AT2 slightly under-performs base on regular predictions, since it was trained with more adversarial examples from multiple attacks. ## 4 **Conclusion** Text toxicity prediction models are not designed to operate in the presence of adversaries. This paper proposes a suite of ToxicTrap attacks to identify weaknesses in SOTA toxicity language predictors that could potentially be exploited by attackers. We also evaluate how adversarial training improves model robustness across seen and unseen attacks. As next steps, we plan to investigate other strategies like virtual adversarial training, disentangled representation learning or generative methods and pinpoint how they will influence the robustness of toxicity predictors (Qiu et al., 2022). ## References 2018. Toxic comment classification challenge. https://www.kaggle.com/competitions/ jigsaw-toxic-comment-classification-challenge. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. *arXiv preprint arXiv:1804.07998*. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian J. Goodfellow, Aleksander Madry, and Alexey Kurakin. 2019. On evaluating adversarial robustness. CoRR, abs/1902.06705. Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. *CoRR*, abs/1803.01128. Home Affairs Committee et al. 2017. Hate crime: Abuse, hate and extremism online. *Fourteenth Report of Session 2016*, 17. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17, pages 512–515. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. *CoRR*, abs/1810.04805. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 
2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Grbovic, Vladan Radosavljevic, and Narayan Bhamidipati. 2015. Hate speech detection with comment embeddings. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion, page 29–30, New York, NY, USA. Association for Computing Machinery. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotflip: White-box adversarial examples for text classification. In ACL. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. *2018* IEEE Security and Privacy Workshops (SPW), pages 50–56. Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text classification. Sreyan Ghosh and Sonal Kumar. 2021. Cisco at semeval-2021 task 5: What's toxic?: Leveraging transformers for multiple toxic span extraction from online comments. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. *arXiv preprint arXiv:1412.6572*. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509. Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Deceiving google's perspective API built for detecting toxic comments. CoRR, abs/1702.08138. Mai Ibrahim, Marwan Torki, and Nagwa El-Makky. 2018. Imbalanced toxic comments classification using data augmentation and deep learning. In *2018* 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 875–878. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural language attack on text classification and entailment. ArXiv, abs/1907.11932. NF Johnson, R Leahy, N Johnson Restrepo, N Velasquez, M Zheng, P Manrique, P Devkota, and Stefan Wuchty. 2019. Hidden resilience and adaptive dynamics of the global online hate ecology. *Nature*, 573(7773):261–265. Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2019. Textbugger: Generating adversarial text against real-world applications. *ArXiv*, abs/1812.05271. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. *CoRR*, abs/1907.11692. Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. 2019. Hate speech detection: Challenges and solutions. PLoS ONE, 14:1–16. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In *International Conference on Learning* Representations. 
John Morris, Eli Lifland, Jin Yong Yoo, and Yanjun Qi. 2020a. TextAttack: A framework for adversarial attacks in natural language processing. *ArXiv*, abs/2005.05909. John X. Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020b. Reevaluating adversarial examples in natural language. Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive language detection in online user content. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, page 145–153, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Shilin Qiu, Qihe Liu, Shijie Zhou, and Wen Huang. 2022. Adversarial attack and defense technologies in natural language processing: A survey. *Neurocomputing*, 492:278–307. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085– 1097, Florence, Italy. Association for Computational Linguistics. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations. Zhongguo Wang and Bao Zhang. 2021. Toxic comment classification based on bidirectional gated recurrent unit and convolutional neural network. *ACM Trans.* Asian Low-Resour. Lang. Inf. Process., 21(3). Tong Xiang, Sean MacAvaney, Eugene Yang, and Nazli Goharian. 2021. Toxccin: Toxic content classification with interpretability. Jin Yong Yoo and Yanjun Qi. 2021. Towards improving adversarial training of nlp models. Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Çagrı Çöltekin. ˘ 2020. SemEval-2020 task 12: Multilingual offensive language identification in social media (OffensEval 2020). In *Proceedings of the Fourteenth Workshop on* Semantic Evaluation, pages 1425–1447, Barcelona (online). International Committee for Computational Linguistics. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In *Proceedings of the 58th Annual Meeting of the Association for Computational* Linguistics, pages 6066–6080, Online. Association for Computational Linguistics. Wei Emma Zhang, Quan Z. Sheng, and Ahoud Abdulrahmn F. Alhazmi. 2019. Generating textual adversarial examples for deep learning models: A survey. CoRR, abs/1901.06796. Zhixue Zhao, Ziqi Zhang, and Frank Hopfgartner. 2019. Detecting toxic content online and the effect of training data on classification performance. *EasyChair* Preprints. ## A **Appendix** A.1 **Related Works** To the authors' best knowledge, the toxicity NLP literature includes very limited studies on adversarial attacks. Only one study from Hosseini et al. (2017) tried to deceive Google's perspective API for toxicity identification by misspelling the abusive words or by adding punctuation between the letters. 
This paper tries to conduct a comprehensive study by introducing a wide range of novel attack recipes and improving adversarial training to enable robust text toxicity predictors. Next, we also want to point out existing studies on text adversarial examples center in generating adversarial examples against binary and multi-class classification models (Morris et al., 2020a). To the authors' best knowledge, no multilabel adversarial attacks exist in the NLP literature. Our work is the first that designs novel attacks against the multilabel toxicity predictors. The design of multilabel adversarial examples is challenging since coordinating multiple labels all at once and quantifying the attacking goals is tricky because it is harder to achieve multiple targeted labels. Simply adapting attacks for binary or multiclass models (Morris et al., 2020a) to multilabel setup is not feasible. In multilabel prediction, each instance can be assigned to multiple labels. This is different from the multi-class setting in which classes are mutually exclusive and one sample can only associate to one class (label). The existence of multiple labels at the same time provides better opportunities for attackers, but also posts design challenges. Our design in Equation (3) and Algorithm 1 has paved a path for multilabel text adversarial example research. ## A.2 **Toxicity Detection From Text** The mass growth of social media platforms has enabled efficient exchanges of opinions and ideas between people with diverse background. However, this also brings in risk of user generated toxic contents that may include abusive language, hate speech or cyberbullying. Toxic content may lead to incidents of hurting individuals or groups (Johnson et al., 2019), calling for automated tools to detect toxicity for maintaining healthy online communities. Automatic content moderation uses machine learning techniques to detect and flag toxic content It is critical for online platforms to prohibit toxic language, since such content makes online communities unhealthy and may even lead to real crimes (Johnson et al., 2019; Committee et al., 2017). Past literature on toxicity language detection has centered on a supervised classification formulation (Zhao et al., 2019; Djuric et al., 2015; Nobata et al., 2016; MacAvaney et al., 2019). We denote F : *X → Y* as a supervised classification model, for example, a deep neural network classifier. X denotes the input language space and Y represents the output space. For a sample (x, y), x ∈ X denotes the textual content 6and y ∈ Y denotes its toxicity label(s). The toxicity detection task varies with what y stands for. The literature has included three main cases: (1) binary toxicity detector, here y from {0: benign,1:toxic}; (2) multilabel toxicity detection which assigns a set of target labels for x. Here y = {y1, y2*, ..., y*L} is a vector of L binary labels and each yi ∈ {0, 1} (Zhao et al., 2019). For example in the Jigsaw dataset (Jig, 2018), each text sample associates with six binary labels per sample, namely {benign, obscene, identity attack,insult,threat, and sexual explicit}; (3) multiclass toxicity classification, and y is a discrete integer. For example, the Offensive Tweet dataset (Davidson et al., 2017) has three classes - {0:benign, 1:offensive, and 2:hate} (one class per sample). 
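To make the difference between the multilabel and multiclass formulations concrete, the following small sketch contrasts the two output heads; the checkpoints are placeholders rather than the paper's trained models, and the label layouts follow the Jigsaw and Offensive Tweet descriptions above.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

text = "an example comment"
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Multilabel (Jigsaw): six independent binary labels, scored with sigmoids.
ml_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6,
    problem_type="multi_label_classification")
ml_logits = ml_model(**tok(text, return_tensors="pt")).logits
ml_scores = torch.sigmoid(ml_logits)          # shape (1, 6); labels may co-occur
# ToxicTrap's multilabel goal (Eq. 3 / Algorithm 1): benign score above 0.5
# AND every toxic-label score below 0.5.

# Multiclass (Offensive Tweet): three mutually exclusive classes, softmax.
mc_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)
mc_logits = mc_model(**tok(text, return_tensors="pt")).logits
mc_probs = torch.softmax(mc_logits, dim=-1)   # shape (1, 3); sums to 1
# ToxicTrap's multiclass goal (Eq. 2): the argmax flips to class 0 ("benign").
```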
The literature on toxicity has been mostly focused on improving accuracy, via feature engineering (Zampieri et al., 2020), deep representation learning (Wang and Zhang, 2021) and via fine tuning from pretrained large language models (Ghosh and Kumar, 2021). Recently literature has extended to investigate these classifiers' interpretability (Xiang et al., 2021) and fairness (Hartvigsen et al., 2022). ## A.3 **Basics Of Text Adversarial Examples** Research has shown that current deep neural network models lack the ability to make correct predictions on adversarial examples (Szegedy et al., 2014). The field of investigating the adversarial robustness of NLP models has seen growing interest, 6this paper focuses on toxicity detection from text Search unk unk unk unk unk POS x x x x Transformation (N) glove (20) + chars glove (20) + chars wordnet (-) mlm (20) glove (50) Seed Text My neighbors hate me cuz i smoke weed and blast the most ghetto music f... (0.554) My neighbors hate me cuz i smoke pot and blast the most ghetto music from... thank god FOB didnt come back doing some **shitty** arena tour... | Seed Text My neighbors hate me cuz i smoke weed and blast the most ghetto music f... thank god FOB didnt come back doing some shitty arena tour... | (0.554) My neighbors hate me cuz i smoke pot and blast the most ghetto music from... | (0.554) My neighbors hate me cuz i smoke pot and blast the most ghetto music from... | |------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | Jews and white race traitors have to be swept clean so the Aryan race ca... | (0.911) thank god FOB didnt come back doing some bollocks arena tour... | (0.906) thank god FOB didnt come back doing some lousy arena tour... | | How ghetto girls pronounce there names... I could cry right now oh my g... | (0.588) Jews and white race traitors have to be swept clean so the Aryan race can... | (0.588) Jews and white race traitors have to be swept clean so the Aryan race can... | | America is white trash and so are all Americans. | (0.474) How ghetto females pronounce there names... I could cry right now oh my g... | (0.474) How ghetto females pronounce there names... I could cry right now oh my g... | | Fluffy is a retard homeboy is calling a baby rattlesnake friend and kis... | (0.863) America is white trash and so are all Americans. | (0.863) America is white trash and so are all Americans. | | More queer lady Templars. That's my point. | (0.566) Fluffy is a retarded homeboy is calling a baby rattlesnake friend and kis... | (0.520) Fluffy is a moron homeboy is calling a baby rattlesnake friend and kissin... | | ugh your with Tebow? You're officially a fag now and in followed. | (0.908) More fairy lady Templars. That's my point. | (0.908) More fairy lady Templars. That's my point. | | (0.897) ugh your with Tebow? You're officially a poof now and in followed. | (0.897) ugh your with Tebow? You're officially a poof now and in followed. | | (0.910) thank god FOB didnt come back doing some foul arena tour... Jews and white race traitors have to be swept clean so the Aryan race ca... (0.477) Jews and white race **betrayers** have to be swept clean so the Aryan race ca... How ghetto **girls** pronounce there names... I could cry right now oh my g... 
(0.474) How ghetto females pronounce there names... I could cry right now oh my g... America is **white** trash and so are all Americans. (0.897) America is pristine trash and so are all Americans. Fluffy is a **retard** homeboy is calling a baby rattlesnake friend and kis... (0.520) Fluffy is a moron homeboy is calling a baby rattlesnake friend and kissin... More **queer** lady Templars. That's my point. (0.910) More cheerful lady Templars. That's my point. ugh your with Tebow? You're officially a fag now and in followed. | (0.554) My neighbors hate me cuz i smoke pot and blast the most ghetto music from... (0.917) thank god FOB didnt come back doing some stinky arena tour... (0.507) Jews and white race betrayer have to be swept clean so the Aryan race can... (0.588) How ghetto daughter pronounce there names... I could cry right now oh my ... (0.899) America is ashen trash and so are all Americans. (0.520) Fluffy is a moron homeboy is calling a baby rattlesnake friend and kissin... (0.910) More curious lady Templars. That's my point. (0.908) ugh your with Tebow? You're officially a fatigue now and in followed. | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | (0.554) My neighbors hate me cuz i smoke pot and blast the most ghetto music from... (0.908) thank god FOB didnt come back doing some wild arena tour... (0.557) Jews and white race members have to be swept clean so the Aryan race can ... (0.534) How ghetto ers pronounce there names... I could cry right now oh my god h... (0.784) America is utter trash and so are all Americans. (0.915) Fluffy is a child homeboy is calling a baby rattlesnake friend and kissin... (0.901) More likely lady Templars. That's my point. (0.907) ugh your with Tebow? You're officially a wolf now and in followed. | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| (0.907) ugh your with Tebow? You're officially a **poofter** now and in followed. Table 9: Selected Toxic Adversarial Examples. Here we only show adversarial examples generated by attacking base model HTweet-MC on Offensive Tweet text, since it contains much shorter messages than Jigsaw. Perturbed scores for adversarial examples are reported in parenthesis. Adversarial examples were generated using unk greedy search method. POS row marked with an "x" indicates that the part-of-speech matching constraint was used. 
Transformation row indicates which word substitution method was used (glove , wordnet , mlm ), and number of nearest neighbors N is specified in parenthesis. chars indicates that character transformations were applied. We used different word transformations for synonym substitution with varying number of nearest neighbors (20 or 50). Two recipes used character transformations, while the other three did not. Also, one recipe did not use part-of-speech match (POS) constraint, and it was included in the rest of recipes. All five recipes used unk greedy search method. with a body of new adversarial attacks7 designed to fool question answering (Jia and Liang, 2017), machine translation (Cheng et al., 2018), and text classification systems (Ebrahimi et al., 2017; Jia and Liang, 2017; Alzantot et al., 2018; Jin et al., 2019; Ren et al., 2019; Zang et al., 2020; Garg and Ramakrishnan, 2020). ## A.4 **Datasets Details** A.4.1 **Jigsaw.** This dataset was derrived from the Wikipedia Talk Pages dataset published by Google and Jigsaw on Kaggle (Jig, 2018). Wikipedia Talk Page allows users to discuss improvements to articles via comments. The comments are anonymized and labeled with toxicity levels. Here "obscene", "threat", "insult" and "identity hate" are four sub-labels for "toxic" and "severe toxic" (hence may co-occur for a comment). The "toxic" comments that are not | Task | Search | Transformation | N | Attack Success | Average Number | Average | Time | |------------------------------------------------------------------------------------------------------------------------|------------|------------------|--------|------------------|------------------|-----------|--------| | Rate | of Queries | Perturbed Word % | (s) | | | | | | wordnet | - | 89.29 | 35.35 | 6.68 | 836 | | | | glove | 5 | 85.61 | 30.77 | 7.04 | 765 | | | | glove | 20 | 96.58 | 42.37 | 6.71 | 906 | | | | glove | 50 | 98.27 | 65.05 | 6.55 | 1355 | | | | mlm | 20 | 93.22 | 40.21 | 6.56 | 1882 | | | | delete | wordnet | - | 89.59 | 34.87 | 6.66 | 822 | | | glove | 5 | 86.04 | 30.55 | 7.06 | 749 | | | | glove | 20 | 96.75 | 41.87 | 6.68 | 888 | | | | glove | 50 | 98.38 | 64.17 | 6.55 | 1278 | | | | mlm | 20 | 93.29 | 39.57 | 6.55 | 1727 | | | | Jigsaw-BL | unk | wordnet | - | 90.27 | 152.05 | 6.6 | 4491 | | glove | 5 | 86.74 | 96.98 | 7.0 | 3646 | | | | glove | 20 | 96.99 | 288.58 | 6.65 | 7964 | | | | glove | 50 | 98.65 | 642.58 | 6.53 | 17152 | | | | mlm | 20 | 93.88 | 254.81 | 6.58 | 20093 | | | | wt-saliency | wordnet | - | 87.29 | 37.05 | 6.83 | 959 | | | glove | 5 | 83.81 | 32.21 | 7.46 | 857 | | | | glove | 20 | 95.75 | 44.27 | 6.9 | 1046 | | | | glove | 50 | 97.75 | 68.03 | 6.69 | 1665 | | | | mlm | 20 | 92.76 | 41.83 | 6.63 | 2156 | | | | delete | wordnet | - | 87.72 | 37.01 | 6.81 | 981 | | | glove | 5 | 84.33 | 32.15 | 7.47 | 865 | | | | glove | 20 | 95.92 | 43.84 | 6.89 | 997 | | | | glove | 50 | 97.91 | 67.09 | 6.68 | 1530 | | | | mlm | 20 | 92.73 | 41.33 | 6.62 | 2011 | | | | Jigsaw-ML | unk | wordnet | - | 88.55 | 157.43 | 6.74 | 5331 | | glove | 5 | 84.98 | 100.33 | 7.36 | 3993 | | | | glove | 20 | 96.09 | 297.63 | 6.86 | 8965 | | | | glove | 50 | 97.89 | 662.08 | 6.65 | 19191 | | | | mlm | 20 | 93.45 | 262.76 | 6.72 | 22787 | | | | wt-saliency | wordnet | - | 56.46 | 32.5 | 17.99 | 333 | | | glove | 5 | 33.0 | 21.96 | 23.19 | 283 | | | | glove | 20 | 66.29 | 41.47 | 18.86 | 375 | | | | glove | 50 | 69.29 | 82.19 | 18.15 | 659 | | | | mlm | 20 | 64.98 | 34.34 | 21.34 | 862 | | | | delete | wordnet | - | 56.66 | 32.08 | 
17.94 | 329 | | | glove | 5 | 33.65 | 21.82 | 23.06 | 284 | | | | glove | 20 | 66.7 | 41.06 | 18.85 | 378 | | | | glove | 50 | 69.99 | 81.14 | 18.18 | 655 | | | | mlm | 20 | 65.23 | 33.66 | 21.3 | 851 | | | | HTweet-MC | unk | wordnet | - | 55.96 | 80.57 | 17.15 | 752 | | glove | 5 | 34.31 | 44.73 | 22.46 | 575 | | | | glove | 20 | 67.82 | 130.65 | 18.2 | 1073 | | | | glove | 50 | 71.31 | 296.44 | 17.71 | 2137 | | | | mlm | 20 | 65.48 | 104.41 | 21.2 | 2780 | | | | Table 10: Effect of word transformations on attack performance. Comparing synonym transformations only. No wt-saliency | | | | | | | | wordnet - 55.96 80.57 17.15 752 glove 5 34.31 44.73 22.46 575 glove 20 67.82 130.65 18.2 1073 glove 50 71.31 296.44 17.71 2137 mlm 20 65.48 104.41 21.2 2780 Table 10: Effect of word transformations on attack performance. Comparing synonym transformations only. No character transformations used. Reporting attack performance when using delete , unk , and wt-saliency greedy search. The same constraints as in Table 5 with POS (part-of-speech) match. "obscene", "threat", "insult" and "identity hate" are assigned to either "toxic" or "severe toxic". Comments that are not assigned any of the six toxicity labels get int "non toxic". ## A.4.2 **Offensive Tweet.** The authors of (Davidson et al., 2017) used crowdsourced hate speech lexicon from Hatebase.org to collect tweets containing hate speech keywords. Then they used crowd-sourcing to label these tweet samples into three categories: those containing hate speech, only offensive language, and those with neither. ## A.5 **Base Model Setup:** We build three base models, including {Jigsaw-BL , Jigsaw-ML , HTweet-MC } to cover three types of toxicity prediction tasks. Table 4 presents the choice of task, training/test data, transformer architecture and learning rate plus number of epochs. We use "distilbert-base-uncased" pre-trained transformers model for DistilBERT architecture. For BERT architecture, we use "GroNLP/hateBERT" pre-trained model. All texts are tokenized up to the first 128 tokens. The train batch size is 64 and we use AdamW optimizer with 50 warm-up steps and early stopping with patience 2. The models are trained on NVIDIA T4 Tensor ## Algorithm 2 Adversarial Training With At2 Input: Set of all attack recipes Srecipes, number of attack recipes to exclude Nexc, set of attack recipes to use for adversarial training Sattack (created by choosing (|Srecipes| − Nexc) attack recipes from the set Srecipes), number of clean epochs Nclean, number of adversarial epochs Nadv, percentage of dataset to attack γ, attack A(θ, x, y), and training data D = {(x (i), y (i))} n i=1 Output: model weights θ 1: Initialize model weights θ 2: for clean epoch= 1*, . . . , N*clean do 3: Train θ on D 4: **end for** 5: Initialize D′ ← D 6: for attack recipe in Sattack do 7: Dadv *← {}* 8: i ← 1 9: **while** |Dadv| < γ ∗ |D| and i ≤ |D| do 10: x (i) adv ← A(θ, x (i), y (i)) 11: Dadv ← Dadv ∪ {(x (i) adv, y (i))} 12: i ← i + 1 13: **end while** 14: D′ ← D′ ∪ Dadv 15: **end for** 16: Randomly shuffle D′ 17: for adversarial epoch= 1*, . . . , N*adv do 18: Train θ on D′ 19: **end for** Core GPUs and NVIDIA Tesla V100 GPUs with 16 GB memory, 2nd generation Intel Xeon Scalable Processors with 32GB memory and high frequency Intel Xeon Scalable Processor with 61GB memory. ## A.6 **Adversarial Training With Single Or** Multiple Attacks Adversarial training has been a major defense strategy in most existing work for improving adversarial robustness (Madry et al., 2018). 
The vanilla adversarial training process involves augmenting the training data with adversarial examples generated by perturbing the training data in the input space. Let L(F, x, y) represent the loss function on input text x and label y, and let A(F, x, y) be the adversarial attack that produces adversarial example x′. The vanilla adversarial training objective is then:

$$\arg\min_{\mathcal{F}}\;\mathbf{E}_{(\mathbf{x},\mathbf{y})\sim D}\big[\mathbf{L}(\mathcal{F}(\mathbf{x}),\mathbf{y})+\mathbf{L}(\mathcal{F}(\mathbf{x}^{\prime}=\mathbf{A}(\mathcal{F},\mathbf{x},\mathbf{y})),\mathbf{y})\big]\tag{4}$$

Adversarial training uses both clean8 and adversarial examples to train a model. It aims to minimize both the loss on the original training dataset and the loss on the adversarial examples. In recent NLP studies, adversarial training (AT) is typically performed only to show that such training can make models more resistant to the attack they were originally trained with. This observation is not surprising. The literature has pointed out the importance of robustness against unseen attacks, and it is generally recommended to use different attacks to evaluate the effectiveness of a defense strategy (Carlini et al., 2019).

Adversarial Training with Multiple Attacks (AT2): A simple strategy to revise vanilla AT is to train a model using both clean examples and adversarial examples from different attacks. This strategy, which we call Adversarial Training with Multiple Attacks (AT2), trains a target model on a combination of adversarial examples from multiple recipes. AT2 aims to help a model become more robust not only against the attacks it is trained on but also against attack recipes it has not seen before. Algorithm 2 presents our pseudo code of AT2.

AT1 vs. AT2: In the rest of the paper, we refer to vanilla adversarial training as single adversarial training (AT1). In Section 3 and Section B, our results show that models trained with AT2 can be more effective in protecting against unseen text adversarial attacks compared with AT1 models trained on the same attack. This could be attributed to the hypothesis that an unseen attack may share similar underlying attributes and patterns with the attack ensemble that the model is trained on.

Selecting which attacks to use in AT2 is important. Part of the reason we adapt the five popular recipes in Table 14 to ToxicTraps Extend is that these attacks cover a good range of popular transformations and constraints. Table 14 includes three word-based attacks and two character-based attacks, with TT-TBug combining character- and word-level transformations. In our experiments, we simulated potential AT2 use cases by leaving one attack out as "unseen" and training AT2 models on the rest. For instance, when a target model never uses examples from TT-TFool in training, the AT2-trained model may already have learned about similar word transformations, since similar transformations are used by other attacks in the ensemble.

8 *Clean* examples refer to the original training examples.
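To make Algorithm 2 concrete, the following is a minimal Python sketch of the AT2 procedure. The `train_one_epoch` helper and the `attacks` callables (each wrapping one ToxicTrap recipe) are assumed interfaces for illustration, not the exact implementation used in the experiments above.

```python
import random

def adversarial_training_at2(model, train_data, attacks, n_clean_epochs,
                             n_adv_epochs, gamma, train_one_epoch):
    """Sketch of Algorithm 2 (AT2): adversarial training with multiple attacks.

    `attacks` is a list of callables; each takes (model, x, y) and returns a
    perturbed text x_adv (or None if the attack fails).  `train_one_epoch`
    fits `model` on a list of (text, label) pairs for one epoch.  Both are
    assumed helpers, not part of the original codebase.
    """
    # 1) Warm-up on clean data only (the "clean epochs" of Algorithm 2).
    for _ in range(n_clean_epochs):
        train_one_epoch(model, train_data)

    # 2) Augment the training set with adversarial examples from every recipe,
    #    attacking at most gamma * |D| examples per recipe.
    augmented = list(train_data)
    budget = int(gamma * len(train_data))
    for attack in attacks:
        adv_examples = []
        for x, y in train_data:
            if len(adv_examples) >= budget:
                break
            x_adv = attack(model, x, y)
            if x_adv is not None:
                adv_examples.append((x_adv, y))
        augmented.extend(adv_examples)

    # 3) Continue training on the shuffled mixture of clean + adversarial data.
    random.shuffle(augmented)
    for _ in range(n_adv_epochs):
        train_one_epoch(model, augmented)
    return model
```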
| Toxic | | | | | | |-----------------------------------------------------------------------------------------------------------------------------------------|---------------------|----------|----------|-------|-------| | Base Model | Search | POS | Examples | | | | Attacked (%) | Attack Success Rate | Average | | | | | Number of Queries | Average Perturbed | Time (s) | | | | | Word | | | | | | | 98.74 | 34.68 | 7.58 | 849 | | | | gradient | x | 98.72 | 26.78 | 7.06 | 883 | | delete | 99.42 | 55.38 | 7.38 | 915 | | | delete | x | 99.21 | 48.03 | 6.8 | 1070 | | unk | x | 99.27 | 47.62 | 6.76 | 939 | | wt-saliency | x | 99.19 | 407.43 | 6.71 | 10653 | | genetic | x | 92.67 | 846.41 | 8.73 | 21504 | | beam | x | 99.68 | 658.55 | 6.95 | 16043 | | gradient | | | | | | | Jigsaw-BL | 59.76 | 97.62 | 38.45 | 8.36 | 1066 | | gradient | x | 97.72 | 29.78 | 7.56 | 1151 | | delete | 98.71 | 57.6 | 7.54 | 1112 | | | delete | x | 98.63 | 49.93 | 6.99 | 1168 | | unk | x | 98.75 | 49.38 | 6.96 | 1141 | | wt-saliency | x | 98.51 | 419.58 | 6.91 | 12849 | | genetic | x | 88.91 | 876.81 | 8.82 | 26137 | | beam | x | 99.54 | 756.05 | 7.19 | 21290 | | gradient | | | | | | | Jigsaw-ML | 65.46 | 67.16 | 63.51 | 24.13 | 554 | | gradient | x | 67.56 | 49.67 | 24.08 | 581 | | delete | 71.46 | 58.78 | 18.96 | 412 | | | delete | x | 71.46 | 48.17 | 19.37 | 425 | | unk | x | 72.23 | 48.04 | 19.43 | 425 | | wt-saliency | x | 74.71 | 178.91 | 18.81 | 1467 | | genetic | x | 80.49 | 1025.66 | 21.86 | 6560 | | beam | x | 90.07 | 442.18 | 18.76 | 2938 | | Table 11: Effect of with or without part-of-speech constraints when combining with different search strategies gradient HTweet-MC 96.62 | | | | | | | Task | Training | Search | Attack | Average | Average Perturbed | Time (s) | |-----------------------------------------------------------------------------------------------------------------------------|------------|----------|----------|-----------|---------------------|------------| | Success | Number | | | | | | | Rate | of Queries | Word | | | | | | Jigsaw-BL | No AT | unk | 99.27 | 47.62 | 6.76 | 939 | | AT1-unk | unk | 60.28 | 103.54 | 13.03 | 3546 | | | Jigsaw-ML | No AT | unk | 98.75 | 49.38 | 6.96 | 1141 | | AT1-unk | unk | 71.7 | 91.08 | 11.84 | 3106 | | | No AT | delete | 71.46 | 48.17 | 19.37 | 425 | | | No AT | unk | 72.23 | 48.04 | 19.43 | 425 | | | AT1-delete | delete | 21.97 | 69.68 | 28.83 | 598 | | | AT1-unk | delete | 11.17 | 74.71 | 33.07 | 641 | | | AT2 | delete | 8.28 | 75.71 | 27.89 | 672 | | | AT2 | unk | 6.08 | 77.91 | 33.24 | 681 | | | Table 12: Effect of adversarial training on attack performance on three tasks. When attacking, ToxicTrap uses the HTweet-MC | | | | | | | Task Training AUC AP F1 Recall Jigsaw-BL No AT 0.971 0.749 0.823 0.794 AT1-unk 0.971 0.733 0.823 0.811 Jigsaw-ML No AT 0.984 0.636 0.587 0.515 AT1-unk 0.984 0.633 0.594 0.535 No AT 0.935 0.786 0.730 0.710 AT1-delete 0.936 0.792 0.740 0.719 AT1-unk 0.938 0.785 0.738 0.723 AT2 0.932 0.778 0.685 0.641 Table 13: Effect of adversarial training on model performance. Macro-average metrics are reported. | HTweet-MC | |-------------| ## B **Extra On** Toxictrap **Extensions** B.1 **A Suite Of Attack Recipes To Extend** Toxictrap Morris et al. (2020b) splits each text adversarial attack into four parts: goal function, transformation, search strategy and constraints. With this modular design, new NLP attacks frequently consist of just or two new components and often re-use remaining components from past work. 
We follow this design process, using our newly proposed ToxicTrap goal functions to pair with popular choices of other three components from the literature to get a set of ToxicTraps Extend recipes. We select five popular attack recipes from the literature including DeepWordBug , TextBugger , A2T , PWWS and TextFooler that were popular recipes proposed to attack general language classifiers. By swapping these recipes' goal function with the three goal function we propose in Equation (2) and Equation (3), we construct 15 new ToxicTraps Extend attack recipes as shown in Table 14. This table categorizes different ToxicTraps Extend attack recipes, based on their goal functions, constraints, transformations and search methods. TT-Dbug and TT-TBug are adapted from two SOTA attack recipes DeepWordBug (Gao et al., 2018) and TextBugger (Li et al., 2019). They generate ToxicTraps Extend via character manipulations. These attacks perform character insertion, deletion, neighboring swap and replacements to change a word into one that a target toxicity detection model doesn't recognize. These character changes are designed to generate character sequences that a human reader could easily correct into those original words. Language semantics are preserved since human readers can correct the misspellings. We then select three other popular attacks A2T , PWWS and TextFooler (Yoo and Qi, 2021; Ren et al., 2019; Jin et al., 2019) to ToxicTraps Extend TT-A2T , TT-Pwws and TT-TFool attacks. These attacks generate adversarial examples via replacing words from the input with synonyms. These attacks aim to create examples that preserve semantics, grammaticality, and non-suspicion. They vary regarding the word transformation strategies they use (see Table 14 for details). All ToxicTraps Extend attack recipes use greedy based word importance ranking (GreedyWIR) strategy to search and determine what words to manipulate (with character changes or with synonym replacement). Lastly, these ToxicTraps Extend recipes also have difference in what languages constraints they employ to limit the transformations, for instance, TT-A2T puts limit on the number of words to perturb. TT-TBug uses universal sentence encoding (USE) similarity as a constraint. Table 15 presents a few generated ToxicTraps Extend examples from attacking base models using five ToxicTraps Extend attacks. | Attack Recipe | Constraints | Transformation | Search Method | | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------|-------------------------------------|-----------------|-------------------------------------| | TT-A2T | Sentencetransformers/all-MiniLML6-v2 sentence encoding cosine similarity > 0.9 † , Part-of-speech match, Ratio of number of words modified < 0.1 | Word Synonym Replacement. | Swap | | | words with their 20 nearest neighbors in the counter-fitted GLOVE word embedding space or optionally with those predicted by BERT MLM model. | Greedy-WIR (gradient-based) | | | | | (revised from (Yoo and Qi, 2021)) TT-TFool (revised from (Jin et al., 2019) | Word embedding cosine similarity > 0.5, Part-ofspeech match, USE sentence encoding angular similarity > 0.84 | Word Synonym Replacement. | Swap | | | words with their 50 nearest neighbors in the counter-fitted GLOVE word embedding space. 
| Greedy-WIR (deletion-based) | | | | | TT-Pwws (revised | from | (Ren | | | | et al., 2019)) | No special constraints | Word Synonym Replacement. | Swap | Greedy-WIR | | words with synonyms from WordNet. | (saliency) | | | | | TT-Dbug (revised from | (Gao | | | | | et al., 2018)) | Levenshtein edit distance | {Random Character Insertion, Random | | | | < 30 | Character Deletion, Random Character Swap, Random Character Replacement}* | Greedy-WIR (deletion-based) | | | | TT-TBug (revised from (Li et al., 2019)) | USE | sentence | encoding | {Random Character Insertion, Random | | cosine similarity > 0.8 | Character Deletion, Neighboring Character Swap, Character Substitution by Homoglyph ‡ , Word Synonym Replacement with 5 nearest neighbors in the counter-fitted GLOVE word embedding space}* | Greedy-WIR (deletion-based) | | | | Table 14: ToxicTraps Extend Attack recipes categorized regarding: search method, transformation, goal function, constraints. All attack recipes include an additional constraint which disallows the replacement of stopwords. Greedy search with Word Importance Ranking (Greedy-WIR). | | | | | Table 14: ToxicTraps Extend Attack recipes categorized regarding: search method, transformation, goal function, constraints. All attack recipes include an additional constraint which disallows the replacement of stopwords. Greedy search with Word Importance Ranking (Greedy-WIR). † indicates new features or enhancements we added to extend the capability of the original attack recipes proposed in the original papers * indicates a combination of multiple transformations. ‡ substitute a character with visually similar ones or those adjacent on keyboard. For example, substitute "1" with "l". | Seed Text | TT-A2T | TT-TFool | TT-Pwws | TT-Dbug | TT-TBug | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| | My neighbors hate me cuz i smoke weed and blast the most ghetto music f... | My neighbors hate me cuz i smoke weeds and blast the most ghetto music fr... | My neighbors hate me cuz i smoke smoke and blast the most ghetto music fr... | My neighbors hate me cuz i smoke wed and blast the most ghetto music fro... | | | | thank god FOB didnt come back doing some shitty arena tour. #iwaited5ye... | My neighbors hate me cuz i smoke pot and blast the most ghetto music from... | My neighbors hate me cuz i smoke wee and blast the most ghetto music from... | thank god FOB didnt come back doing some crappy arena tour. #iwaited5year... | | | | Fluffy is a retard homeboy is calling a baby rattlesnake friend and kis... | thank god FOB didnt come back doing some crappy arena tour. #iwaited5year... | thank god FOB didnt come back doing some foul arena tour. #iwaited5yearsf... | thank god FOB didnt come back doing some stinky arena tour. #iwaited5year... | thank god FOB didnt come back doing some sitty arena tour. #iwaited5years... 
| | | Fluffy is a changeling homeboy is calling a baby rattlesnake friend and k... | | | | | | | Fluffy is a brainless homeboy is calling a baby rattlesnake friend and ki... | Fluffy is a moron homeboy is calling a baby rattlesnake friend and kissin... | Fluffy is a Getard homeboy is calling a baby rattlesnake friend and kissi... | Fluffy is a retarded homeboy is calling a baby rattlesnake friend and kis... | | | | Typically hateful, anti-Christian, mentally ill and ugly embankment trash... | | | | | | | Typically hateful, anti-Christian, mentally ill and ugly dyke trash pig ... | Typically hateful, anti-Christian, mentally ill and ugly levee trash pig ... | Typically hateful, anti-Christian, mentally ill and ugly dike trash pig c... | Typically hateful, anti-Christian, mentally ill and ugly dmyke trash pig ... | Typically hateful, anti-Christian, mentally ill and ugly dike trash pig c... How ghetto gilrs pronounce there names... I could cry right now oh my god... | | | More queer lady Templars. That's my point. | How ghetto missy pronounce there names... I could cry right now oh my god... | How ghetto girsl pronounce there names... I could cry right now oh my god... | | | | | How ghetto girls pronounce there names... I could cry right now oh my g... | How ghetto females pronounce there names... I could cry right now oh my g... | How ghetto females pronounce there names... I could cry right now oh my g... | More homo lady Templars. That's my point. | | | | Table 15: Selected Toxic Adversarial Examples. We show adversarial examples generated by attacking base model More homo lady Templars. That's my point. More fairy lady Templars. That's my point. More curious lady Templars. That's my point. More qIeer lady Templars. That's my point. | | | | | |
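To illustrate how the modular pieces in Table 14 fit together, the sketch below implements a bare-bones deletion-based Greedy-WIR search with a WordNet synonym transformation and a stopword constraint. It uses a simplified untargeted goal (flipping a binary toxic prediction) rather than the multilabel goal functions of Equations (2) and (3), and `predict_toxic_prob` is a hypothetical callable standing in for the victim model; full recipes additionally apply the semantic and grammaticality constraints listed in the table.

```python
from nltk.corpus import wordnet as wn  # requires nltk's WordNet corpus

STOPWORDS = {"the", "a", "an", "and", "or", "is", "are", "to", "of", "in", "it"}

def wordnet_synonyms(word):
    """Candidate replacements drawn from WordNet lemmas (TT-Pwws-style transformation)."""
    cands = {l.name().replace("_", " ") for s in wn.synsets(word) for l in s.lemmas()}
    cands.discard(word)
    return cands

def greedy_wir_attack(text, predict_toxic_prob, max_candidates=50):
    """Deletion-based Greedy-WIR: rank words by how much deleting them lowers the
    toxic-class probability, then greedily swap the most important words with
    synonyms until the toxic prediction flips.  `predict_toxic_prob(text)` is a
    hypothetical callable returning P(toxic | text) from the victim model."""
    words = text.split()
    base = predict_toxic_prob(text)

    # Word importance = probability drop when the word is deleted.
    importance = []
    for i, w in enumerate(words):
        if w.lower() in STOPWORDS:
            importance.append((0.0, i))
            continue
        deleted = " ".join(words[:i] + words[i + 1:])
        importance.append((base - predict_toxic_prob(deleted), i))

    # Greedily perturb words in order of importance (stopwords are never replaced).
    for _, i in sorted(importance, reverse=True):
        if words[i].lower() in STOPWORDS:
            continue
        best_word, best_prob = words[i], predict_toxic_prob(" ".join(words))
        for syn in list(wordnet_synonyms(words[i]))[:max_candidates]:
            candidate = words[:i] + [syn] + words[i + 1:]
            p = predict_toxic_prob(" ".join(candidate))
            if p < best_prob:
                best_word, best_prob = syn, p
        words[i] = best_word
        if best_prob < 0.5:            # simplified untargeted goal: prediction flipped
            return " ".join(words)
    return None                        # attack failed within the search budget
```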
laskar-etal-2023-ai
AI Coach Assist: An Automated Approach for Call Recommendation in Contact Centers for Agent Coaching
https://aclanthology.org/2023.acl-industry.57
In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry is on the rise. One area where AI can have a significant impact is in the coaching of contact center agents. By analyzing call transcripts, AI can quickly determine which calls are most relevant for coaching purposes, and provide relevant feedback and insights to the contact center manager or supervisor. In this paper, we present "AI Coach Assist", which leverages the pre-trained transformer-based language models to determine whether a given call is coachable or not based on the quality assurance (QA) queries/questions asked by the contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an efficient and effective way to determine which calls are most relevant for coaching purposes. Extensive experimental evaluation demonstrates the potential of AI Coach Assist to improve the coaching process, resulting in enhancing the performance of contact center agents.
# Ai Coach Assist: An Automated Approach For Call Recommendation In Contact Centers For Agent Coaching Md Tahmid Rahman Laskar, Cheng Chen, Xue-Yong Fu, Mahsa Azizi, Shashi Bhushan TN, **Simon Corston-Oliver** Dialpad Canada Inc. Vancouver, BC, Canada {tahmid.rahman,cchen,xue-yong}@dialpad.com {mahsa.azizi,sbhushan,scorston-oliver}@dialpad.com ## Abstract In recent years, the utilization of Artificial Intelligence (AI) in the contact center industry is on the rise. One area where AI can have a significant impact is in the coaching of contact center agents. By analyzing call transcripts using Natural Language Processing (NLP) techniques, it would be possible to quickly determine which calls are most relevant for coaching purposes. In this paper, we present "AI Coach Assist", which leverages the pre-trained transformerbased language models to determine whether a given call is coachable or not based on the quality assurance (QA) questions asked by the contact center managers or supervisors. The system was trained and evaluated on a large dataset collected from real-world contact centers and provides an effective way to recommend calls to the contact center managers that are more likely to contain coachable moments. Our experimental findings demonstrate the potential of AI Coach Assist to improve the coaching process, resulting in enhancing the performance of contact center agents. ## 1 Introduction AI has the potential to revolutionize many industries, including the contact center industry. With the growing demand for high-quality customer service, contact centers are constantly seeking ways to improve their processes and enhance their agents' performance. One way to achieve this goal is by providing effective coaching and feedback to agents, which can help them identify areas of improvement and develop the necessary skills to provide exceptional customer service. As a common practice, contact center managers or supervisors manually select call recordings to listen in, and grade agents' performance using a rubric that contains questions such as "*did the agent greet the customer by name*" or "did the agent properly resolve the customer issue" to score the call in order to verify if the agent is following the company's preferred protocol. The grades given by the managers along with their comments are then shared with the agents to improve their performance. However, with the large volume of calls that contact centers receive, it is very challenging for managers or supervisors to determine which calls are most important for agent coaching. Thus, the traditional approaches to randomly select calls for agent coaching has the following limitations: - **Time-consuming process:** Coaching agents can be a time-consuming process, particularly for managers and supervisors who must manually review large numbers of calls to identify which calls are most relevant for coaching. - **Inefficient use of resources:** Without an efficient and effective process for determining which calls are most relevant for coaching, resources may be wasted on calls that are not critical for improving agent performance. This is where NLP could be useful. By analyzing call transcripts using NLP models, it could be possible to recommend calls to the contact center managers/supervisors that are most relevant for coaching purposes. 
This will lead to an improved coaching experience by prioritizing the calls for analysis that are more likely to contain coachable moments, resulting in saving time for the contact center managers as well as improving agent performance, ultimately leading to better customer satisfaction. For the purpose of improving realworld contact centers, we present the AI Coach Assist system to assist contact center managers or supervisors by suggesting calls that could be more useful for agent coaching. In this paper, we explore the concept of our proposed AI Coach Assist system, which leverages the advantage of fine-tuning a pre-trained transformerbased language model (Devlin et al., 2019; Sanh et al., 2019; Liu et al., 2019; Lan et al., 2020; 599 Zhong et al., 2022). Moreover, we provide a detailed overview of its development process (implementation and preparation of a balanced dataset to avoid biases), as well as our experimental findings. In addition, we demonstrate how it could be productionized in real-world contact centers to assist managers/supervisors. Note that our model does not automate the scoring of employee performance or replace human review. Instead, our model is intended to help contact center supervisors by recommending calls for coaching their employees instead of the traditional random sampling of calls. ## 2 Related Work The significant performance gain achieved via leveraging transformer-based language models (Vaswani et al., 2017; Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020) in a wide range of NLP tasks in recent years has also led to the use of transformer-based models in the contact center industry (Laskar et al., 2022b,c,a; Khasanova et al., 2022). The successful deployment of these models in industries has helped many organizations to enhance their processes, resulting in improved customer satisfaction. In recent years, several studies (Fu et al., 2022a,b) have explored the potential of AI-powered call analysis (e.g., entity recognition, sentiment analysis, etc.), along with providing realtime assistance to contact center agents. In addition to these studies, several commercial solutions have been developed that offer AIpowered call analysis and AI assistance for agents in contact centers. Some of these solutions also offer real-time feedback to agents during calls123, allowing them to adjust their behavior and improve their performance in real-time. However, to the best of our knowledge, there is no prior commercial application that assists contact center managers by suggesting calls that could be the most useful to coach agents. One potential approach for this purpose could be the use of automatic call recommendation, where calls are analyzed using NLP techniques and suggested to the contact center managers based on various factors, such as agents' behavior, issue resolution, customer satisfaction, sales success, etc. These suggested calls can then be analyzed by the managers for coaching purposes to provide relevant feedback to agents. In this regard, we propose AI Coach Assist, a system that leverages the transformer architecture to effectively analyze the full call transcripts in contact centers and recommends contact center managers with calls that are more likely to contain coachable moments for a given query. In the following section, we describe how we construct a dataset, which we denote as QA Scorecard, to train and evaluate our proposed AI Coach Assist system. ## 3 The Qa Scorecard Dataset We collected our data from real-world contact centers. 
The dataset consists of customer-agent call conversation transcripts generated using Automatic Speech Recognition (ASR) systems, along with annotations indicating whether a call is coachable or not. The process of annotating the dataset was carefully designed and implemented, as the annotations were performed by real-world contact center managers and supervisors who analyzed the whole conversation/transcript. In this way, we ensure the high quality of the dataset. The data annotation works as follows, the managers/supervisors assign a score to the call based on the performance of the agent for a particular question. We consider a call as coachable for a particular question if the call achieves less than 50% scores, otherwise, we consider the call for that particular question as not coachable. The dataset was collected over a period of one year and includes a diverse range of call types from different industries, with a variety of customer interactions, reflecting the real-world complexities of the contact center industry. The resulting dataset consists of a large number of call transcripts and annotations, providing a robust representation of real-world customeragent interactions. Note that a total of 58 questions are curated, which are distributed among training, validation, and test sets. While constructing the training, validation, and test splits, we observe that the class distribution (whether coachable or not coachable) for many question-transcript pairs was imbalanced. Thus, to ensure an unbiased dataset (as well as to avoid model overfitting), for each question, we ensured that the ratio between *coachable* and not coachable classes (or vice-versa) to be at most 1:2. In Table 1, we describe the distribution of | Split | Total Samples | Not Coachable | Coachable | Avg. Question Length | Avg. Transcript Length | |------------|-----------------|-----------------|-------------|------------------------|--------------------------| | Training | 12065 | 6521 | 5544 | 9.77 | 659.53 | | Validation | 1653 | 891 | 762 | 9.62 | 664.55 | | Test | 3435 | 1855 | 1580 | 9.77 | 727.77 | Table 1: Data distribution on each split (train/valid/test) based on the total number of question-transcript pairs, coachable and *not coachable* labels, and the average length of questions and transcripts. | Question Type | Example Question | |--------------------------------|--------------------------------------------------------------------| | Account Verification | Did the agent verify the customer's email address? | | Addressing Customer | Did the agent use the customer's name appropriately? | | Behavioral | Did the agent show proper empathy statements? | | Closing | Did the agent properly end the call? | | Providing Complete Information | Did the agent mention the payment terms in detail? | | Customer Identification | Did the agent verify the customer's information? | | Customer Satisfaction | Was the customer happy? | | Greeting | Did the agent properly greet the customer? | | Information Collection | Did the agent collect all necessary information from the customer? | | Issue Identification | Could the agent properly identify the issue? | | Issue Resolution | Could the agent resolve the issue? | Table 2: Example Questions based on Question Types our dataset based on our training, validation, and test set. Meanwhile, to evaluate the performance of AI Coach Assist based on the type of the questions, we also categorize the questions into 11 types using human annotators. 
We show the question types with example questions for each type in Table 2. ## 4 Our Proposed Approach We treat the AI Coach Assist model as a text classification model that combines the query/question given by the contact center manager or supervisor with the call transcript to predict whether a given call is coachable or not. Due to the recent success of fine-tuning pre-trained transformer models for text classification (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020), we also leverage the pretrained language models based on the transformer architecture for this task. As we are doing text classification instead of generation, we give input data to the pre-trained language model as follows (see Figure 1): at first, we create a text sequence by concatenating the question and the call transcript. Then, this concatenated text sequence is given as input to the language model to learn the contextual relationship between the sentences. The pre-trained transformer language model is fine-tuned to output a probability score for each input sequence, indicating the likelihood that the call is coachable or not, for the given question. Whether a question-transcript pair is coachable or *not coachable* is determined based on the probability score of the class having the higher score. Since our objective is to build the AI Coach Assist system for real-world contact centers, we consider the following two cases while selecting the pre-trained language models: (i) Utilize a model to ensure high efficiency: We choose DistilBERT (Sanh et al., 2019) for this scenario. DistilBERT is a distilled version of BERT (Devlin et al., 2019), designed to be smaller and faster while retaining a similar level of performance. Despite its smaller size, DistilBERT has been shown to perform similarly to BERT on many NLP tasks, making it a suitable alternative for many NLP applications. This makes it a popular choice for real-world scenarios where computational resources are limited but the preference is to deploy a fast and optimize model in production. (ii) Utilize a model to ensure higher accuracy: For this purpose, we leverage the DialogLED model (Zhong et al., 2022), which was pre-trained ![3_image_0.png](3_image_0.png) on long dialog conversations, having more similarities with our customer-agent conversation dataset. Though in comparison to DistilBERT, the DialogLED model may require higher computational resources for production deployment, it fulfills our criteria of using a model that may provide higher accuracy for being pre-trained on long dialog conversations, mimicking the customer-agent conversations in the real world. In addition, DialogLED can also process long text sequences, contrary to the 512-token limit of most transformer-based models (Devlin et al., 2019; Sanh et al., 2019; Liu et al., 2019; Lan et al., 2020). This makes DialogLED a suitable choice to build the AI Coach Assist system since the average length of the transcripts in our QA scorecard dataset is longer than 512 words. ## 5 Experiments In this section, we first present the experimental settings and the implementation details of our proposed model. Then we discuss our experimental findings in detail. ## 5.1 Implementation For the DialogLED model, we adopt the DialogLED-base4 model from the HuggingFace library (Wolf et al., 2020). 
Specifically, we used 4https://huggingface.co/MingZhong/ DialogLED-base-16384 the *LEDForSequenceClassification* which adds a classification head on top of the LED (LongformerEncoder-Decoder) model (Beltagy et al., 2020). We ran our experiments in GCP5 on an *n1-standard32* machine with 4 *Nvidia T4* GPUs. A total of 3 epochs were run, with the training batch size set to 2 6, and the maximum sequence length set to 1024. The learning rate was set to 2e − 5. For the DistilBERT model, we leverage its base model from HuggingFace7. We also set the learning rate for DistilBERT to 2e − 5 and ran 3 epochs with the training batch size set to 16 while the maximum sequence length set to 512. Note that for both models, these hyperparameters were tuned based on the performance in the validation set. The bestperforming method in the validation set was then used for evaluation on the test set. ## 5.2 Results & Discussions In this section, we first present the results of our base models. Then we conduct some ablation tests and also compare our proposed models with some classical machine learning baselines to further validate the effectiveness of our approach. Finally, we study the advantages and limitations of our model based on various question types. | Model | Precision | Recall | F1 | Accuracy | |------------|-------------|----------|-------|------------| | DialogLED | 67.92 | 63.72 | 65.76 | 70.52 | | DistilBERT | 62.53 | 58.39 | 60.39 | 66.25 | Table 3: Performance Comparisons between the AI Coach Assist models on our QA Scorecard dataset. studies to further investigate the effectiveness of our models. | Model | Precision | Accuracy | |------------------------|-------------|------------| | TF-IDF + SVM | 57.9 | 57.7 | | TF-IDF + Decision Tree | 58.0 | 60.8 | | TF-IDF + Random Forest | 59.3 | 60.1 | | TF-IDF + Naïve Bayes | 52.5 | 53.3 | | DialogLED | 67.9 | 70.5 | | DistilBERT | 62.5 | 66.3 | Table 4: Ablation Tests on the QA Scorecard dataset. | Model | Precision | Accuracy | |---------------------------------|-------------|------------| | DialogLED | 67.92 | 70.52 | | - without query | 57.76 | 62.83 | | - reduced sequence length = 512 | 66.01 | 67.52 | | - reduced sequence length = 256 | 63.15 | 63.64 | | DistilBERT | 62.53 | 66.25 | | - without query | 59.32 | 61.22 | ## 5.2.2 Ablation Studies 5.2.1 Performance Of The Base Models In this section, we compare the performance of using DialogLED and DistilBERT as the base model for the AI Coach Assist system. Though we consider *precision* and *accuracy* as the main criteria for the production deployment of this system, for this performance evaluation we also consider *recall* and f1 in addition to *precision* and *accuracy*. We observe from our results given in Table 3 that the DialogLED model outperforms its counterpart DistilBERT model in terms of all metrics (*precision, recall, f1, and accuracy*). The DialogLEDbased model also ensures scores above 60 in all 4 metrics. Moreover, in terms of accuracy and f1, it achieves a score of 70.52 and 65.76, respectively. Meanwhile, both models achieve comparatively lower recall scores, noticeably the DistilBERT model achieves a recall score even below 60. However, in our criteria for production deployment, a highly precise model is more important, with both DialogLED and DistilBERT achieving higher precision scores (67.92 and 62.53, respectively) in comparison to their recall scores (63.72 and 58.39, respectively). 
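For reference, the question-transcript pair classification described in Sections 4 and 5.1 can be sketched roughly as follows with the HuggingFace API. The checkpoint name comes from the footnote above, while the scoring helper and the assumption that label index 1 corresponds to *coachable* are illustrative, not the production training script.

```python
import torch
from transformers import AutoTokenizer, LEDForSequenceClassification

# DialogLED checkpoint from Section 5.1; num_labels=2 for coachable vs. not coachable.
# The classification head is newly initialized and must be fine-tuned before use.
MODEL_NAME = "MingZhong/DialogLED-base-16384"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = LEDForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def score_call(question: str, transcript: str) -> float:
    """Return the model's probability that the call is coachable for this QA question
    (assuming label index 1 = coachable)."""
    # The question and the long ASR transcript are encoded as a single pair,
    # truncated to the 1024-token budget used in the experiments above.
    inputs = tokenizer(question, transcript, truncation=True,
                       max_length=1024, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Fine-tuning would minimize cross-entropy over labeled question-transcript pairs
# with lr=2e-5 for 3 epochs, as described in Section 5.1.
```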
The superior performance using DialogLED over DistilBERT in all these metrics demonstrates the effectiveness of fine-tuning a language model for contact center telephone transcripts that is pretrained on dialog conversations. Moreover, since customer-agent conversations can also be quite long and may not fit within the 512 tokens limit of DistilBERT-like models (as shown in Table 1), the ability of DialogLED to process input text of larger size may also help it to achieve better performance. In the following section, we conduct some ablation In this section, we conduct some ablation studies to investigate our approach of concatenating the query and the transcript as input for our transformerbased language models (DialogLED/DistilBERT), as well as how the sequence length impacts the overall performance of DialogLED. We show the results from our ablation study in Table 4. For our first ablation test, we remove the query from the input text to better study the relationship between the query and the transcript. We find that for both models the accuracy is dropped by a great margin if the query is removed. The removal of the query from the input text leads to an accuracy drop of 10.90% for DialogLED and 8.03% for DistilBERT. In terms of precision, the performance is deteriorated by 14.96% and 5.13%, for DialogLED and DistilBERT, respectively. These findings demonstrate that the model learns to predict the *coachable* and *not coachable* moments in transcripts for the given query based on the concatenated representation of the query and the transcript. For our other ablation test, we reduce the input sequence length from our DialogLED model. We find that reducing the input sequence length from 1024 to 512 and 256 leads to a huge drop in accuracy (dropped by 4.71% and 9.76%, respectively) and precision (dropped by 2.81% and 7.02%, respectively). This demonstrates the effectiveness of using the DialogLED model which can process longer input sequences. Moreover, we observe that when the size of the input sequence length for DialogLED is 512 (same as DistilBERT), it still outperforms DistilBERT in terms of both accuracy and precision. This further gives an implication that the utilization of a model that is pre-trained on conversational data is more ![5_image_0.png](5_image_0.png) ## 5.2.3 Performance Against Other Baselines In this section, we compare our proposed models for the AI Coach Assist system: *DialogLED* and DistilBERT, with some baseline models to further study their effectiveness. Below, we describe the baseline models that we use for comparisons: TF-IDF with Classical Machine Learning Models as Baselines: We use TF-IDF as keywordbased features for some classical machine learning models, such as Support Vector Machine (SVM) (Hearst et al., 1998), Random Forest (Ho, 1995), Decision Tree (Rokach and Maimon, 2005), and Naïve Bayes (Webb et al., 2010), as our baseline models for comparisons. We show our experimental results in Table 5 to observe that both of our proposed models (the DialogLED model which obtains the highest accuracy and the DistilBERT model which ensures high efficiency) for AI Coach Assist outperform all TF-IDF feature-based classical machine learning approaches. On Average, the DistilBERT model and the DialogLED model outperform the baseline models by 8.97% and 12.48% in terms of precision, while 16.20% and 17.78% in terms of accuracy, respectively. 
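For comparison, the TF-IDF baselines of Table 5 can be approximated with a standard scikit-learn pipeline over the concatenated question and transcript. The sketch below shows one such baseline (TF-IDF + SVM); its hyperparameters and the `[SEP]` joining convention are assumptions rather than the exact settings used for Table 5.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def build_tfidf_svm_baseline():
    """TF-IDF features over 'question [SEP] transcript' strings + a linear SVM,
    in the spirit of the TF-IDF + SVM row of Table 5."""
    return make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                         LinearSVC())

# Example usage with hypothetical data splits:
# train_texts = [q + " [SEP] " + t for q, t in zip(train_questions, train_transcripts)]
# clf = build_tfidf_svm_baseline().fit(train_texts, train_labels)   # labels: 0/1
# test_texts = [q + " [SEP] " + t for q, t in zip(test_questions, test_transcripts)]
# preds = clf.predict(test_texts)
```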
## 5.2.4 Performance Based On Question Types In this section, we conduct an in-depth analysis of the proposed models for AI Coach Assist: DialogLED and DistilBERT. For our analysis, we investigate their performance in different question types. In Figure 2, we show their accuracy on each question type. We observe that for most question types, DialogLED outperforms DistilBERT (the only exceptions are the following question types: Addressing Customer, *Behavioural*, and Customer Identification). Among the questions where DialogLED outperforms DistilBERT (7 out of 11), the highest performance gains are in question types that are of *Greeting* and *Account Verification*. For *Greeting*, it achieves the best accuracy with a score of 84.62, while for *Account Verification*, the accuracy is 83.33. Meanwhile, even though the DialogLED model achieves an accuracy of at least 60 for all question types, the DistilBERT model achieves quite low scores for some question types (e.g., only 57% scores for *Closing* type and *Customer Satisfaction* type questions). For the DialogLED model, it finds the *Behavioral* and the *Issue Resolution* type questions most challenging, as its accuracy drops below 70. Among these two question types, the Behavioral question type achieves the lowest accuracy score of 63.59, followed by *Issue Resolution*, with an accuracy of 64.0. ## 6 Usage In Real World Contact Centers In this section, we discuss how the AI Coach Assist system can be used in real-world contact centers. Since determining the calls that are coachable is not required in real-time, rather they are required after the call is over, the inference speed of the model may not be an issue in this regard. Moreover, for contact centers where the computing resource is not a problem, our DialogLED-based model could be used, as it achieves better accuracy than its Distil- BERT counterpart. Since the size of the trained DialogLED model is 648 MB, the DistilBERT model which takes only 263 MB could be used in scenarios where the computing resource is limited. We also prototype our proposed system for usage in a real contact center. Since directly predicting a score to a call might impact the evaluation of agent performance in contact centers, as such metrics could be used by managers for performance evaluation of agents, in our prototype we rather recommend a list of calls to the managers that are highly likely to contain coachable moments for a particular question type. Thus, instead of using those calls for a direct performance evaluation of agents, the managers still require to listen to the conversation or read the ASR-generated transcript. More particularly, using our proposed AI Coach Assist, we help the managers with a list of calls that they may use to manually grade agent performance, contrary to the existing methods of random call selection. In this way, the proposed prototype of AI Coach Assist may not cause any ethical concerns. ## 7 Conclusion In this paper, we presented *AI Coach Assist*, a transformer-based pairwise sentence classification model that combines the query/question given by the contact center manager or supervisor with the call transcript to determine which calls are most relevant for coaching purposes. The evaluation results demonstrate the potential of AI Coach Assist to transform the way contact centers coach their agents, providing an efficient and effective method that recommends calls that are the most relevant for coaching purposes. 
This will help to improve the coaching process and enhance the performance of contact center agents, leading to better customer satisfaction. Note that our model is intended to help contact center supervisors to be more effective in coaching their employees by improving over the random sampling of calls. The model does not automate the evaluation of employee performance by replacing human review. In the future, we will study how to improve the performance on the question types where the model performs poorly. We will also study how to utilize other question-answering models (Laskar et al., 2020, 2022d) or leverage generative large language models (OpenAI, 2023; Anil et al., 2023) that can point out the reason for a call being *coachable* and not coachable. ## Limitations As our models are trained on customer-agent conversations in English, they might not be suitable to be used in other domains, types of inputs (i.e written text), or languages. Moreover, as we demonstrated in the paper that the model has limitations in certain question types, the user needs to decide which question types to be used when deploying the system in production. Though the DialogLED model performs better, it requires higher computing resources. On the contrary, even though the DistilBERT model consumes lower memory, its performance is poorer than the DialogLED model. ## Ethics Statement - **Data Annotation:** Since the calls are annotated by real-world contact center managers/supervisors, we did not require any additional compensation for this annotation. Rather, we develop a system where the managers/supervisors put their scores for different call conversations in their contact centers. To map the questions to different question types, Labelbox8 was used for data annotation and the annotators were provided with adequate compensation (above minimum wages). - **Privacy:** There is a data retention policy available so that the call transcripts will not be used if the user does not give permission to use their call conversations for model training and evaluation. To protect user privacy, sensitive data such as personally identifiable information (e.g., credit card number, phone number) were removed while collecting the data. - **Intended Use by Customers:** Note that our model is intended to help contact center supervisors to be more effective in coaching their employees by improving over the random sampling of calls. The model does not automate the scoring of employee performance or replace human review. - **Prevention of Potential Misuses:** Below, we discuss some of the potential misuses of the system and our suggestions to mitigate them: (i) Automatic Performance Reviews of Agents by Considering all Recommended Calls as Bad Calls: One potential misuse of 8https://labelbox.com/ the system could be the evaluation of agent performance by considering all recommended calls as bad calls without any manual review of the call. To mitigate this, we can do the following: - Contact center supervisors that use this system must be properly instructed that this system does not determine whether an agent performs badly in a certain call. Rather, the intention of the proposed system is to only suggest a set of calls to the managers (instead of randomly selecting calls) that they need to manually review to determine whether the agent requires coaching or not. 
(ii) Considering Agents with More Recommended Calls as an Indicator to Poorer Agent Performance: Another potential misuse of the system is if contact center managers start considering that if more calls are recommended by our system for a particular agent, then the agent is more likely to perform poorly. To prevent this, we can do the following: - We may suggest some positive calls as well as negative calls to the managers. Here, positive calls are the ones that our system rates with a very high score and categorizes as not requiring any coaching. Whereas negative calls are the ones that our system rates with quite lower scores and classifies as coaching required. To avoid any misuse of the suggested calls, the proposed AI Coach Assist system should never reveal to the managers whether a call requires coaching or not. Rather it should only allow the managers to make the final decision on whether the call is a positive call or a negative call. Once the suggested calls are manually reviewed by the managers and categorized as positive by them, these calls can then be used to train other agents that require improvement in certain areas, whereas a call categorized as negative can be used to train a particular agent who did not perform well (i.e., requires coaching) in that specific call. - In addition, to avoid suggesting too many calls for the same agent, the system may suggest only a certain number of calls (not above a pre-defined limit) per agent to the managers. (iii) Using Bad Questions For Model Development: In some contact centers, there may be questions that are used for evaluating agent performance which may contain any potential biases toward a specific race or gender. We can prevent this in the following way: - The system should only respond to a preselected set of questions that were used during the training phase of the model. Any questions that may pose any ethical concerns or potential biases should not be used while training the model such that these questions can also be automatically ignored during the inference phase. - **License:** We maintained the licensing requirements accordingly while using different tools (e.g., HuggingFace). ## Acknowledgements We appreciate the reviewers for their excellent review comments that helped us to improve the quality of this paper. We would also like to thank Shayna Gardiner and **Elena Khasanova** for reviewing the ethical concern of the proposed system. ## References Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. *arXiv* preprint arXiv:2305.10403. Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the* North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171– 4186. Xue-yong Fu, Cheng Chen, Md Tahmid Rahman Laskar, Shayna Gardiner, Pooja Hiranandani, and Shashi Bhushan Tn. 2022a. Entity-level sentiment analysis in contact center telephone conversations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 484–491, Abu Dhabi, UAE. Association for Computational Linguistics. 
Xue-Yong Fu, Cheng Chen, Md Tahmid Rahman Laskar, Shashi Bhushan Tn, and Simon CorstonOliver. 2022b. An effective, performant named entity recognition system for noisy business telephone conversation transcripts. In *Proceedings of the Eighth* Workshop on Noisy User-generated Text (W-NUT 2022), pages 96–100, Gyeongju, Republic of Korea. Association for Computational Linguistics. Marti A. Hearst, Susan T Dumais, Edgar Osuna, John Platt, and Bernhard Scholkopf. 1998. Support vector machines. *IEEE Intelligent Systems and their* applications, 13(4):18–28. Tin Kam Ho. 1995. Random decision forests. In *Proceedings of 3rd international conference on document analysis and recognition*, volume 1, pages 278– 282. IEEE. Elena Khasanova, Pooja Hiranandani, Shayna Gardiner, Cheng Chen, Simon Corston-Oliver, and Xue-Yong Fu. 2022. Developing a production system for Purpose of Call detection in business phone conversations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, pages 259–267, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Md Tahmid Rahman Laskar, Cheng Chen, Xue-yong Fu, and Shashi Bhushan Tn. 2022a. Improving named entity recognition in telephone conversations via effective active learning with human in the loop. In Proceedings of the Fourth Workshop on Data Science with Human-in-the-Loop (Language Advances), pages 88–93, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Md Tahmid Rahman Laskar, Cheng Chen, Jonathan Johnston, Xue-Yong Fu, Shashi Bhushan TN, and Simon Corston-Oliver. 2022b. An auto encoder-based dimensionality reduction technique for efficient entity linking in business phone conversations. In *Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information* Retrieval, pages 3363–3367. Md Tahmid Rahman Laskar, Cheng Chen, Aliaksandr Martsinovich, Jonathan Johnston, Xue-Yong Fu, Shashi Bhushan Tn, and Simon Corston-Oliver. 2022c. BLINK with Elasticsearch for efficient entity linking in business conversations. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, pages 344–352, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics. Md Tahmid Rahman Laskar, Enamul Hoque, and Jimmy Xiangji Huang. 2022d. Domain adaptation with pre-trained transformers for query-focused abstractive text summarization. *Computational Linguistics*, 48(2):279–320. Md Tahmid Rahman Laskar, Xiangji Huang, and Enamul Hoque. 2020. Contextualized embeddings based transformer encoder for sentence similarity modeling in answer selection task. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5505–5514. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. OpenAI. 2023. Gpt-4 technical report. Lior Rokach and Oded Maimon. 2005. Decision trees. 
Data mining and knowledge discovery handbook, pages 165–192. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. *CoRR*, abs/1910.01108. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Geoffrey I Webb, Eamonn Keogh, and Risto Miikkulainen. 2010. Naïve bayes. Encyclopedia of machine learning, 15:713–714. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45. Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2022. Dialoglm: Pre-trained model for long dialogue understanding and summarization. In *Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI* 2022 Virtual Event, February 22 - March 1, 2022, pages 11765–11773. AAAI Press.
zhou-etal-2023-unified
Unified Contextual Query Rewriting
https://aclanthology.org/2023.acl-industry.58
Query rewriting (QR) is an important technique for user friction (i.e. recovering ASR error or system error) reduction and contextual carryover (i.e. ellipsis and co-reference) in conversational AI systems. Recently, generation-based QR models have achieved promising results on these two tasks separately. Although these two tasks have many similarities such as they both use the previous dialogue along with the current request as model input, there is no unified model to solve them jointly. To this end, we propose a unified contextual query rewriting model that unifies QR for both reducing friction and contextual carryover purpose. Moreover, we involve multiple auxiliary tasks such as trigger prediction and NLU interpretation tasks to boost the performance of the rewrite. We leverage the text-to-text unified framework which uses independent tasks with weighted loss to account for task importance. Then we propose new unified multitask learning strategies including a sequential model which outputs one sentence for multi-tasks, and a hybrid model where some tasks are independent and some tasks are sequentially generated. Our experimental results demonstrate the effectiveness of the proposed unified learning methods.
# Unified Contextual Query Rewriting Yingxue Zhou1, Jie Hao1**, Mukund Rungta**2∗ , Yang Liu1, Eunah Cho1**, Xing Fan**1, Yanbin Lu1, Vishal Vasudevan1, Kellen Gillespie1, Zeynab Raeesy1**, Wei Shen**1, Chenlei Guo1**, Gokhan Tur**1 1 Amazon Alexa AI, 2 Georgia Institute of Technology 1{zyingxue,jieha,yangliud,eunahch,fanxing,luyanbin,vasuvish,kelleng raeesyzr,sawyersw, guochenl,gokhatur}@amazon.com, 2mrungta8@gatech.edu ## Abstract Query rewriting (QR) is an important technique for user friction reduction (i.e. recovering ASR error or system error) and contextual carryover (i.e. ellipsis and co-reference) in conversational AI systems. Recently, generation-based QR models have achieved promising results on these two tasks separately. Although these two tasks have many similarities such as they both use the previous dialogue along with the current request as model input, there is no unified model to solve them jointly. To this end, we propose a unified contextual query rewriting model that unifies QR for both reducing friction and contextual carryover purpose. Moreover, we involve multiple auxiliary tasks such as trigger prediction and NLU interpretation tasks to boost the performance of the rewrite. We leverage the text-to-text unified framework which uses independent tasks with weighted loss to account for task importance. Then we propose new unified multitask learning strategies including a sequential model which outputs one sentence for multi-tasks, and a hybrid model where some tasks are independent and some tasks are sequentially generated. Our experimental results demonstrate the effectiveness of the proposed unified learning methods. ## 1 Introduction Large-scale conversational AI agents such as Alexa, Siri, and Google Assistant, are becoming increasingly popular in real-world applications to assist users in daily life. However, some of the user interactions lead to dissatisfied experiences, where users do not get what they requested or the assistant has to engage with the user again to clarify the user request. These user frictions arise from errors in the system, including Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU), as well as user ambiguity and background ∗Work done while Mukund Rungta was interning at Amazon. noise. The goal of QR (Hao et al., 2022; Cho et al., 2021) is to identify the queries that lead to friction and rewrite them to queries without changing the users' intention, in order to mitigate defective interactions. Besides, in a multi-turn dialogue session with agent, users sometimes tend to use incomplete utterances which usually omit or refer back to entities or concepts that appeared in the previous dialogue, namely ellipsis, and co-reference. Thus, we also always rely on contextual carryover (Elgohary et al., 2019; Liu et al., 2020) to rewrite the incomplete query into a context-dependent and self-contained query. Although query rewriting has received a lot of attention in recent years, it has always been studied in two separate directions, i.e., reduce defects and contextual carryover. Recent works integrate all functionalities into pre-trained Sequence-to-Sequence (Seq2Seq) language models and report impressive results (Su et al., 2021; Raffel et al., 2020). This synergy has resulted in a great deal of recent work developing transfer learning methodology for NLP. 
Inspired by that, we propose a unified contextual query rewriting framework, that can utilize one model to perform the rewrite for both friction reduction and contextual carryover. Moreover, we involve two additional tasks in the unified model: NLU interpretation of rewrite and rewrite trigger prediction. NLU interpretation of rewrite is a task to predict the domain, intent, and entity slots information. Trigger prediction is a task that enables the model to decide to trigger a rewrite or not. These tasks can not only be used for downstream modules but also serve as auxiliary tasks to boost the primary task query rewriting performance. Specifically, we leverage the BART model (Lewis et al., 2020) which is a largescale Seq2Seq framework. We unify defect reduction and contextual carryover as one QR task, where the model input is a dialogue context along with the current request, and the output is the rewrite. For NLU interpretation and trigger prediction tasks, we also cast them as a text-to-text generation task, where the model output the trigger decision and the NLU interpretation for the current request. Motivated by the concept of in-context learning (Brown et al., 2020), to steer the model to solve different sub-task, we plug a task-specific prompt, into the model input. This way, the generations of different sub-tasks are decoupled, leading to better flexibility of the model regarding generation for each sub-tasks. Besides the traditional text-to-text unified learning approach (Raffel et al., 2020) in which each task's prediction is generated independently, we explored variants of unified learning approach, including sequential unified learning where one sequence is used to generate multi-task results using target prompts and hybrid unified learning where some tasks are independent and some tasks are sequentially generated. We conducted extensive offline experiments to study the proposed unified learning approaches. Our experimental results demonstrate the effectiveness of the proposed approaches. Our production simulation validates the positive impact of the proposed model which indeed generates rewrites of better quality. ## 2 Related Work Query Rewriting In dialogue systems, query rewriting benefits dialogue state tracking especially co-reference resolution (Rastogi et al., 2019; Hao et al., 2021), and reducing users' friction by replacing the users' utterance (Wang et al., 2021; Fan et al., 2021; Chen et al., 2023). Fan et al. (2021) and Cho et al. (2021) propose to leverage the search-based model, which consists of a DSSM based retrieval layer and a tree ranking layer, to handle global and personalized query rewriting. Su et al. (2019) use generation-based approaches to tackle the co-reference and omission-specific scenarios. Hao et al. (2022) propose a constrained generation based Seq2Seq model for query rewriting. Contextual Carryover Contextual carryover has been an important component in dialogue systems for resolving co-reference and omission. Naik et al. (2018) leverage an encoder-decoder architecture for making independent carryover decision for each slot in the context. Later, Chen et al. (2019) propose a framework to jointly predict whether a subset of related slots should be carried over from dialogue history. Rastogi et al. (2019) formulate the contextual carryover problem as a contextual query rewriting problem (CQR). Yu et al. (2020) present a few-shot generative approach to conversational query rewriting. 
In this work, we unify the query rewriting task with CQR in one text-to-text generation model, with the generated rewrite handling contextual slot carryover cases. Unified Learning Transfer learning in natural language processing (NLP) has gained popularity due to its demonstrated effectiveness. This approach involves pre-training a model on a data-rich task and fine-tuning it for a specific task (Dong et al., 2019; Radford et al., 2018; Lewis et al., 2020). Later, the efficacy of transfer learning has been further improved by a unified framework that converts all text-based language problems into a text-to-text format, presented through the T5 model (Raffel et al., 2020). In this paradigm, instead of adapting the pre-trained language model (LM) to downstream tasks via objective engineering, downstream tasks are reformulated to look more like those solved during the original LM training with the help of a textual prompt.

## 3 Methodology

Before introducing the unified learning model, we first establish the baseline for query rewriting. We formulate query rewriting as a sequence-to-sequence (Seq2Seq) task and fine-tune a pre-trained BART model for rewrite generation. As shown in Figure 1, the model takes the current turn and its previous dialog context as input and generates the target text autoregressively. The Seq2Seq architecture comprises a bidirectional encoder that takes the context and current request as input, and an autoregressive decoder that performs constrained decoding to generate the target rewrite.

## 3.1 Generation Based Query Rewrite

Query rewrite with contextual carryover. In addition to rewriting defective queries such as "play night talk by drake" to "play knife talk by drake", we consider the contextual carryover task as a query rewriting task as well. For example, in a multi-turn dialog, we have "[USER] what's the current temperature at Colorado Springs [AGENT] Right now, it's 46 degrees Fahrenheit. Today, expect a high of 75 degrees. [USER] what's the air quality", where the location slot "Colorado Springs" needs to be carried over to the current turn "[USER] what's the air quality". In such cases, we directly address the carryover problem by formulating it as a rewriting task, with the goal of generating the rewrite "What's the air quality in Colorado Springs."

![2_image_0.png](2_image_0.png)

Figure 1: Illustration of the Seq2Seq query rewriting model. When a new utterance arrives, the model takes the flattened contextual input and outputs the final rewrite. We use [USER] as a special symbol added in front of the user turn, and [AGENT] as a special symbol added in front of the agent turn.

We adopt the pre-trained BART (Lewis et al., 2020), which has the same model architecture as the widely-used Transformer model (Vaswani et al., 2017) and is pre-trained with a denoising objective (Devlin et al., 2018). As illustrated in Figure 1, we flatten the previous dialogue turns (including both user requests and agent responses) and the current user request into a single sequence for input to the encoder. Then, we fine-tune BART for our task. Formally, we are given a contextual request sequence q = {q1, ..., qM}, where qi for i ∈ {1, ..., M} denotes a token in the sequence, and the corresponding rewrite r = {r1, ..., rN}. The encoder is responsible for reading the input request and its previous dialogue turns, and the decoder autoregressively generates the rewrites.
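To make the flattening and fine-tuning steps concrete, below is a minimal sketch of how the dialogue context might be serialized with the [USER]/[AGENT] symbols and fed to a pre-trained BART model. The Hugging Face checkpoint name, the helper function, and the example rewrite target are illustrative assumptions rather than the authors' actual implementation; the cross-entropy returned by the model corresponds to the MLE objective formalized just below.

```python
# Minimal sketch (not the authors' code): flatten the dialogue context and
# fine-tune BART on (context, rewrite) pairs. Checkpoint and strings are illustrative.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
# Register the turn markers as special tokens so they are not split into subwords.
tokenizer.add_special_tokens({"additional_special_tokens": ["[USER]", "[AGENT]"]})
model.resize_token_embeddings(len(tokenizer))

def flatten_context(turns):
    """turns: list of (speaker, text) tuples, oldest first, ending with the current request."""
    return " ".join(f"[{speaker}] {text}" for speaker, text in turns)

context = flatten_context([
    ("USER", "what's the current temperature at Colorado Springs"),
    ("AGENT", "Right now, it's 46 degrees Fahrenheit. Today, expect a high of 75 degrees."),
    ("USER", "what's the air quality"),
])
rewrite = "what's the air quality in Colorado Springs"  # carried-over target rewrite

inputs = tokenizer(context, return_tensors="pt", truncation=True)
labels = tokenizer(rewrite, return_tensors="pt", truncation=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions when batching

loss = model(**inputs, labels=labels).loss  # token-level cross-entropy over the rewrite
loss.backward()
```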
The ultimate goal of the rewrite generation problem is to learn a probability distribution pθ(r | q) over the variable-length text sequence r conditioned on the contextual input q, where θ denotes the parameters of BART. The maximum likelihood estimation (MLE) objective is typically used to train the language model, defined as

$${\mathcal{L}}_{\theta}(\mathbf{q},\mathbf{r})=-{\frac{1}{|\mathbf{r}|}}\sum_{j=1}^{|\mathbf{r}|}\log p_{\theta}(r_{j}\,|\,\mathbf{r}_{<j},\mathbf{q})\;.$$

Given finite training examples, i.e., T pairs of contextual query and rewrite S = {(qt, rt)}_{t=1}^{T}, the model is trained by minimizing the empirical finite-sample loss $\mathcal{L}_{\theta}(S)=\frac{1}{T}\sum_{t=1}^{T}\mathcal{L}_{\theta}(\mathbf{q}_t,\mathbf{r}_t)$.

## 3.2 Other Tasks

We first introduce the tasks we consider in this paper as follows.

NLU hypothesis generation task. In conversational AI systems, interpreting the user's query, such as its intent and domain, helps downstream modules respond more effectively to the user's request. Integrating this interpretation task into the model can improve its understanding of the user's request and context. The results of the NLU interpretation can not only be utilized by downstream modules but also act as regularizers for the primary query rewriting task. To integrate the NLU interpretation task, we let the model generate the interpretation as an NLU hypothesis of the form "domain | intent | slot_type:slot_value". For example, given the query "play bad blood by taylor swift", the corresponding NLU hypothesis would be "Music | PlayMusic | SongName:bad blood | ArtistName:taylor swift". The NLU hypothesis generation task takes the query as input and generates the corresponding NLU hypothesis.

Trigger prediction task. The trigger task allows the model to predict whether a rewrite (or contextual carryover) is necessary for the incoming query. For example, if the query "play bad blood by Taylor Swift" is not defective, the model should not trigger a rewrite. Typically, separate and independent models are used to make this binary decision. However, in our unified model, we formulate this binary prediction problem as a text generation problem. Queries that do not require a rewrite have a target output of "no trigger", while defective queries have a target output of "trigger". Integrating the trigger task into the unified generation model saves the resources required for separate trigger models.

## 3.3 Parallel Unified Learning Model

![3_image_0.png](3_image_0.png)

Figure 2 illustrates the design of parallel unified learning, which is similar to T5 (Raffel et al., 2020). We fine-tune the BART model on the above three tasks. The rationale behind unifying multiple tasks in training is that by successfully predicting the system's interpretation of a request (i.e., domain, intent, and slots in the NLU hypothesis) and its trigger decision, the model can improve its prediction of the trigger task and the subsequent rewrite. As shown in Figure 2, we add the prompt "predict trigger:" to the contextual query as input: "predict trigger: turn brightness down to one percent", and the target output of the trigger prediction task is "trigger". For the NLU interpretation task, the model takes "predict hypothesis: turn brightness down to one percent" as input and generates the corresponding hypothesis. For the rewriting task, we add the prompt "generate rewrite" to the input. To account for the varying importance of each task, we incorporate a weighted loss in our multi-task unified training approach.
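For concreteness, here is a small sketch of how one annotated query could be expanded into the three prompted (input, target) pairs used by the parallel model. The prompt strings follow the paper, while the helper name and the assumption that a non-defective query keeps its own text as the rewrite target are illustrative.

```python
# Illustrative sketch: expand one example into three prompted (input, target) pairs
# for the parallel unified model. Prompt strings follow Section 3.3.
def build_parallel_examples(context, rewrite, hypothesis, triggered):
    trigger_target = "trigger" if triggered else "no trigger"
    return [
        ("generate rewrite: " + context, rewrite),
        ("predict trigger: " + context, trigger_target),
        ("predict hypothesis: " + context, hypothesis),
    ]

examples = build_parallel_examples(
    context="play bad blood by taylor swift",
    rewrite="play bad blood by taylor swift",  # assumption: non-defective query is rewritten to itself
    hypothesis="Music | PlayMusic | SongName:bad blood | ArtistName:taylor swift",
    triggered=False,  # not defective, so the trigger task target is "no trigger"
)
# Each pair is tokenized and trained with the BART MLE loss above,
# combined with the per-task weights described next.
```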
Given the incoming contextual query q, the target rewrite r, the target NLU hypothesis h, and the target trigger prediction g, we use the weighted loss

$${\mathcal L}_{\theta}({\bf q},{\bf r},{\bf h},{\bf g})=\lambda_{1}\;{\mathcal L}_{\theta}({\bf q},{\bf r})+\lambda_{2}\;{\mathcal L}_{\theta}({\bf q},{\bf h})+\lambda_{3}\;{\mathcal L}_{\theta}({\bf q},{\bf g})\;,$$

where λ1, λ2, and λ3 are the weights for the rewrite, NLU, and trigger tasks, respectively.

## 3.4 Sequential Unified Learning Model

In this section, we propose a novel unified learning approach that leverages a single text generation task with multiple prompts. Our model encodes the current request and its previous dialog context and then generates a sequential output that predicts the various tasks. To guide the predictions, we use markup tokens such as "[rewrite]", "[trigger]", and "[hypothesis]" to prompt the prediction. This approach leverages the benefits of conditional generation, allowing the model to consider the previous task's prediction when performing the next task's prediction. As a result, the order of task generation is important. We consider the same three tasks for training this model: the NLU hypothesis task, the trigger task, and the rewrite generation task. The model generates the prediction for each task in a single sequence with the order: rewrite → trigger → hypothesis. Figure 3 illustrates the idea of this sequential multi-task unified learning model.

## 3.5 Hybrid Unified Learning Model

Parallel and sequential unified learning approaches both have advantages and disadvantages. The parallel multi-task approach trains each task independently for a given query, which requires duplicating the training data by the number of tasks and leads to a longer training cycle. As the number of tasks increases, the size of the training data also increases, making it challenging to add more tasks. Besides, since each task is trained independently, the model cannot leverage the correlation between tasks. On the other hand, the sequential unified approach does not require duplicated data since the model nests the multiple tasks into one sequential output, reducing the training cost. This model has the potential to learn and leverage the correlation between tasks. However, the sequential model has higher latency due to its longer decoding length compared with the parallel multi-task model. In addition, it is hard to apply a weighted loss to sequential multi-task outputs. To address these issues and combine the benefits of both models, we propose a hybrid unified learning model, as shown in Figure 4. The hybrid model treats rewrite generation and trigger prediction as a nested sequential task, with the prompt "generate_rewrite_trigger:". The NLU hypothesis generation is treated as an independent parallel task, with the prompt "predict hypothesis:". This hybrid model reduces the duplication of training data, reduces the training cost, and leverages the correlation between tasks. We also apply a weighted training loss to reflect the importance of the tasks:

$${\mathcal{L}}_{\theta}(\mathbf{q},\mathbf{r},\mathbf{h},\mathbf{g})=\lambda_{1}\ {\mathcal{L}}_{\theta}(\mathbf{q},\{\mathbf{r},\mathbf{g}\})+\lambda_{2}\ {\mathcal{L}}_{\theta}(\mathbf{q},\mathbf{h}),$$

where λ1 and λ2 are the weights for the nested rewrite_trigger task and the NLU task, respectively.

![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) ![4_image_2.png](4_image_2.png)

Figure 3: Illustration of the sequential unified learning model. The model sequentially generates a single output covering the different tasks. We use special tokens (e.g., "[rewrite]", "[trigger]", "[hypothesis]") to prompt the prediction for each task.
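To illustrate how the sequential and hybrid variants differ only in how targets are packed, here is a rough sketch of the target construction and the hybrid weighted loss. The exact formatting of the nested output (spacing around the markup tokens) is an assumption, `loss_fn` is a hypothetical wrapper around the shared BART cross-entropy, and the 0.8/0.2 default weights are the values later reported by the grid search in Section 4.2.

```python
# Sketch only: target packing for the sequential (Section 3.4) and hybrid
# (Section 3.5) unified models. Exact token spacing is assumed, not specified.
def sequential_target(rewrite, trigger, hypothesis):
    # Single nested output, generated in the order rewrite -> trigger -> hypothesis.
    return f"[rewrite] {rewrite} [trigger] {trigger} [hypothesis] {hypothesis}"

def hybrid_examples(context, rewrite, trigger, hypothesis):
    # Rewrite and trigger are nested into one sequence; NLU stays a parallel task.
    return [
        ("generate_rewrite_trigger: " + context, f"[rewrite] {rewrite} [trigger] {trigger}"),
        ("predict hypothesis: " + context, hypothesis),
    ]

def hybrid_loss(loss_fn, context, rewrite, trigger, hypothesis, lam1=0.8, lam2=0.2):
    # loss_fn(input_text, target_text) is assumed to return the BART cross-entropy.
    (inp_rt, tgt_rt), (inp_h, tgt_h) = hybrid_examples(context, rewrite, trigger, hypothesis)
    return lam1 * loss_fn(inp_rt, tgt_rt) + lam2 * loss_fn(inp_h, tgt_h)
```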
## 4 Experiments

We conduct two sets of experiments to evaluate the proposed unified learning models. The first set of experiments compares the three proposed unified models in terms of their effectiveness on query rewriting, trigger prediction, and NLU hypothesis generation. The second set of experiments evaluates the benefit of integrating contextual carryover into the query rewriting task.

## 4.1 Experimental Setup

Datasets We use two datasets for our experiments: the query rewrite dataset (QueryR) and the contextual carryover rewrite dataset (CarryoverR). The QueryR dataset is weakly annotated by a defect detection model to identify consecutive user utterances where the first turn was defective but the second turn was successful. We also collect 1M non-defective queries which do not need to be rewritten/triggered (i.e., the trigger task label is "no trigger"). The CarryoverR dataset is human-annotated for contextual carryover queries. In this dataset, we have 1M queries that need carryover and 1M queries that do not need carryover (i.e., the trigger task label is "no trigger"). We collect data over a 1-month period for training and validation (randomly split 9:1) and over a subsequent 1-week period for testing. Table 1 provides the information of each dataset. Note that there is no overlap between QueryR and CarryoverR, i.e., QueryR does not contain contextual carryover queries and CarryoverR only contains contextual carryover queries. Note that all the data has been de-identified.

Table 1: Statistics of the query rewriting data sets.

| Datasets | Trigger label | Train | Test |
|------------|---------------|-------|------|
| QueryR | trigger | 7M | 200k |
| QueryR | no trigger | 1M | 200k |
| CarryoverR | trigger | 1M | 908 |
| CarryoverR | no trigger | 1M | 4340 |

Model Setup In the first set of experiments, we only focus on the QueryR dataset, which does not contain any contextual carryover queries. We train the *parallel*, *sequential*, and *hybrid* unified models on the QueryR dataset by fine-tuning the BART-base model, which has 140M parameters, following the multi-task learning approaches in Sections 3.3, 3.4, and 3.5. We compare the proposed unified learning with the baseline CGF (Hao et al., 2022), which only considers the query rewrite task. In the second set of experiments, we explore the advantages of unifying query rewriting with contextual carryover. Thus we train a BART-base model on the CarryoverR dataset for rewrite generation as a baseline (named BART_CR). We also build another baseline by combining the CarryoverR and QueryR datasets and training BART-base on the combined dataset for rewrite generation (named BART_CR_QR). For the proposed unified method, we train the hybrid unified model on the combined dataset using the rewrite, trigger, and NLU tasks (named Hybrid_CR_QR).

Evaluation Metrics.
In practice, the query rewriting system is not expected to rewrite or trigger every query from the users, taking into account cases where the query itself may not be defective or need a contextual carryover. Thus, to evaluate rewrite and NLU hypothesis quality, we use utterance-level precision at a fixed *trigger rate*, i.e., a 20% trigger rate. The utterance-level precision denotes how often the triggered rewrite matches the correct rewrite. We use the F1 score as the trigger task evaluation metric. For the CarryoverR test data, we also use *hallucination* as a metric. A rewrite is considered hallucinated if it contains entities that are not present in the target utterance. We also evaluate *intrinsic* hallucination (when the hallucinated entities are present in the input) and *extrinsic* hallucination (when the hallucinated entities are not present in the input).

Table 2: Rewrite precision (at a 20% trigger rate), trigger F1, and NLU hypothesis precision on the QueryR test set.

| Task | Rewrite (precision) | Trigger (F1) | NLU (precision) |
|------------|---------------------|--------------|-----------------|
| CGF | 78.44% | NA | NA |
| Parallel | 79.01% | 0.89 | 68.59% |
| Sequential | 65.67% | 0.90 | 66.11% |
| Hybrid | 79.98% | 0.91 | 67.78% |

Table 3: Rewrite precision at a 20% trigger rate on the QueryR and CarryoverR test sets.

| Dataset | QueryR | CarryoverR |
|--------------|--------|------------|
| BART_CR | NA | 68.51% |
| BART_CR_QR | 78.62% | 72.37% |
| Hybrid_CR_QR | 80.21% | 78.45% |

## 4.2 Experimental Results

Unified models on QueryR For the parallel and hybrid models, we explored the weights for the different tasks. By conducting a grid search, we identify the optimal weights for the parallel model as 0.7, 0.1, and 0.2 for the rewrite, trigger, and NLU tasks; for the hybrid model, the optimal weights are 0.8 and 0.2 for the rewrite-trigger task and the NLU task. Table 2 presents the results of the rewrite generation precision at a 20% trigger rate on the QueryR test set, as well as the F1 score for the trigger task. The Hybrid model outperforms the other models by achieving the best precision in the rewriting task. The Sequential model has a performance regression due to its longer decoding sequence when adding the hypothesis task. In the Hybrid model, the trigger task output is conditioned on the rewriting task, which can explain the higher F1 score for the Hybrid model compared with the Parallel model, where the trigger task is learned independently. Overall, the Hybrid model is favored in terms of good rewrite performance, trigger performance, and shorter decoding time.

Unified model on QueryR and CarryoverR Table 3 shows the precision at a 20% trigger rate of the unified model on the QueryR and CarryoverR test sets. The results of BART_CR and BART_CR_QR indicate that unifying the contextual carryover task with the query rewrite task can improve the carryover performance, even under a single-task training approach. The Hybrid model achieved the highest precision, demonstrating further improvement through multi-task learning.

Table 4: Hallucination rate (intrinsic and extrinsic hallucination rates) of the baselines and the Hybrid model. The Hybrid model achieves the lowest hallucination rate.

| Model | Hallucination | Intrinsic | Extrinsic |
|--------------|---------------|-----------|-----------|
| BART_CR | 46.23% | 30.42% | 20.79% |
| BART_CR_QR | 44.69% | 31.82% | 18.14% |
| Hybrid_CR_QR | 39.71% | 27.78% | 17.11% |

Table 4 displays the hallucination rates, including intrinsic and extrinsic hallucination rates, on the CarryoverR test set. The results indicate that the Hybrid model has the lowest hallucination rates.
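As a rough sketch of how these metrics could be computed offline (the cutoff rule used to enforce a 20% trigger rate, the exact-match criterion, and the upstream entity extractor are all assumptions not detailed in the paper):

```python
# Illustrative metric sketches; thresholds, matching rule, and entity extraction
# are assumptions rather than the paper's exact procedure.
def precision_at_trigger_rate(scored, rate=0.20):
    """scored: list of (trigger_score, generated_rewrite, gold_rewrite).
    Keep the top `rate` fraction by trigger score and measure how often the
    triggered rewrite matches the correct rewrite."""
    ranked = sorted(scored, key=lambda x: x[0], reverse=True)
    kept = ranked[: max(1, int(len(ranked) * rate))]
    return sum(gen == gold for _, gen, gold in kept) / len(kept)

def hallucination_rates(examples):
    """examples: dicts with entity sets for the input, the rewrite, and the target
    (entity extraction is assumed to happen upstream). A rewrite is hallucinated
    if it contains entities absent from the target; here we label it intrinsic if
    any of those entities appear in the input and extrinsic otherwise, which is
    one simplified operationalization."""
    n = len(examples)
    halluc = intrinsic = extrinsic = 0
    for ex in examples:
        bad = ex["rewrite_entities"] - ex["target_entities"]
        if not bad:
            continue
        halluc += 1
        if bad & ex["input_entities"]:
            intrinsic += 1
        else:
            extrinsic += 1
    return halluc / n, intrinsic / n, extrinsic / n
```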
Production simulation We also conduct a production simulation of the proposed Hybrid model (Hybrid_CR_QR). We gather one week of live traffic data from our production system and input the data into the model. We compare the proposed model with the rewrites from the non-unified model within the English-speaking user environment. We use one primary metric to evaluate the rewrite performance: defect rate, which is calculated as the number of defective rewritten utterances divided by the total number of rewritten utterances. We use the defect detection model in Gupta et al. (2021) to measure whether an utterance is defective. In the analysis, we observe a 16.65% reduction in the defect rate and an increase of millions of new rewrites per week. Table 5 provides examples showing the effectiveness of the unified model.

Table 5: Production examples of the Hybrid Unified model and the baseline.

**Defect reduction case**
USER: put satellite ho by monica
AGENT: I couldn't find satellite ho by Monica, but here is other music by Monica.
USER: i said sideline hope by monica
Baseline rewrite: play sideline hope by monica
Unified model rewrite: play sideline ho by monica

**Contextual carryover case**
USER: who's the tallest man in the world
AGENT: Sultan Kosen is the tallest man alive. The tallest man across history is Robert Wadlowski.
USER: how tall is it
Baseline rewrite: how tall is sultan kosen
Unified model rewrite: how tall is robert wadlowski

## 4.3 Limitations

We acknowledge that there are certain limitations of this framework. First, generation-based models have latency issues due to autoregressive generation. Thus, we will explore non-autoregressive and semi-autoregressive methods in a future study. Second, the knowledge is only stored in the model parameters, which limits the capacity of the model to make smarter trigger decisions through fact-checking and to generate a valid rewrite. To this end, we intend to consider retrieval-augmented generation to incorporate external knowledge to improve performance, as well as to incorporate more contextual (e.g., whether the user is listening to music) and personalized (e.g., user preference) signals into the model. Moreover, generative models can also pose quality control challenges, such as hallucinations. To mitigate this issue, we will add constrained decoding (Hao et al., 2022) to control hallucinations.

## 5 Conclusion

In this work, we propose unified learning approaches for QR. The proposed approach unifies several tasks into one text-to-text model. Besides, the proposed approach unifies general rewrite tasks with contextual carryover tasks. We explored multiple unified learning scenarios, such as parallel multi-task learning, sequential multi-task learning, and hybrid multi-task learning. Our experimental results and production simulation demonstrated the superiority of the unified learning model.

## References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901.

Tongfei Chen, Chetan Naik, Hua He, Pushpendre Rastogi, and Lambert Mathias. 2019. Improving long distance slot carryover in spoken dialogue systems. In *Proceedings of the First Workshop on NLP for Conversational AI*, pages 96–105.

Zheng Chen, Ziyan Jiang, and Fan Yang. 2023. Graph meets llm: A novel approach to collaborative filtering for robust conversational understanding. *arXiv preprint arXiv:2305.14449*.
Eunah Cho, Ziyan Jiang, Jie Hao, Zheng Chen, Saurabh Gupta, Xing Fan, and Chenlei Guo. 2021. Personalized search-based query rewrite system for conversational ai. In *Proceedings of the 3rd Workshop on* Natural Language Processing for Conversational AI, pages 179–188. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. *Advances in neural information processing systems*. Ahmed Elgohary, Denis Peskov, and Jordan BoydGraber. 2019. Can you unpack that? learning to rewrite questions-in-context. In *EMNLP*. Xing Fan, Eunah Cho, Xiaojiang Huang, and Chenlei Guo. 2021. Search based self-learning query rewrite system in conversational ai. In *2nd International* Workshop on Data-Efficient Machine Learning (DeMaL). Saurabh Gupta, Xing Fan, Derek Liu, Benjamin Yao, Yuan Ling, Kun Zhou, Tuan-Hung Pham, and Chenlei Guo. 2021. Robertaiq: An efficient framework for automatic interaction quality estimation of dialogue systems. In *2nd International Workshop on* Data-Efficient Machine Learning (DeMaL). Jie Hao, Yang Liu, Xing Fan, Saurabh Gupta, Saleh Soltan, Rakesh Chada, Pradeep Natarajan, Chenlei Guo, and Gökhan Tür. 2022. Cgf: Constrained generation framework for query rewriting in conversational ai. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing:* Industry Track, pages 475–483. Jie Hao, Linfeng Song, Liwei Wang, Kun Xu, Zhaopeng Tu, and Dong Yu. 2021. Rast: Domain-robust dialogue rewriting as sequence tagging. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4913–4924. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. Qian Liu, Bei Chen, Jian-Guang Lou, Bin Zhou, and Dongmei Zhang. 2020. Incomplete utterance rewriting as semantic segmentation. In *EMNLP*. Chetan Naik, Arpit Gupta, Hancheng Ge, Mathias Lambert, and Ruhi Sarikaya. 2018. Contextual slot carryover for disparate schemas. *Proc. Interspeech 2018*, pages 596–600. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *The Journal of Machine Learning Research*, 21(1):5485–5551. Pushpendre Rastogi, AI Alexa, Arpit Gupta, and Tongfei Chen. 2019. Scaling multi-domain dialogue state tracking via query reformulation. In *Proceedings of NAACL-HLT*, pages 97–105. Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019. Improving multi-turn dialogue modelling with utterance rewriter. arXiv preprint arXiv:1906.07004. Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2021. Multi-task pre-training for plug-and-play task-oriented dialogue system. In ACL. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc. Zhuoyi Wang, Saurabh Gupta, Jie Hao, Xing Fan, Dingcheng Li, Alexander Hanbo Li, and Chenlei Guo. 2021. Contextual rephrase detection for reducing friction in dialogue systems. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1899–1905. Shi Yu, Jiahua Liu, Jingqin Yang, Chenyan Xiong, Paul Bennett, Jianfeng Gao, and Zhiyuan Liu. 2020. Fewshot generative conversational query rewriting. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 1933–1936.
zuo-etal-2023-context
Context-Aware Query Rewriting for Improving Users{'} Search Experience on {E}-commerce Websites
https://aclanthology.org/2023.acl-industry.59
E-commerce queries are often short and ambiguous. Consequently, query understanding often uses query rewriting to disambiguate user-input queries. While using e-commerce search tools, users tend to enter multiple searches, which we call context, before purchasing. These history searches contain contextual insights about users{'} true shopping intents. Therefore, modeling such contextual information is critical to a better query rewriting model. However, existing query rewriting models ignore users{'} history behaviors and consider only the instant search query, which is often a short string offering limited information about the true shopping intent. We propose an end-to-end context-aware query rewriting model to bridge this gap, which takes the search context into account. Specifically, our model builds a session graph using the history search queries and their contained words. We then employ a graph attention mechanism that models cross-query relations and computes contextual information of the session. The model subsequently calculates session representations by combining the contextual information with the instant search query using an aggregation network. The session representations are then decoded to generate rewritten queries. Empirically, we demonstrate the superiority of our method to state-of-the-art approaches under various metrics.
# Context-Aware Query Rewriting For Improving Users' Search Experience On E-Commerce Websites Simiao Zuo⋄, Qingyu Yin∗†, Haoming Jiang†, Shaohui Xi†**, Bing Yin**†, Chao Zhang⋄and **Tuo Zhao**⋄ {simiaozuo, chaozhang, tourzhao}@gatech.edu {qingyy, jhaoming, shaohux, alexbyin}@amazon.com ⋄Georgia Institute of Technology †Amazon ## Abstract E-commerce queries are often short and ambiguous. Consequently, query understanding often uses query rewriting to disambiguate userinput queries. While using e-commerce search tools, users tend to enter multiple searches, which we call context, before purchasing. These history searches contain contextual insights about users' true shopping intents. Therefore, modeling such contextual information is critical to a better query rewriting model. However, existing query rewriting models ignore users' history behaviors and consider only the instant search query, which is often a short string offering limited information about the true shopping intent. We propose an end-to-end context-aware query rewriting model to bridge this gap, which takes the search context into account. Specifically, our model builds a session graph using the history search queries and their contained words. We then employ a graph attention mechanism that models cross-query relations and computes contextual information of the session. The model subsequently calculates session representations by combining the contextual information with the instant search query using an aggregation network. The session representations are then decoded to generate rewritten queries. Empirically, we demonstrate the superiority of our method to state-ofthe-art approaches under various metrics. ## 1 Introduction Query rewriting is a task where a user inputs a potentially problematic query (e.g., typos or insufficient information), and we rewrite it to a new one that better matches the user's real shopping intent. This task plays an important role in e-commerce query understanding, where without proper rewriting, search engines often return undesired items, rendering the search experience unsatisfactory. One major issue that impedes query rewriting is the ambiguity of queries. For example, Figure 1 ∗Correspondence to Qingyu Yin (qingyy@amazon.com). ![0_image_0.png](0_image_0.png) (left) demonstrates searching for "bumblebee costumes" without considering search context. From the query alone, it is implausible to tell if the user's intent is for costumes of actual bumblebee (i.e., the animal) or the character from the movie franchise. This type of ambiguity is common in ecommerce search, where queries are usually short (only 2-3 terms) and insufficiently informative (He et al., 2016b). Therefore, it is not possible to disambiguate queries using only the instant search. A common solution is to use statistical rules to differentiate the possible choices. Specifically, in our example, suppose a total of 100 users entered the "bumblebee costumes" query, and 70 of them eventually purchased the movie character costume. When a new user searches for the same query, the recommended products will consist of 70% movie character costumes and 30% animal costumes. This procedure is problematic because each user has a specific intent, i.e., either the movie character costume or the animal costume, but rarely both, which the aforementioned method fails to address. We propose to explore contextual information from users' history searches to resolve the query ambiguity issue. 
Taking the "bumblebee costumes" example again, in Figure 1 (right), suppose a rewriting model recognizes that the user searched for "Transformers movie" earlier, then it could infer 616 that the user's purchase intent is the movie character costume, and hence can remove the input ambiguity. There have been existing works that utilize search logs for query rewriting. For example, Wang and Zhai (2007, 2008) use traditional TF-IDF-based similarity metrics to capture relational information among the user's history searches. These approaches are too restrictive to handle the increasingly complex corpus nowadays. As such, the rewritten queries significantly differ from the original one in intent. More recently, neural network-based query rewriting algorithms (He et al., 2016b; Xiao et al., 2019; Yang et al., 2019) are proposed. Most of such approaches employ a multi-stage training approach. Consequently, they involve complicated hand-crafted features or require excessive human annotations for the intermediate features (sometimes both). To overcome the drawbacks of existing methods, we propose an end-to-end context-aware query rewriting algorithm. Our model's backbone is the Transformer (Vaswani et al., 2017). In our contextaware model, the Transformer encoder learns representations for individual history queries. The representations are further transformed to carry crossquery relational information using a graph attention mechanism (GAT, Velickovic et al. 2018). The GAT computes contextual information of a session based on a session graph, where its nodes contain the history queries and the tokens contained in the history queries. After obtaining the contextual information from the GAT, it is aggregated with the instant search using an aggregation network. The augmented information is subsequently fed into the Transformer decoder to generate rewritten queries. Our proposed method improves upon existing works from three aspects. First, our model does not involve recursion, unlike conventional recurrent neural network-based approaches (He et al., 2016b; Yang et al., 2019; Xiao et al., 2019). This facilitates training deep models containing dozens of layers capable of capturing high-order information. Second, our end-to-end sequence-to-sequence learning formulation eliminates the necessity of excessive labeled data. Previous approaches (Yang et al., 2019; Xiao et al., 2019) require the judgment of "semantic similarity", and thus crave for human annotations, which are expensive to obtain. In contrast, our method uses search logs as supervision, which does not involve human effort, and are cheap to acquire. Third, our method can leverage powerful pre-trained language models, such as BART (Lewis et al., 2020). Such models contain rich semantic information and are successful in numerous natural language processing tasks (Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019). We demonstrate the effectiveness of our method on in-house data from an online shopping platform. Our context-aware query rewriting model outperforms various baselines by large margins. Notably, comparing with the best baseline method (Transformer-based model), our model achieves 11.6% improvement under the MRR (Mean Reciprocal Rank) metric and 20.1% improvement under the HIT@16 metric (a hit rate metric). We further verify the effectiveness of our approach by conducting online A/B tests. ## 2 Related Works One line of work uses statistical methods. For example, Cui et al. 
(2002, 2003) extract probabilistic correlations between the search queries and the product descriptions. Other works extract features that are related to the user's current search (Huang et al., 2003; Huang and Efthimiadis, 2009), or from relational information among the user's history searches (Billerbeck et al., 2003; Baeza-Yates and Tiberi, 2007; Wang and Zhai, 2007; Cao et al., 2008; Wang and Zhai, 2008). There are also statistical machine translation-based models (Riezler et al., 2007; Riezler and Liu, 2010) that employ sequence-to-sequence approaches. The aforementioned statistical methods suffer from unreliable extracted features, such that the rewritten queries differ from the original one in intent. Another line of work focuses on neural query rewriting models (He et al., 2016b; Xiao et al., 2019; Yang et al., 2019). These models adopt recurrent neural networks (RNNs, Hochreiter and Schmidhuber 1997; Sutskever et al. 2014) to learn a vectorized representation for the user's search query, after which KNN-based methods are used to find queries that yield similar representations. One major limitation is that the rewritten queries are limited to the previously presented ones. Also, these methods often involve complicated and ungrounded feature function designs, e.g., He et al. (2016b) and Xiao et al. (2019) hand-crafted 18 feature functions, or require excessive labeled data (Yang et al., 2019). Other works (Sordoni et al., 2015; Dehghani et al., 2017; Jiang and Wang, 2018) use RNNs for generative query suggestion, but they inherit the weaknesses of RNNs and yield unsatisfactory performance in practice. Note that Grbovic et al. (2015) construct contextaware query embeddings using word2vec (Mikolov et al., 2013). In their approach, an embedding is learned for each distinct query in the dataset. As such, the quality of the learned embeddings rely heavily on the number of occurrences of each query. This method is not applicable to our case because in our dataset, almost all the queries are distinct. ## 3 Problem Setup The session data are collected from search logs. First, we collect all the searches from a specific user within a time window, and we call the searches a "session". After the user purchases a product, the session ends, i.e., we do not consider subsequent queries and behaviors after a purchase happens. This is because after a purchase, the user's intent often change. Note that different sessions may be collected from different users. Each session contains multiple searches from the same user. We call the last query in the session the "target" query, the second to the last query the "source" (or the "instance) query, and the others the "history" queries. The intuition behind this is that because sessions always end with a purchase, the last search (i.e., the target) reflects the user's real intent. When the user enters the second to the last search (i.e., the source), if we can rewrite it to the target query, the user's intent will be fulfilled. Below is an example of a search session. From the history queries, the user is interested in car related banners/posters. The source query contains a typo ("doger" is a baseball team) and we should rewrite it to the target query ("dodge posters"). 
History: {dodge banners; mopar poster}
Source (Instance): dodger posters
Target: dodge posters

We collect about 3 million (M) sessions, where each session consists of at least 3 history queries, a source query (i.e., the one we need to rewrite), and a target query (i.e., the ground-truth query that is associated with the purchase). We have roughly 18.7M queries, and on average, each session contains 4 history queries. Query rewriting is consequently formulated as a sequence-to-sequence learning problem. We highlight that, per our formulation, we do not need human annotations, unlike existing approaches.

## 4 Method

![2_image_0.png](2_image_0.png)

Figure 2 illustrates our context-aware query rewriting model. The model contains four parts: a conventional Transformer (Vaswani et al., 2017) encoder, a graph attention mechanism (Velickovic et al., 2018) that captures the user's purchase intent, an aggregation network that encodes the history searches, and a conventional Transformer decoder that generates the rewritten query candidates.

## 4.1 Transformer Encoder

For a given source query, we first pad it with a <boq> (begin-of-query) token. Then, we pass the padded query through a Transformer encoder, after which we have its hidden representation Hs ∈ R^{Ls×d}. Here Ls is the length of the padded query, and d is the hidden dimension. We also pass all the history queries corresponding to this source query through the encoder, and we have the history query representation Uh ∈ R^{Nh×Lh×d}, where Nh is the number of history queries and Lh is the padded length. More details are presented in Appendix A.

## 4.2 Contextual Information From Session Graphs

![3_image_0.png](3_image_0.png)

After we obtain the history query representations Uh, the next step is to refine them. Such refinement is necessary because the Transformer encoder considers the history queries separately, such that their interactions are not taken into account. However, since each search depends on its previous searches in the same session, modeling cross-query relations is imperative for determining the user's purchase intent. To this end, we use a graph attention mechanism (Velickovic et al., 2018; Wang et al., 2020) to capture contextual information from Uh.

## 4.2.1 Session Graph Construction

First we specify how to build a graph for each session, which we call the session graph. Suppose we have a session that contains three history queries:

$$\begin{array}{l} Q_{1}:\left\{\text{Search query: }T_{1},T_{3}\right\}, \\ Q_{2}:\left\{\text{Search query: }T_{1},T_{2},T_{3}\right\}, \\ Q_{3}:\left\{\text{Search query: }T_{1},T_{2},T_{4},T_{5}\right\}, \end{array}\tag{1}$$

where Q1, Q2, Q3 are the three queries, and T1, ..., T5 are the five tokens that appear in the three queries. Recall Section 3 for the problem setup. Figure 3 (left) illustrates the session graph.

## 4.2.2 Node Representations

The next step is to refine the node representations. Each of the nodes in the session graph has its own representation. The token representations are simply the corresponding representations of the tokens, extracted from the token embedding matrix. The query representations are the representations of the <boq> token in each padded history query, i.e., the representation of the Q1 query node in Figure 3 is found by Uh[0, 0, :] ∈ R^d. Denote Gq = {qi}_{i=1}^{Nq} and Gt = {ti}_{i=1}^{Nt} the sets of representations for the query and token nodes, respectively. Here Nq is the number of query nodes and Nt is the number of token nodes.
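A minimal sketch of the session-graph construction in Section 4.2.1 and the node-representation lookup in Section 4.2.2 is given below; the function names, the token-id toy session, and the use of PyTorch tensors are illustrative choices, not the authors' released implementation.

```python
# Sketch (not the authors' code): build the bipartite query-token session graph
# and gather the initial node representations.
import torch

def build_session_graph(history_queries):
    """history_queries: one list of token ids per history query. Returns the
    number of query nodes, a token-id -> token-node-index map, and undirected
    edges linking each query node to the token nodes it contains."""
    token_nodes, edges = {}, []
    for q_idx, tokens in enumerate(history_queries):
        for t in set(tokens):  # one edge per distinct token in the query
            t_idx = token_nodes.setdefault(t, len(token_nodes))
            edges.append((q_idx, t_idx))
    return len(history_queries), token_nodes, edges

def initial_node_reps(U_h, token_nodes, token_embedding):
    """U_h: (N_h, L_h, d) encoder outputs of the padded history queries.
    Query nodes use the <boq> position; token nodes use the embedding matrix."""
    q_reps = U_h[:, 0, :]                                   # <boq> representation of each query
    t_ids = torch.tensor(sorted(token_nodes, key=token_nodes.get))
    t_reps = token_embedding(t_ids)                         # (N_t, d)
    return q_reps, t_reps

# Toy session mirroring Eq. (1): Q1 = {T1, T3}, Q2 = {T1, T2, T3}, Q3 = {T1, T2, T4, T5}
n_query_nodes, token_nodes, edges = build_session_graph([[1, 3], [1, 2, 3], [1, 2, 4, 5]])
```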
Note that all the node representations have the same size, i.e., qi, ti ∈ R d. ## 4.2.3 Update Node Representations We use a multi-head graph attention mechanism to update the node representations. For simplicity, denote Ng = Nq +Ntthe number of distinct nodes in the session graph, and G = Gq ∪ Gt = {gi} Ng i=1 the set of all the node representations. With the above notations, a single-head graph attention mechanism is defined as $$\begin{array}{l}{{h_{i}=g_{i}+\mathrm{ELU}\left(\sum_{j\in{\cal N}_{i}}\alpha_{i j}W_{v}g_{j}\right),}}\\ {{\mathrm{where}\;\alpha_{i j}=\frac{\exp(z_{i j})}{\sum_{\ell\in{\cal N}_{i}}\exp(z_{i\ell})},}}\\ {{z_{i j}=\mathrm{LeakyReLU}\left(W_{a}[W_{q}g_{i};W_{k}g_{j}]\right).}}\end{array}$$ Here ELU(x) = x · 1{x > 0} + (exp(x) − 1) · 1{x ≤ 0} is the exponential linear unit, Ni denotes the neighbor of the i-th node, and Wa, Wq, Wk, Wv are trainable weights. Note that a residual connection (He et al., 2016a) is added to the last equation in Eq. 2. This has proven to be an effective technique to prevent gradient vanishing, and hence, to stabilize training. The session graph only induces attention between nodes that are connected. For example, in Figure 3 (right), the model updates Q1 and Q2 using T3, while Q3 is unchanged, i.e., NT3 = {Q1, Q2}. A multi-head graph attention mechanism is then defined as the concatenation of [h 1 i , h2 i , · · · , hK i ], where K is the number of heads, and each of the hiis calculated via Eq. 2. The token node representations and the query node representations are updated iteratively. First, we update the token representations (Gt) using the query representations (Gq), in order that the tokens acknowledge to which queries they belong. Then, Gq is re-computed using the updated version of Gt, which essentially evaluates cross-query relations, using the token nodes as intermediaries. Note that the graph attention mechanism (GAT) used in each of the two steps are distinct, i.e., there are two different sets of weights [Wa, Wq, Wk, Wv]. Eventually, we obtain the updated vectorized representations {hi} Ng i=1 for all the nodes, and we treat them as the contextual information of the session. We remark that the GAT mechanism explicitly models cross-query relations by associating query representations with word representations. Such an approach is fundamentally different from existing methods, where the relations are either ignored (e.g., conventional Transformer attention) or captured via recursion (e.g., RNN-based approaches). ## 4.3 Session Representations Recall that we pass the source query through a Transformer encoder and obtain Hs ∈ R Ls×d. The matrix Hs contains representations for all the tokens in the source query. We use that of the ![4_image_0.png](4_image_0.png) prepended <boq> token as the representation of the source query, which is denoted hs ∈ R d. We adopt an aggregation network to extract useful information with respect to hs from the contextual information {hi} Nh+Nt i=1 . The network employs an attention mechanism that determines to what extent each vector hi contributes to the source query hs. Figure 4 illustrates the architecture of the aggregation network. Concretely, $$H_{\rm sess}=H_{s}+\sum_{i=1}^{N_{g}}\alpha_{i}W_{v}h_{i},\tag{3}$$ where $\alpha_{i}=\frac{\exp(z_{i})}{\sum_{j=1}^{N_{g}}\exp(z_{j})},z_{i}=(W_{k}h_{i})^{\top}h_{s},\,W_{k}$. and Wv are trainable weights. The summation in the last equation in Eq. 3 is conducted row-wise, wherein Hsess, Hs ∈ R Ls×d, and v ∈ R d. 
The matrix Hsess serves as the representation of the session. Intuitively, by incorporating the aggregation network, we can filter out redundant information from the session history and only keep the ones pertinent to the source query. After the Transformer encoder, the graph attention mechanism, and the aggregation network, we obtain Hsess, the session representation that contains information on both the source query and its history searches. Subsequently, Hsess is fed into the Transformer decoder to generate rewritten query candidates. The algorithm is detailed in Algorithm 1 in Appendix D. ## 5 Experiments We conduct experiments on some in-house data. We implement two methods with different model architectures: *Transformer+Aggregation+Graph* and *BART+Aggregation+Graph*. The first one is constructed in the previous section, and the second one employs a fine-tuning approach instead of training-from-scratch. The training details are deferred to Appendix B.3. More experimental results are shown in Appendix C. ## 5.1 Baselines For baselines with pre-training, we use MeshBART (Chen and Lee, 2020) and BART (Lewis et al., 2020). For baselines without pre-training, we use LQRW (He et al., 2016b), HRED (Sordoni et al., 2015) and MeshTransformer (Chen and Lee, 2020) (a variant of MeshBART where we train the model from scratch). We also compare our algorithm with two model variants: *Transformer+Aggregation* and BART+Aggregation, where we use the aggregation network but not the GAT mechanism. Please refer to Appendix B.1 for details. ## 5.2 Evaluation Metrics We use BLEU, MRR (Mean Reciprocal Rank), HIT@1, and HIT@16 to evaluate the query rewriting models. For all metrics except BLEU, we report the gains over the the results calculated by using only source queries. We remark that MRR, HIT@1, and HIT@16 (the percentage that the actual product is ranked within the first 16 products i.e., the first page when we search the rewritten query) are more important than BLEU, because MRR and HIT are directly linked to user experience. Please refer to Appendix B.2 for details about these metrics. ## 5.3 Experimental Results Table 1 summarizes experimental results. Recall that in our formulation, we rewrite a source query to a target query. The "target query" entry in Table 1 is the performance gain of the ground truth target query, i.e., this entry signifies upper bounds of performance gain that any model can achieve. We can see that the attention-based models (i.e., BART, MeshBART, Transformer and MeshTransformer) outperforms the recurrent neural networkbased approach (i.e., LQRW and HRED). This is because RNNs suffer from forgetting and training issues. In contrast, Transformer-based models use the attention mechanism instead of recursion to capture dependencies, which has proven to be more effective. Moreover, by aggregating history searches, BART+Aggregation and *Transformer+Aggregation* consistently outperform their vanilla alternatives. Essentially performance of these two methods indicate that integrating history queries into training is critical. The performance is further enhanced by incorporating the session graphs. Specifically, *Transformer+Aggregation+Graph* achieves the best performance under almost all the metrics. Notice that the HIT@16 metric gain improves from +15.9 to | The best results are shown in bold. 
Number of candidates | #Candidates=5 | #Candidates=10 | BLEU | | | | | |------------------------------------------------------------|-----------------|------------------|--------|-------|-------|--------|------| | Metric | MRR | HIT@1 | HIT@16 | MRR | HIT@1 | HIT@16 | | | Source Query | 0 | 0 | 0 | 0 | 0 | 0 | - | | Target Query | +16.1 | +10.6 | +29.0 | +16.1 | +10.6 | +29.0 | - | | Baseline methods LQRW | +3.5 | +2.5 | +6.4 | +6.8 | +4.9 | +12.6 | 29.4 | | HRED | +4.7 | +3.2 | +8.4 | +8.1 | +5.7 | +14.2 | 25.7 | | BART | +4.6 | +3.1 | +8.2 | +8.2 | +5.5 | +14.8 | 30.9 | | Transformer | +4.3 | +2.6 | +9.2 | +8.5 | +5.6 | +15.9 | 25.3 | | MeshBART | +5.0 | +3.8 | +8.7 | +8.3 | +5.8 | +14.3 | 31.7 | | MeshTransformer | +4.0 | +2.7 | +8.4 | +8.3 | +5.6 | +15.7 | 25.9 | | Our methods BART+Aggregation | +6.3 | +3.9 | +10.9 | +9.7 | +6.4 | +17.1 | 31.9 | | Transformer+Aggregation | +5.2 | +2.9 | +10.8 | +10.2 | +7.0 | +17.3 | 27.2 | | BART+Aggregation+Graph | +6.9 | +4.6 | +11.8 | +10.5 | +7.5 | +17.6 | 32.9 | | Transformer+Aggregation+Graph | +6.6 | +4.6 | +12.0 | +11.6 | +8.3 | +20.1 | 28.2 | +20.1 when employing both the aggregation network and the session graph formulation for the Transformer-based models. We highlight that the graph attention mechanism can directly captures cross-query relations, which is implausible for all the baselines. We can see that this property indeed contributes to model performance, i.e., HIT@16 increases from +17.3 to +20.1 when we equip *Transformer+Aggregation* with the GAT mechanism. Notice that BLEU is not a definitive metric. For example, the MRR and HIT metrics of HRED are consistently higher than those of LQRW, even though the BLEU score of the former is significantly lower than the latter. Also, compared with Transformer-based models, the BLEU score is consistently higher when using the BART model as the backbone. This is because a pre-trained language model contains more semantic information. However, the MRR and HIT metrics of the BARTbased models are slightly worse than those of the Transformer-based models. However, the BLEU score is comparable for models with the same backbone. For example, for Transformer vs. *Transformer+Aggregation* vs. *Transformer+Aggregation+Graph*, the BLEU scores are 25.3 vs. 27.2 vs. 28.2. Such a tendency coincides with the online metrics. We observe the same results from BART-based models. ## 5.4 Online A/B Test We conduct online A/B experiments on a largescale e-commerce shopping platform with our query rewriting models. For a given search query within a session, we generate one reformulated query using the proposed model, and we feed both the original query and rewritten query into the | with and without context. Example 1 | dodge led sign; dodge banners; | |---------------------------------------|-------------------------------------------| | History | mopar banner; mopar poster | | Source | dodger posters | | Target | dodge posters | | Rewritten w/o context | dodger flag | | Rewritten w/ context | dodge poster | | Example 2 | samsung galaxy case; | | History | samsung galaxy a11 case; samsung a11 case | | Source | samsung galaxy a7 | | Target | samsung galaxy a7 case | | Rewritten w/o context | samsung galaxy a7 charger | | Rewritten w/ context | samsung galaxy a7 case | search system. Experiments are conducted over five days, during which our system processed over 30 million sessions. 
The proposed method improves business metrics in terms of revenue and also significantly decreases the number of reformulated searches. This indicates that the rewritten queries better meet customers' shopping intent, since customers are able to find their desired products with fewer searches.

## 5.5 Case Studies

⋄ **Advantages of leveraging history information** Two examples are shown in Table 2. The first example is error correction. In the example, the customer wishes to purchase dodge (a car brand) posters, but she mistakenly searches for dodger (a baseball team) posters. Without history information, it is impossible to determine the customer's true intent. However, by looking at session histories, we find that all the previous searches are related to automobiles (e.g., dodge and mopar), and therefore the query should be rewritten to "dodge posters". Our model successfully captures this pattern. Notice that the rewritten query without leveraging context does not match the user's intent. The second example is keyword refinement. In the example, by looking at the history searches, it is obvious that the customer wishes to find phone cases, instead of phones. However, this intent is impossible to capture by using only the source query. Our model automatically adds the keyword "case" to the source query and matches the target query. On the other hand, without the context information, the rewritten result is not satisfactory.

Table 3: Two examples of generated queries and their associated likelihood.

| Type | Query | Likelihood | Query | Likelihood |
|------|-------|------------|-------|------------|
| History | iphone 11 pro case pokemon; iphone 11 pro case eevee; iphone 11 pro case hetalia; iphone 11 pro case sailor moon | - | colorado 2005 tail lights; colorado 2005 door; colorado 2005 accessories | - |
| Source | iphone 11 pro case snow leopard | - | colorado headlights | - |
| Target | iphone 11 pro case tiger | - | colorado 2005 headlights | - |
| Rewritten | iphone 11 pro case disney | 0.497 | 2005 colorado headlights | 0.566 |
| | iphone 11 pro case sailor moon | 0.492 | colorado headlights 2005 | 0.458 |
| | iphone 11 pro case harry potter | 0.445 | colorado headlights led | 0.357 |
| | iphone 11 pro case | 0.440 | colorado headlights assembly | 0.301 |
| | iphone 11 pro case cute | 0.419 | colorado tail lights | 0.289 |
| | iphone 11 pro case leopard | 0.391 | colorado headlights housing | 0.237 |
| | iphone 11 pro case clear | 0.379 | colorado led headlights | 0.234 |
| | iphone 11 pro case disney princess | 0.372 | 2004 colorado headlights | 0.230 |
| | iphone 11 pro case pink | 0.364 | colorado 2004 headlights | 0.214 |
| | iphone 11 pro case totoro | 0.353 | colorado headlights 2004 | 0.208 |

⋄ **Diversity of query generation** Table 3 demonstrates two examples. In the first example (the left three columns), notice that our model can extract information from history queries, e.g., "iphone 11 case sailor moon", and can delete keywords that are deemed insignificant or too restrictive, e.g., "iphone 11 case leopard" instead of "snow leopard". Also, our model can effectively capture domain information. For example, some of the history query keywords (e.g., pokemon, eevee) are often described as "cute", and our model recommends this keyword.
All the history keywords are from Japanese anime series, therefore our model suggests another popular character, "totoro". Additionally, the "disney" and "disney princess" keywords are generated based on the interest to virtual characters. Finally, notice that the likelihood of all the suggested queries is similar, which means our model cannot single out a significantly better query than the others. Therefore our model generated a diverse group of queries. In the second example (the right two columns), the generated query successfully matches the target query. Note that the top two generated queries have high likelihood, and the likelihood decreases drastically as the suggested queries become more and more implausible. In this example, the first query is 172% more likely than the tenth query, whereas this number is only 41% in the previous example. This suggests that our model can differentiate between good quality suggestions and poor quality alternatives. ## 6 Conclusion And Discussion We propose an end-to-end context-aware query rewriting model that can efficiently leverage user's history behavior. Our model infers a user's purchase intent by modeling her history searches as a graph, on which a graph attention mechanism is applied to generate informative session representations. The representations are subsequently decoded into rewritten queries. Our proposed session graph can be extended to incorporate more information. Here, we present a bipartite graph, which contains words and queries. Additional components can be added as extra layers. For example, we can add product information such as categories to the session graph, which will create 3-partite session graphs (word, query and product). ## References Ricardo Baeza-Yates and Alessandro Tiberi. 2007. Extracting semantic relations from query logs. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 76–85. Bodo Billerbeck, Falk Scholer, Hugh E Williams, and Justin Zobel. 2003. Query expansion using associated queries. In Proceedings of the twelfth international conference on Information and knowledge management, pages 2–9. Huanhuan Cao, Daxin Jiang, Jian Pei, Qi He, Zhen Liao, Enhong Chen, and Hang Li. 2008. Context-aware query suggestion by mining click-through and session data. In *Proceedings of the 14th ACM SIGKDD* international conference on Knowledge discovery and data mining, pages 875–883. Ruey-Cheng Chen and Chia-Jung Lee. 2020. Incorporating behavioral hypotheses for query generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3105–3110, Online. Association for Computational Linguistics. Hang Cui, Ji-Rong Wen, Jian-Yun Nie, and Wei-Ying Ma. 2002. Probabilistic query expansion using query logs. In Proceedings of the Eleventh International World Wide Web Conference, WWW 2002, May 711, 2002, Honolulu, Hawaii, USA, pages 325–332. ACM. Hang Cui, Ji-Rong Wen, Jian-Yun Nie, and Wei-Ying Ma. 2003. Query expansion by mining user logs. IEEE Transactions on knowledge and data engineering, 15(4):829–839. Mostafa Dehghani, Sascha Rothe, Enrique Alfonseca, and Pascal Fleury. 2017. Learning to attend, copy, and generate for session-based query suggestion. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 - 10, 2017, pages 1747– 1756. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Mihajlo Grbovic, Nemanja Djuric, Vladan Radosavljevic, Fabrizio Silvestri, and Narayan Bhamidipati. 2015. Context- and content-aware embeddings for query rewriting in sponsored search. In *Proceedings* of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, August 9-13, 2015, pages 383–392. ACM. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016a. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. IEEE Computer Society. Yunlong He, Jiliang Tang, Hua Ouyang, Changsung Kang, Dawei Yin, and Yi Chang. 2016b. Learning to rewrite queries. In Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, pages 1443–1452. ACM. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural computation*, 9(8):1735– 1780. Chien-Kang Huang, Lee-Feng Chien, and Yen-Jen Oyang. 2003. Relevant term suggestion in interactive web search based on contextual information in query session logs. *Journal of the American Society for* Information Science and Technology, 54(7):638–649. Jeff Huang and Efthimis N Efthimiadis. 2009. Analyzing and evaluating query reformulation strategies in web search logs. In Proceedings of the 18th ACM conference on Information and knowledge management, pages 77–86. Jyun-Yu Jiang and Wei Wang. 2018. RIN: reformulation inference network for context-aware query suggestion. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 2226, 2018, pages 197–206. ACM. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In *7th International* Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. 
In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)*, pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Stefan Riezler and Yi Liu. 2010. Query rewriting using monolingual statistical machine translation. *Computational Linguistics*, 36(3):569–582. Stefan Riezler, Alexander Vasserman, Ioannis Tsochantaridis, Vibhu Mittal, and Yi Liu. 2007. Statistical machine translation for query expansion in answer retrieval. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 464–471, Prague, Czech Republic. Association for Computational Linguistics. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, CIKM 2015, Melbourne, VIC, Australia, October 19 - 23, 2015, pages 553–562. ACM. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in Neural Information Processing Systems 30: Annual Conference on Neural* Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 6209–6219, Online. Association for Computational Linguistics. Xuanhui Wang and ChengXiang Zhai. 2007. Learn from web search logs to organize search results. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 87–94. 
Xuanhui Wang and ChengXiang Zhai. 2008. Mining term association patterns from search logs for effective query reformulation. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 479–488. Rong Xiao, Jianhui Ji, Baoliang Cui, Haihong Tang, Wenwu Ou, Yanghua Xiao, Jiwei Tan, and Xuan Ju. 2019. Weakly supervised co-training of query rewriting andsemantic matching for e-commerce. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM 2019, Melbourne, VIC, Australia, February 11-15, 2019, pages 402–410. ACM. Yatao Yang, Jun Tan, Hongbo Deng, Zibin Zheng, Yutong Lu, and Xiangke Liao. 2019. An active and deep semantic matching framework for query rewrite in e-commercial search engine. In *Proceedings of the* 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pages 309–318. ACM. ## A Transformer Encoder Details For a given source query, we first pad it with a <boq> (begin-of-query) token. Then, we pass the padded query through a token embedding layer and a position embedding layer, and we obtain Ys ∈ R Ls×d. Here Ls is the length of the padded source query, and d is the embedding dimension. Note that the position embedding can either be a sinusoidal function or a learned matrix. After the initial embedding layers, we pass Ys through the self-attention module. Specifically, we compute attention output S by $$S=\mathrm{Softmax}\left(\frac{Q K^{T}}{\sqrt{d_{k}}}V\right),$$ where $Q=Y_{s}W_{q},\ K=Y_{s}W_{k},\ V=Y_{s}W_{v}.$ Here Wq, Wk ∈ R d×dK , Wv ∈ R d×dV are learnable weights. In practice we use multi-head selfattention to increase model flexibility. To facilitate this, different attention outputs S1, · · · , SH are computed using different sets of weights {Wh q, Wh k , Wh v } H h=1. The final attention output is $$S=\left[S_{1},S_{2},\cdots,S_{H}\right]W_{o},$$ where Wo ∈ R HdV ×dis a learnable aggregation matrix. The attention output is then fed through a position-wise feed-forward neural network to generate encoded representation Hs ∈ R Ls×dfor the source query: $$H_{s}=\mbox{ReLU}\left(S\;W_{\mbox{\tiny FFN}}^{1}+b^{1}\right)W_{\mbox{\tiny FFN}}^{2}+b^{2}.\tag{6}$$ $\downarrow\downarrow\downarrow2\downarrow$ . Here {W1 FFN, W2 FFN, b1, b2} are weights of the neural network. Equations 4, 5, and 6 constitute as an encoder block. In practice we stack multiple encoder blocks to build the Transformer encoder, as demonstrated in Figure 2. For the history queries in this session, we also pad them with <boq> tokens. Suppose that we have Nh padded history queries (recall a session contains multiple history queries), and their respective length is denoted by L 1 h , · · · , LNh h. We pad the history queries to the same length, and we obtain the history query matrix Xh ∈ R Nh×Lh , where Lh = max{L 1 h , · · · , LNh h}. Then, following the same procedures as encoding the source query, we pass Xh through the embedding layers and the encoder blocks, after which we obtain the history query representations Uh ∈ R Nh×Lh×d. ## B Experiments Details B.1 Baselines The baselines are split into two groups: without pre-training and with pre-training. For the w/o pre-training group, we build the following models: ⋄ *Learning to Rewrite Queries (LQRW)* (He et al., 2016b) is one of the first methods that applies deep learning techniques to query rewriting. 
Specifically, the LQRW model combines a sequenceto-sequence LSTM (Hochreiter and Schmidhuber, 1997; Sutskever et al., 2014) model with statistical machine translation (Riezler and Liu, 2010) techniques to generate queries. The candidates are subsequently ranked using hand-crafted feature functions. ⋄ **Hierarchical Recurrent Encoder-Decoder** (HRED) (Sordoni et al., 2015) employs a hierarchical recurrent neural network for generative query suggestion. The model is a step forward from its predecessors in that HERD is sensitive to the order of queries and the method is able to suggest rare and long-tail queries. $$({\boldsymbol{5}})$$ ⋄ *Transformer* (Vaswani et al., 2017) has achieved superior performance in various sequence-tosequence (seq2seq) learning tasks. To adopt Transformer to query rewriting, we treat the source query as the source-side input, and the target query as the target-side input. Then we train a model using only these constructed inputs, similar to machine translation. Note that this setting resembles most of the existing works. We adopt the Transformer-base architecture, which contains about 72M parameters. ⋄ *MeshTransformer* (Chen and Lee, 2020) is a variant of MeshBART, where the pre-trained BART module is replaced by a Transformer and the model is trained from scratch. The method concatenates history queries to the source query in order to integrate contextual information. See the MeshBART method below for details. ⋄ *Transformer+Aggregation* is the model where we use the aggregation network to encode history search queries, i.e., without the graph attention mechanism. Specifically, we first obtain the representations of the source query and the history queries from the Transformer encoder. Then, we extract information related to the source query from the history representations using an aggregation network. Such information is added to the source representation, and we follow a standard decoding procedure using these two factors. See Section 4.3 for details. The second group of methods adopt pre-trained language models for query rewriting. ⋄ *BART* (Lewis et al., 2020) is a pre-trained seq2seq model. We adopt this particular model instead of, for example, BERT (Devlin et al., 2019) or GPT-2 (Radford et al., 2019), because we treat query rewriting as a seq2seq task. And the aforementioned architectures have either the Transformer encoder (e.g., BERT) or the Transformer decoder (e.g., GPT-2), but not both. In our experiments, BART is fine-tuned in a setting similar to training the Transformer model. We adopt the BART-base architecture in all the experiments, which contains about 140M parameters. ⋄ *MeshBART* (Chen and Lee, 2020) is a BARTbased model that first concatenates the history queries to the source query, and then feeds the concatenated input to a pre-trained BART model for query generation. Note that the original method requires click information. We remove this component as the proposed method do not need such data. ⋄ *BART+Aggregation* is similar to *Transformer+Aggregation*, except we replace the Transformer backbone with the pre-trained seq2seq BART model. ## B.2 Evaluation Metrics We use the BLEU score (Post, 2018) as an evaluation metric. This metric is constantly used to evaluate the quality of translation. We adopt it here because similar to machine translation, we formulate query rewriting as a seq2seq learning task. The correlation between the rewritten query and the target query reflects the model's ability to capture the user's purchase intent. 
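A minimal sketch of the corpus-level BLEU computation between rewritten and target queries, using the sacrebleu package (Post, 2018). The example query strings are illustrative.

```python
# Sketch: corpus BLEU between rewritten queries (hypotheses) and target queries (references).
import sacrebleu

rewritten = ["dodge poster", "samsung galaxy a7 case"]
targets = ["dodge posters", "samsung galaxy a7 case"]

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(rewritten, [targets])
print(f"BLEU = {bleu.score:.1f}")
```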
The MRR metric describes the accuracy of the rewritten queries. For each source query in the test set, we generate 10 candidate queries r1, · · · , r10. Then we search each of these candidates using our production search engine, and we obtain the returned products, of which we only keep the top 32. Recap that we know the actual product that the customer purchased. The next step is to calculate the reciprocal of the actual product's rank for each of r1, · · · , r10. For example, suppose for r1, the actual purchased product is the second within the 32 returned products, then the score for r1 is score1 = 1/2 = 0.5. The score of the rewritten queries r1, · · · , r10 is then defined as max{scorei} 10 i=1. Finally, the score for the query rewriting model is the average over all the source query scores. We also use HIT@1 and HIT@16 as evaluation metrics. The HIT@16 metric is the percentage that the actual product is ranked within the first 16 products (the first page) when we search the rewritten query. And the HIT@1 metric is similarly defined. ## B.3 Training Details We use the *Fairseq* (Ott et al., 2019) code-base with PyTorch (Paszke et al., 2019) as the back-end to implement all the methods. All the experiments are conducted using 8 NVIDIA V100 (32GB) GPUs. For training a Transformer model from scratch, we adopt the Transformer-base (Vaswani et al., 2017) architecture. We use Adam (Kingma and Ba, 2015) as the optimizer, and the learning rate is chosen from {3 × 10−4, 5 × 10−4, 1 × 10−3}. We use 4 heads for the multi-head graph attention mechanism, where the head dimension is set to be 128 (note that the Transformer-base architecture has embedding dimension 512). For fine-tuning a BART model, we adopt the BART-base (Lewis et al., 2020) architecture. We use AdamW (Loshchilov and Hutter, 2019) as the optimizer, and the learning rate is chosen from {3 × 10−5, 5 × 10−5, 1 × 10−4}. Similar to the training from scratch scheme, we adopt 4 heads, each with dimension 192, for the graph attention mechanism. For both training-from-scratch and fine-tuning, please refer to1 Ott et al. (2019) for more details such as pre-processing steps and other hyperparameters. ## C More Experimental Results C.1 Analysis ⋄ **BART vs. Transformer** Even though BART contains twice the number of parameters compared with Transformer (140M vs. 70M), models finetuned on BART yield lower MRR and HIT metrics (Table 1). One reason is that publicly available pretrained models are pre-trained on natural language corpus, but queries are usually short and have distinct structures. This raises doubts on whether cur-1https://github.com/pytorch/fairseq/blob/ master/examples/translation/README.md rent pre-trained models are suitable for the query domain. Indeed, the rich semantic information enables a much better BLEU score (32.9 vs. 28.2), but the MRR and HIT metrics suggest the fine-tuned models' unsatisfactory performance. Another reason is that in a conventional finetuning task, a task-specific head is appended to the pre-trained model, and the head usually contains only a small number of parameters. But in the query rewriting task, both the aggregation network and the graph attention mechanism contain a significant amount of parameters (about 10% of BART). This is problematic because in finetuning, the learning rate is usually small since nearly all the weights are supposed to be meaningful and should not change much. Yet, in our case, we need to properly train a large amount of randomly initialized parameters. 
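As a complement to the offline metrics defined in Appendix B.2 above, a minimal sketch of the MRR and HIT@k computation, assuming that for every candidate rewrite we already know the rank of the actually purchased product among the top-32 returned products (None if it is not returned). The example ranks are illustrative.

```python
# Sketch: MRR and HIT@k over a test set, following Appendix B.2.
# ranks_per_query[i] holds, for each candidate rewrite of source query i,
# the 1-based rank of the purchased product in the top-32 results, or None.
def mrr_and_hit(ranks_per_query, k=16):
    mrr, hit = 0.0, 0.0
    for candidate_ranks in ranks_per_query:
        valid = [r for r in candidate_ranks if r is not None]
        # Query score = best reciprocal rank over its candidate rewrites.
        mrr += max((1.0 / r for r in valid), default=0.0)
        hit += float(any(r <= k for r in valid))
    n = len(ranks_per_query)
    return mrr / n, hit / n

ranks = [[2, None, 5], [None, None, None], [1, 3, 12]]
print(mrr_and_hit(ranks, k=16))  # (MRR, HIT@16)
print(mrr_and_hit(ranks, k=1))   # second value is HIT@1
```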
Moreover, the aggregation network and the GAT are added inside the pre-trained model (more specifically, they are added to the BART encoder) instead of appended after BART. Essentially this nullifies the pre-trained parameters on the decoder side, imposing additional challenges to the fine-tuning task. Nevertheless, the *BART+Aggregation* model still outperforms the vanilla BART model, and the performance is further improved by adding the GAT (i.e., *BART+Aggregation+Graph*). ⋄ **Training from scratch vs. fine-tuning** Figure 5 plots the training and validation perplexity (ppl) of the training-from-scratch approach and the finetuning approach. From Figure 5a and Figure 5b, we can see that by employing the aggregation network, Transformer+Aggregation fits the data better and exhibits enhanced generalization. The training and validation ppls are further significantly improved by incorporating the graph attention mechanism, i.e., by using *Transformer+Aggregation+Graph*, we achieve even better performance. Notice that in Figure 5c, *BART+Aggregation* outperforms *BART+Aggregation+Graph* in terms of training ppl, which is different from the trainingfrom-scratch approach. As indicated by Figure 5d, BART+Aggregation shows clear sign of over-fitting. This is because even though pre-trained language models contain rich semantic information, much of it is considered "noisy" for query rewriting. Thus feature enhancement initiated by the graph attention mechanism is needed. the relation between model size and performance, where we decrease the embedding dimension (correspondingly the FFNs' hidden dimensions) and the number of layers. We can see that even with 1/8 of the parameters, model performance does not decrease much. Moreover, our model is more than 20% smaller than a BERT-base model (85M vs. 110M), rendering online deployment more than possible. ⋄ **Query length vs. performance** Figure 7 demonstrates model performance regarding length of the instant query. We can see that the BLEU score gradually decreases when the length increases. This is because long queries are often very specific (e.g., down to specific models or makes), making the rewriting task harder. ## D Detailed Algorithm ⋄ **Model size vs. performance** Figure 6 illustrates ![12_image_0.png](12_image_0.png) Figure 5: Training and validation perplexity using Transformer and BART as backbone. ![12_image_1.png](12_image_1.png) ![12_image_2.png](12_image_2.png) ## Algorithm 1: Context-Aware Query Rewriting. Input: D: dataset containing sessions; Initial parameters for the Transformer encoder and the Transformer decoder; Initial parameters for two graph attention mechanism (Eq. 2): GATt→q, GATq→t; Initial parameters for the aggregation network (Eq. 3); K: the number of updates on the session graph; N: the number of rewritten queries for each session. Output: A list that contains N generated queries for each session in the dataset. Rewritten results: rewritten = {}; for *each session in* D do /* **Encode input data.** */ Compute source representation Hs and history representation Uh using the Transformer encoder; /* **Apply graph attention.** */ Obtain initial representations G 0 t , G 0 q ; for k = 1 *· · ·* K do G k t = GATq→t(G k−1 q, G k−1 t); G k q = GATt→q(G k t, G k−1/2 q ); end Set history representation {hi} Nt+Nh i=1 = G K t ∪ GK h ; /* **Apply aggregation network.** */ Compute session representation Hsess from Hs and {hi} Nt+Nh i=1 using Eq. 
3; /* **Generate rewritten queries.** */ Generate N rewritten queries $\{q_i\}_{i=1}^{N}$ using the Transformer decoder and a beam search procedure; rewritten ← rewritten ∪ $\{q_i\}_{i=1}^{N}$; end Output: The rewritten queries.
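A simplified, untrained sketch of the overall structure of Algorithm 1 (encode, apply graph attention over the session, aggregate, decode). Standard `nn.MultiheadAttention` layers stand in for the GAT(t→q)/GAT(q→t) updates, the aggregation network is reduced to attention of the source representation over the graph-refined history states, and positional encodings and decoder masks are omitted; dimensions and module choices are assumptions, not the production configuration.

```python
import torch
import torch.nn as nn

d, vocab = 512, 32000
embed = nn.Embedding(vocab, d)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d, nhead=8, batch_first=True), num_layers=2)
gat_q2t = nn.MultiheadAttention(d, num_heads=4, batch_first=True)  # token nodes attend to query nodes
gat_t2q = nn.MultiheadAttention(d, num_heads=4, batch_first=True)  # query nodes attend to token nodes
aggregate = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d, nhead=8, batch_first=True), num_layers=2)
lm_head = nn.Linear(d, vocab)

def rewrite_step(src_ids, hist_ids, tgt_ids, K=2):
    # Encode source query and history queries (one flat history sequence for brevity).
    H_s = encoder(embed(src_ids))                      # (1, L_s, d)
    U_h = encoder(embed(hist_ids))                     # (1, L_h, d)
    # Initial node states: token nodes = history token states,
    # query nodes = mean-pooled history states (a single pooled query node here).
    G_t, G_q = U_h, U_h.mean(dim=1, keepdim=True)
    for _ in range(K):                                 # K rounds of bipartite updates
        G_t, _ = gat_q2t(G_t, G_q, G_q)
        G_q, _ = gat_t2q(G_q, G_t, G_t)
    # Aggregation: source states attend to graph-refined session nodes.
    nodes = torch.cat([G_t, G_q], dim=1)
    context, _ = aggregate(H_s, nodes, nodes)
    H_sess = H_s + context                             # session representation
    # Decode the rewritten query (teacher forcing; beam search at inference).
    out = decoder(embed(tgt_ids), H_sess)
    return lm_head(out)                                # (1, L_t, vocab)

logits = rewrite_step(torch.randint(0, vocab, (1, 5)),
                      torch.randint(0, vocab, (1, 12)),
                      torch.randint(0, vocab, (1, 4)))
print(logits.shape)
```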
xu-etal-2023-federated
Federated Learning of Gboard Language Models with Differential Privacy
https://aclanthology.org/2023.acl-industry.60
We train and deploy language models (LMs) with federated learning (FL) and differential privacy (DP) in Google Keyboard (Gboard). The recent DP-Follow the Regularized Leader (DP-FTRL) algorithm is applied to achieve meaningfully formal DP guarantees without requiring uniform sampling of clients. To provide favorable privacy-utility trade-offs, we introduce a new client participation criterion and discuss the implication of its configuration in large scale systems. We show how quantile-based clip estimation can be combined with DP-FTRL to adaptively choose the clip norm during training or reduce the hyperparameter tuning in preparation of training. With the help of pretraining on public data, we trained and deployed more than fifteen Gboard LMs that achieve high utility and $\rho$-zCDP privacy guarantees with $\rho \in (0.3, 2)$, with one model additionally trained with secure aggregation. We summarize our experience and provide concrete suggestions on DP training for practitioners.
# Federated Learning Of Gboard Language Models With Differential Privacy Zheng Xu∗, Yanxiang Zhang∗**, Galen Andrew, Christopher A. Choquette-Choo,** Peter Kairouz, H. Brendan McMahan, Jesse Rosenstock, Yuanbo Zhang Google ## Abstract We train language models (LMs) with federated learning (FL) and differential privacy (DP) in the Google Keyboard (Gboard). We apply the DP-Follow-the-Regularized-Leader (DP-FTRL) (Kairouz et al., 2021b) algorithm to achieve meaningfully formal DP guarantees without requiring uniform sampling of client devices. To provide favorable privacy-utility trade-offs, we introduce a new client participation criterion and discuss the implication of its configuration in large scale systems. We show how quantile-based clip estimation (Andrew et al., 2021) can be combined with DPFTRL to adaptively choose the clip norm during training or reduce the hyperparameter tuning in preparation for training. With the help of pretraining on public data, we train and deploy more than twenty Gboard LMs that achieve high utility and ρ−zCDP privacy guarantees with ρ ∈ (0.2, 2), with two models additionally trained with secure aggregation (Bonawitz et al., 2017). We are happy to announce that all the next word prediction neural network LMs in Gboard now have DP guarantees, and all future launches of Gboard neural network LMs will require DP guarantees. We summarize our experience and provide concrete suggestions on DP training for practitioners. ## 1 Introduction FL and Gboard LMs. In cross-device federated learning (FL), client devices collaboratively train a model without directly exchanging their local data (Kairouz et al., 2019). Google Keyboard (Gboard) was an early adopter of FL to train models that improve the user experience, following data minimization principles (Bonawitz et al., 2021) to protect users' privacy from some risks. Language models (LMs) are trained with FL to support various features in Gboard, including Next Word Prediction (NWP), Smart Compose (SC), and On-The- ![0_image_0.png](0_image_0.png) Figure 1: Gboard features supported by language models: NWP for next word, SC for inline suggestion, and OTF for candidates re-ranking. Fly rescoring (OTF). As illustrated in Fig. 1, NWP (Hard et al., 2018) uses an LM to suggest a word, which is triggered after a previous word is committed; SC provides longer inline suggestions to accelerate typing, which can be triggered per character when the confidence is high; OTF is used to re-rank the candidate words generated during typing before a word is committed. Models, metrics and tasks. We train LMs with the same neural network (NN) architecture described in (Hard et al., 2018): a one-layer LSTM/CIFG of 670 hidden neurons, with input and output word-based embeddings of dimension 96. OTF LMs use a larger vocabulary (∼ 30K words) compared to NWP LMs (∼ 10–20K words); the number of parameters for models with a 10K/20K/30K vocabulary is 2.4M/4.4M/6.4M, respectively. SC is a downstream task that reuses NWP LMs without any retraining from data. We train NWP LMs and OTF LMs from populations of devices categorized by language and location. For example, en-US NWP denotes the task of training NWP model on data generated by devices using English in the United States. Federated Averaging (FedAvg) (McMahan et al., 2017) and variants (Wang et al., 2021) are popular FL training algorithms in practice. 
In each communication *round*, the server will orchestrate a small subset of client devices for training and ∗Equal contribution, alphabetical order. Correspondence to {xuzheng,zhangyx}@google.com. 629 ![1_image_0.png](1_image_0.png) DP and DP-FTRL. Differential privacy (DP) can be combined with FL to provide a formal guarantee that the trained model will not memorize specific users' data, which provides stronger privacy protection by executing data anonymization principles (Bonawitz et al., 2021; Wang et al., 2021). Ramaswamy et al. (2020) applied DPFedAvg (McMahan et al., 2018; Geyer et al., 2017), a variant of DP-SGD (Abadi et al., 2016) for user/client-level DP, to train production LMs in FL. Ramaswamy et al. (2020) demonstrated anonymization via empirical auditing techniques by Carlini et al. (2019) but did not provide a formal DP guarantee. Achieving a strong formal DP guarantee for DP-FedAvg would require privacy amplification-by-sampling, which necessitates sampling clients uniformly at random on each round. However, a cross-device FL system has limited control over client sampling as devices have to satisfy local criteria such as being charging and connected to an unmetered network to be eligible for participation (Bonawitz et al., 2019; Balle et al., 2020). In contrast, we deploy a recent algorithm, DP-FTRL (Kairouz et al., 2021b), allowing us to achieve strong privacy and utility for production models without uniform sampling assumptions. Figure 2: System overview of federated learning of Gboard language models with differential privacy and secure aggregation. aggregate the resulting model deltas to update the global model. In a successful round, the system guarantees the number of clients participating in training is at least as large as the configured report goal (Bonawitz et al., 2019). A model is typically tested and deployed after training for several thousands of rounds. Top-1 in-vocab accuracy is used to track the utility during training and additional metrics for A/B testing are introduced in Sec. 3. Contributions. We discuss our strategy and experience of training Gboard LMs with FL and DP. We introduce an algorithm that enables adaptive clipping (Andrew et al., 2021) in DP-FTRL (Kairouz et al., 2021b) (Sec. 2.1), which can reliably estimate the clip norm to reduce hyperparameter tuning. We discuss the impact of scaling up computation and limiting client participation (Sec. 2.2), and identify the algorithm and system configurations for the regime of strong privacy and utility. We also successfully apply pre-training (Sec. 2.3) to improve privacy and utility, which is (to the best of our knowledge) the first time pretraining is applied to training a DP model directly from users' data. We combine DP-FTRL with secure aggregation (SecAgg) to further strengthen the data minimization properties of our approach (Sec. 2.4). Fig. 2 provides a system overview of the techniques for training Gboard language models with federated learning and differential privacy. Finally, we summarize concrete suggestions for practitioners training differentially private models to deploy in production in (Sec. 2.5), and present and analyze twenty Gboard LMs trained with formal DP guarantees (Sec. 3). We are happy to announce that all the next word prediction neural network LMs in Gboard now have DP guarantees, and all future launches of Gboard neural network LMs will require DP guarantees. ## 2 Dp Fl In Practice 2.1 Dp-Ftrl And Adaptive Clipping As described in Alg. 
1, we apply DP-FTRL in FL by modifying the FedAvg algorithm: clip the model update ∆, and add noise when updating the global Algorithm 1 Federated DP-FTRL with adaptive clipping input : report goal m, learning rate for model weights on client ηc and on server ηs, momentum β = 0.9, noise multiplier for model delta z∆, total number of rounds T, restart rounds R = {128 + 1024*i, i* = 0, 1*, . . .*}, quantile based norm estimation C 0, target quantile γ = 0.5 , learning rate for norm ηγ = 0.2 , noise stddev for clip estimation σb = m/20 Initialize model θ 0, momentum buffer ∆¯ 0 = 0, clip norm Cθ = C 0 Initialize tree Tθ with z∆, Cθ, and Tb with σb for each round t = 0, 1, 2*, . . . , T* do Qt ← (at least m users for this round) for each user i ∈ Qt**in parallel do** (∆t i , bt i ) ← ClientUpdate(i, θt, ηc, Cθ, Ct) //Update model weights with noise addition ∆˜ t = 1 m PrivateSum Tθ,Pi∈Qk ∆k i , k ∈ [0, t] ∆¯ t = β∆¯ t−1 + ∆˜ t, θ t+1 ← θ 0 + ηs∆¯ t //Estimate quantile-based norm ˜b t = 1 m PrivateSum Tb,Pi∈Qk b k i , k ∈ [0, t] ![2_image_0.png](2_image_0.png) model. Two additional hyperparameters are introduced for DP: the *clip norm* C, which bounds the norm of ∆, and the *noise multiplier* z, which determines the standard deviation zC for the added Gaussian noise. We discuss clip norm in this section and defer the discussion of noise multiplier and other privacy related hyperparameters to Sec. 2.2. Andrew et al. (2021) introduced an adaptive clipping method that automatically adjusts the clip norm each round by privately estimating the norm of the model delta at a targeted quantile. However, adaptive clipping cannot be directly applied to DP-FTRL as the tree-based noise addition in DPFTRL assumes a fixed clip norm across rounds. We integrate adaptive clipping in DP-FTRL through //Restart and adjust clip norm **if**$t\in\mathcal{R}$**then** $\mathbf{C_{\theta}}\leftarrow\mathbf{C^{t+1}}$ Restart tree $\mathcal{T}_{\theta}$ and $\mathcal{T}_{\mathbf{b}}$ with updated $C_{\theta}$ function ClientUpdate(i, θ0, η, Cθ, C) θ ← θ0 G ← (user i's local data split into batches) for batch g ∈ G do θ ← θ − η∇`(θ; g) ∆ ← θ − θ0 b ← I||∆||≤C ∆0 ← ∆ · min 1, Cθ ||∆||//Clipping return (∆0, b) restarts, where the quantile estimate C tis continually tracked but only becomes an active clip norm Cθ upon tree restarting. As both the aggregated model delta ∆˜ tand the quantile ˜b t use tree-based noise, we can directly use the privacy accounting in (Kairouz et al., 2021b) by applying the noise transformation in Thm. 1 in App. A. In practice, Alg. 1 slightly inflates the noise for the model from zC to z∆C and requires *restarts* that complicate the privacy accounting for DPFTRL. Moreover, we find that a fixed clip norm can achieve comparable or slightly better model utility, and is more robust in experiments with large report goal. For example, adaptive clipping for the de-DE NWP model experiences catastrophic failure and makes no progress in the first 1000 rounds. Nevertheless, adaptive clipping can reduce hyperparameter tuning for many tasks when privacy budget allows. Fig. 3 shows the evaluation accuracy and corresponding clip norm for DP training the enGB NWP model with report goal 6500 and noise multiplier 7. The adaptive clip curve starts from a small initial clip norm to avoid catastrophic failure due to large initial noise and eventually catches up on accuracy. The estimated clip norm (quantile γ = 0.5) stabilizes and we can fix the clip norm to 5 based on the estimated value. 
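A simplified sketch of one round of clipped, noised aggregation with quantile-based clip estimation, in the spirit of Alg. 1. For brevity it uses independent Gaussian noise per round rather than the tree-based correlated noise of DP-FTRL, and omits momentum and the noise (σb) on the clipping indicators; it is illustrative, not the production implementation.

```python
import numpy as np

def one_round(client_deltas, clip_norm, noise_multiplier, target_quantile=0.5, lr_gamma=0.2):
    m = len(client_deltas)
    clipped, unclipped_flags = [], []
    for delta in client_deltas:
        norm = np.linalg.norm(delta)
        unclipped_flags.append(float(norm <= clip_norm))            # b_i in Alg. 1
        clipped.append(delta * min(1.0, clip_norm / (norm + 1e-12)))
    # Noised average of clipped client deltas (per-round sensitivity = clip_norm / m).
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / m,
                             size=client_deltas[0].shape)
    avg_delta = np.mean(clipped, axis=0) + noise
    # Geometric quantile update (Andrew et al., 2021): move the clip norm
    # toward the target quantile of client update norms.
    frac_unclipped = np.mean(unclipped_flags)
    new_clip = clip_norm * np.exp(-lr_gamma * (frac_unclipped - target_quantile))
    return avg_delta, new_clip

deltas = [np.random.randn(1000) * s for s in (0.5, 1.0, 2.0, 4.0)]
avg, clip = one_round(deltas, clip_norm=1.0, noise_multiplier=7.0)
print(avg.shape, round(clip, 3))
```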
The clip norm is relatively insensitive, especially when tuning together with the server learning rate. However, clip norm can have a wide tuning range across tasks and models, and quantile-based estimation is still useful for estimating a clip norm to be fixed. ## 2.2 Dp Parameters And System Configuration The privacy guarantees of DP-FTRL (Kairouz et al., 2021b) are affected by several factors: noise multiplier z, number of total rounds T, max participation (MaxP) of a client, and min separation (MinS) of rounds between the participation of the same client. The noise multiplier is a conventional parameter for controlling privacy-utility trade-off: large noise achieves strong privacy guarantees but can potentially hurt the utility. Achieving the same utility with smaller rounds T can significantly improve the privacy guarantees. Next, we discuss the effect of MaxP and MinS, and the privacy-utilitycomputation trade-off for system configuration. Client participation. DP-FTRL achieves strong privacy if each client only participates once during training, or the number of client participation is limited when a client can participate multiple times. Two parameters are introduced to characterize client participation for DP-FTRL: the maximum participations (MaxP) of a client in all training rounds and the minimum round separation (MinS) between any single client's two participations. MaxP and MinS are correlated as MaxP is upper bounded by rounds T divided by MinS. In general, for fixed rounds T, decreasing MaxP and increasing MinS can lead to stronger privacy guarantees without changing utility. In addition, Cho et al. (2023) suggests potential advantage of increasing MinS for utility. When using the worst-case MaxP estimated by rounds T divided by MinS, Fig. 4c shows increasing MinS can achieve stronger privacy measured by smaller zCDP values. However, the maximum MinS is limited by the population size divided by the number of clients per round lower bounded by the report goal. For example, when the report goal is 6500 for small population of around 106, MinS has to be smaller than 153 rounds, so strong privacy guarantees are difficult to achieve when training for 3000 rounds. While we cannot measure the precise population size in the FL system due to client dynamics, we estimate the population size of various Gboard tasks as ranging from 0.8 million to 16.6 million in Tab. 1. Report goal. We study report goal for privacycomputation trade-off based on a hypothesis used in (McMahan et al., 2018; Kairouz et al., 2021b; Xu et al., 2022): for sufficiently large data, the utility is approximately non-decreasing if the noise multiplier and clients per round (lower bounded by report goal) proportionally increase. We provide empirical justification to this hypothesis by comparing the evaluation accuracy of two training runs: one with a report goal of 500 and noise multiplier of 0.54, versus another of report goal 6500 and noise multiplier 7. On more than three Gboard language tasks, we observed that the final utility remains similar, or slightly better for larger report goals. Moreover, using a larger report goal speeds up learning at the beginning of training. Based on the hypothesis, we plot Figs. 4a and 4b by linearly increasing report goal and noise multiplier, and assuming the MinS is set to the maximum possible value (population divided by report goal) for strong privacy. 
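The client-participation arithmetic above can be sketched in a few lines: the maximum possible min-separation is the population divided by the report goal, and the worst-case max-participation follows from the total rounds divided by that separation. The numbers below reproduce the examples in this section and in Tab. 1.

```python
import math

def participation_bounds(population, report_goal, total_rounds):
    max_min_separation = population // report_goal                       # in rounds
    worst_case_max_participation = math.ceil(total_rounds / max_min_separation)
    return max_min_separation, worst_case_max_participation

# Small population of ~1e6 with report goal 6500 and 3000 rounds (Sec. 2.2).
print(participation_bounds(1_000_000, 6500, 3000))   # (153, 20)
# es-ES with estimated population 4.21M trained for 1900 rounds (Tab. 1).
print(participation_bounds(4_210_000, 6500, 1900))   # (647, 3)
```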
Though a large report goal can limit the MinS, it generally leads to stronger privacy guarantees for reasonable population size and total rounds. System configuration. According to Figs. 4a and 4b, we choose a large report goal of 6500 supported by the large scale FL systems and aim for maximum MinS for DP-FTRL. To control MinS in practice, a timer is introduced on clients in the FL system so that a client will only become eligible to participate in training (again) after a certain period of time has passed. McMahan and Thakurta (2022) used a timer period of 24 hours to train the es-ES NWP model, which led to an observed MinS of 313. The MinS of es-ES is upper bounded by 4.21M/6500 ∼ 647 and can be potentially improved by increasing the timer period. We increase the timer period in the unit of 24 hours due to the uneven diurnal participation pattern (Yang et al., 2018; Zhu et al., 2022), and generally observe that MinS can proportionally increase with the timer period before reaching the possible maximum. However, there are many factors in the FL system that may affect the wall clock training speed, which makes it challenging to optimize the timer period to maximize MinS. ## 2.3 Public Pretraining We explore pretraining on public data for production models, which were shown to substantially improve model utility in DP simulations (Li et al., 2021; De et al., 2022; Xu et al., 2022; Wang et al., ![4_image_0.png](4_image_0.png) 2023). We pretrain a model for each Gboard language task using the multi-lingual C4 dataset (Raffel et al., 2019; Xue et al., 2020) collected from public web pages. Fig. 3a shows that pretraining can reduce ∼ 1000 rounds to reach a given utility threshold under the same noise multiplier, which can significantly improve the privacy guarantees as shown in Fig. 4. We additionally observe that: (1) it is challenging to fine-tune from a pretrained model when the word embeddings are shared for input and output to reduce the parameter size of LMs for on-device deployment; (2) the accuracy may decrease in the first a few rounds of fine-tuning; (3) pretraining helps with diminishing marginal returns: at some point further pretraining does not necessarily improve the final performance. Therefore, we use models with separate input and output embeddings and pretrain with half of the C4 dataset for Gboard LMs. ## 2.4 Combining With Secure Aggregation Secure aggregation (SecAgg) (Bonawitz et al., 2017) ensures that the central server can only access the aggregated update from a large set of clients, preventing inspection of individual client updates. We combine SecAgg and DP-FTRL to provide strong data minimization and anonymization protection (Bonawitz et al., 2021). To avoid the suboptimal privacy cost from the `2 norm increase of the discretized vector in SecAgg, we follow the protocol of (Kairouz et al., 2021a) for discretizing, flattening, and modularly clipping1 the client model updates—this introduces minimal norm inflation later accounted in DP-FTRL. The large report goal requirement for strong DP 1In our current implementation, there is a very small chance that modular operator in SecAgg will inflate the sensitivity. The problem will be fixed by an additional element-wise clipping of the flattened vector. guarantees is challenging for SecAgg in practice, which requires a slightly different system configuration. 
The SecAgg training speeds we observe are still notably slower, and we leave for future work potential improvements such as compression for communication efficiency (Chen et al., 2022), new DP methods to reduce report goal (ChoquetteChoo et al., 2022), and embedding compression to reduce round time (Shu and Nakayama, 2017). ## 2.5 Recommended Strategies And Practices We summarize our strategy for training Gboard LMs with DP. (1) Pre-train the model on public datasets if possible. (2) Choose the maximum noise multiplier that meets the utility target based on small report goal simulation experiments on public datasets that is similar to the production task. (3) Based on the target number of rounds and estimated population, linearly increase the report goal and noise multiplier to meet the privacy target, and choose a large report goal supported by the system. If the privacy target is unachievable, fix the report goal to maximum, and increase the noise multiplier to target on a model with suboptimal utility. (4) Estimate the possible maximum MinS based on chosen report goal and estimated population, and configure the timer period to approach the MinS; use previous experience of model training speed if applicable. (5) If the hyperparameters (e.g., learning rates) are known from previous experiments or simulation on public datasets, apply DP-FTRL with adaptive clipping (Alg. 1) without manual tuning to try meet the privacy and utility goals. Note that Alg. 1 needs to account the noise inflation and restart for privacy guarantees. (6) If Alg. 1 fails or stronger privacy and utility are desirable, we can run a few small report goal experiments with Alg. 1 that tune quantile γ and server learning rate ηs, select the best learning rate, and fix the clip norm based on the estimation; and run DP-FTRL with large report goals. (7) SecAgg can be used for all experiments, and precise MaxP and MinS are computed by post-processing for privacy accounting. ## 3 Deploying Dp Lms A/B test metrics. We introduce metrics in A/B test to measure the utility of Gboard LMs. (1) *Picked* Rate (PRate): the ratio of picked candidates among the NWP predictions; or SC predictions when it is triggered. (2) *Accuracy (Acc)*: the ratio of candidates matching the final committed words among the NWP model predictions. (3) *Trigger Rate*: the ratio of words with SC triggered among all committed words, which is an important metric when PRate is fixed. (4) *Word Modified Ratio (WMR)*: the ratio of words being modified during typing or after committed; improvement is shown by reduction. (5) *Word Per Minute (WPM)*: the number of committed words per minute. Privacy guarantees. Same as (McMahan and Thakurta, 2022), the zero-one device neighboring relationship ((Kairouz et al., 2021b, definition 1.1)) is adopted for DP. For user's with a single device, device-level DP corresponds directly to user-level DP. Our privacy guarantee holds for all well-behaved clients during training, and we do not account for privacy cost of modest amount of hyperparameter tuning. DP is measured by the zero-Concentrated DP (zCDP) (Bun and Steinke, 2016) guarantee that has been used by US census bureau (US Census Bureau, 2021), and can be easily converted to (, δ)-DP. We use the privacy accounting in (Kairouz et al., 2021b, appendix D) implemented in Tensorflow Privacy (TFP Authors, 2022), and follow the guidelines outlined in (Ponomareva et al., 2023, Sec. 5.3) to report detailed narratives of privacy guarantees in App. C. Experimental setup. 
We use the implementation in App. B, and apply the strategy in Sec. 2.5 to train Gboard LMs with DP. We present NWP results in Tab. 1, and OTF results in Tab. 2. As Smart Compose (SC) reuses NWP LMs, SC has the same DP guarantees as NWP models by the postprocessing property (Dwork et al., 2014). Following es-ES NWP model in (McMahan and Thakurta, 2022), we choose noise multiplier 7 and report goal 6500 based on simulation in (Kairouz et al., 2021b) on public StackOverflow dataset (TFF Authors, 2022b). We pretrain the models on public datasets and configure the timer period to control client participation, separately for different tasks. We use DP-FTRL with adaptive clipping and small report goal 500 to tune server learning rate and estimate the clip norm. Interestingly, we observe the learning rate and clip norm to be consistent for various Gboard LMs, and tuning seems to be unnecessary. DP-FTRL with fixed clip and large report goal is used to run the final model for deployment. Result analysis. All NWP and OTF models in Tabs. 1 and 2 are trained with stronger guarantees (smaller zCDP) compared to zCDP > 2.6 used by US Census Bureau (US Census Bureau, 2021). For five NWP models in Europe (DE, GB, FR, IT, PT), the DP NN models significantly improve the utility compared to previous N-gram models. On en-US, pt-BR and en-IN, DP NN models also achieve comparable, or slightly better utility compared to their non-private versions as the strong models. SecAgg is successfully applied to en-US and es-ES, and can achieve good privacy-utility trade-off with a smaller number of rounds, likely due to the system configuration that results in more clients per round. However, SecAgg is also notably slower. There is a general positive correlation between the estimated population size and privacy guarantees. However, only a few tasks approach the possible maximum MinS for strong privacy guarantees, which highlights the challenge of both estimating population and controlling client participation. Longer training rounds are often used for NWP (compared to OTF) as the non-private NN baselines are strong, and to improve the downstream SC performance. As an example, we train es-ES NWP for 1900 rounds with a pretrained model, while the previous models (McMahan and Thakurta, 2022) is trained for 2000 rounds without pretraining. Our es-ES NWP model slightly improves the utility measured by PRate and Acc, and improves the zCDP bound from 0.81 to 0.35 due to the larger MinS by timer configuration. We highlight that our es-ES model at round 1240 already achieves similar NWP utility and a strong privacy guarantee, but the utility of SC keeps improving with training. Compared to the previous model in (McMahan and Thakurta, 2022), our model improves the SC trigger rate by 4.23% at round 1240, and 9.51% at round 1900. ## 4 Concluding Remarks We discuss our experience and summarize our strategy for training production Gboard LMs with FL | NWP | Rounds | Utility | Privacy | Est. Pop. 
(M) | BaseModel | | | |-----------|----------|-----------------|-----------|-----------------|-------------|------|-----------| | PRate(+%) | Acc(+%) | MinS/MaxP/Timer | zCDP | | | | | | de-DE | 930 | 8.28 | 12.49 | 212 / 4/ 48h | 0.48 | 3.24 | | | en-GB | 980 | 3.26 | 7.72 | 226 / 4 / 72h | 0.48 | 2.38 | | | fr-FR | 1280 | 3.78 | 8.50 | 180 / 5 / 72h | 0.89 | 2.79 | N-gram | | it-IT | 1620 | 3.98 | 9.86 | 303 / 5 / 72h | 0.71 | 3.32 | | | pt-PT | 530 | 3.99 | 7.82 | 54 / 8 / 48h | 1.86 | 0.83 | | | es-ES | 1900 | 0.29 | 0.48 | 526 / 3 / 144h | 0.35 | 4.21 | zCDP 0.81 | | es-ES* | 1750 | 0.32 | 0.56 | 349 / 4 / 144h | 0.52 | | | | en-US | 2800 | -0.39 | 0.11 | 371 / 7 / 48h | 1.31 | 13 | No-DP NN | | en-US* | 1360 | -0.30 | 0.15 | 622 / 2 / 144h | 0.25 | | | | pt-BR | 3600 | 0.18 | 0.29 | 909 / 3 / 144h | 0.45 | 16.6 | | | en-IN | 1290 | 0.19 | 0.40 | 170 / 6 / 96h | 1.14 | 7.72 | | | es-MX | 1980 | -0.15 | 0.29 | 343 / 5 / 96h | 0.64 | 9.96 | | | es-AR | 640 | 0.25 | 3.50 | 90 / 5 / 96h | 0.84 | 4.09 | Mix | | OTF | Rounds | Utility | Privacy | | | | |---------|----------|-----------------|-----------|---------------|------|------| | WMR(-%) | WPM(+%) | MinS/MaxP/Timer | zCDP | DP-(δ = 10−10) | | | | de-DE | 1170 | 1.01 | 0.59 | 206 / 5 / 48h | 0.89 | 9.01 | | en-GB | 1220 | 1.99 | 0.38 | 206 / 5 / 72h | 0.89 | 9.01 | | es-ES | 1280 | 1.03 | 0.60 | 197 / 5 / 48h | 0.89 | 9.01 | | fr-FR | 1300 | 1.83 | 0.67 | 290 / 4 / 72h | 0.61 | 7.31 | | it-IT | 1360 | 1.39 | 0.80 | 188 / 5 / 48h | 0.89 | 9.01 | | ru-RU | 870 | 0.72 | 0.34 | 327 / 3 / 48h | 0.32 | 5.13 | | pt-PT | 430 | 1.71 | 0.32 | 54 / 7 / 48h | 0.99 | 9.56 | and DP. We propose an algorithm applying adaptive clipping (Andrew et al., 2021) in DP-FTRL (Kairouz et al., 2021b) to reduce the hyperparamter tuning. We discuss the impact on privacy and utility of several important factors: the clip norm, report goal, client participation, and pre-training. Our study highlights the importance of system and algorithm co-design for differential privacy in practice, the challenges of tuning in FL systems, and opportunities to improve the scalability and stability of FL with DP and/or SecAgg. More than twenty LMs with formal DP guarantees are trained and launched to support Gboard NWP, SC, and OTF features, including en-US and es-ES NWP models additionally with SecAgg. Our experience demonstrates the possibility of training DP models for practical applications when a large scale system is available for large scale data. Therefore, Gboard is introducing and enforcing a new policy: DP has to be applied in all future training and launching of Gboard LMs. ## Acknowledgement The authors would like to thank Stanislav Chiknavaryan, Adria Gascon, Zachary Garrett, and Timon Van Overveldt for infrastructure configuration support; Swaroop Ramaswamy, Om Thakkar, Abhradeep Thakurta for early discussion on models and algorithms; Jeremy Gillula for internal review process; Xu Liu, Shumin Zhai, and Daniel Ramage for leadership support. ## References Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In *Proceedings of the 2016 ACM SIGSAC conference on computer and communications security*, pages 308–318. Galen Andrew, Om Thakkar, H Brendan McMahan, and Swaroop Ramaswamy. 2021. Differentially private learning with adaptive clipping. *Conference on* Neural Information Processing Systems (NeurIPS). 
Borja Balle, Peter Kairouz, Brendan McMahan, Om Thakkar, and Abhradeep Guha Thakurta. 2020. Privacy amplification via random check-ins. *Advances in Neural Information Processing Systems*, 33:4623–4634. Kallista Bonawitz, Peter Kairouz, Brendan McMahan, and Daniel Ramage. 2021. Federated learning and privacy: Building privacy-preserving systems for machine learning and data science on decentralized data. *Queue*, 19(5):87–114. Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konecnˇ y, Stefano Mazzocchi, ` Brendan McMahan, et al. 2019. Towards federated learning at scale: System design. Proceedings of machine learning and systems, 1:374–388. Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. Practical secure aggregation for privacy-preserving machine learning. In proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1175–1191. Mark Bun and Thomas Steinke. 2016. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In *Theory of Cryptography Conference*, pages 635–658. Springer. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In *28th USENIX Security Symposium* (USENIX Security 19), pages 267–284. Wei-Ning Chen, Christopher A Choquette Choo, Peter Kairouz, and Ananda Theertha Suresh. 2022. The fundamental price of secure aggregation in differentially private federated learning. In International Conference on Machine Learning, pages 3056–3089. PMLR. Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, and Tong Zhang. 2023. On the convergence of federated averaging with cyclic client participation. *arXiv preprint arXiv:2302.03109*. Christopher A Choquette-Choo, H Brendan McMahan, Keith Rush, and Abhradeep Thakurta. 2022. Multi-epoch matrix factorization mechanisms for private machine learning. arXiv preprint arXiv:2211.06530. Soham De, Leonard Berrada, Jamie Hayes, Samuel L Smith, and Borja Balle. 2022. Unlocking highaccuracy differentially private image classification through scale. *arXiv preprint arXiv:2204.13650*. DP Team. 2022. Google's differential privacy libraries. https://github.com/google/ differential-privacy. Cynthia Dwork, Aaron Roth, et al. 2014. The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3– 4):211–407. Robin C Geyer, Tassilo Klein, and Moin Nabi. 2017. Differentially private federated learning: A client level perspective. *arXiv preprint arXiv:1712.07557*. Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, and Daniel Ramage. 2018. Federated learning for mobile keyboard prediction. *arXiv preprint arXiv:1811.03604*. Peter Kairouz, Ziyu Liu, and Thomas Steinke. 2021a. The distributed discrete gaussian mechanism for federated learning with secure aggregation. In *International Conference on Machine Learning*, pages 5201–5212. PMLR. Peter Kairouz, Brendan Mcmahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, and Zheng Xu. 2021b. Practical and private (deep) learning without sampling or shuffling. In International Conference on Machine Learning (ICML), pages 5213–5225. Peter Kairouz, H. 
Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kaylee Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. 2019. Advances and open problems in federated learning. *CoRR*, abs/1912.04977. Xuechen Li, Florian Tramer, Percy Liang, and Tatsunori Hashimoto. 2021. Large language models can be strong differentially private learners. arXiv preprint arXiv:2110.05679. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In *AISTATS*, pages 1273– 1282. PMLR. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2018. Learning differentially private recurrent language models. In *International Conference on Learning Representations (ICLR)*. Brendan McMahan and Abhradeep Thakurta. 2022. Federated learning with formal differential privacy guarantees. Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H. Brendan McMahan, Sergei Vassilvitskii, Steve Chien, and Abhradeep Thakurta. 2023. How to dp-fy ml: A practical guide to machine learning with differential privacy. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*. Swaroop Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H Brendan McMahan, and Françoise Beaufays. 2020. Training production language models without memorizing user data. *arXiv preprint* arXiv:2009.10031. Raphael Shu and Hideki Nakayama. 2017. Compressing word embeddings via deep compositional code learning. *arXiv preprint arXiv:1711.01068*. TFF Authors. 2022a. TensorFlow Federated. https: //github.com/tensorflow/federated. TFF Authors. 2022b. TensorFlow Federated StackOverflow dataset. https://www.tensorflow.org/ federated/api_docs/python/tff/simulation/ datasets/stackoverflow. TFP Authors. 2022. TensorFlow Privacy. https:// github.com/tensorflow/privacy. US Census Bureau. 2021. Disclosure avoidance for the 2020 census: An introduction. Boxin Wang, Yibo Jacky Zhang, Yuan Cao, Bo Li, H Brendan McMahan, Sewoong Oh, Zheng Xu, and Manzil Zaheer. 2023. Can public large language models help private cross-device federated learning? arXiv preprint arXiv:2305.12132. Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, et al. 2021. A field guide to federated optimization. *arXiv:2107.06917*. Zheng Xu, Maxwell Collins, Yuxiao Wang, Liviu Panait, Sewoong Oh, Sean Augenstein, Ting Liu, Florian Schroff, and H Brendan McMahan. 2022. Learning to generate image embeddings with user-level differential privacy. 
*arXiv preprint* arXiv:2211.10844.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. *arXiv preprint arXiv:2010.11934*.

Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, and Françoise Beaufays. 2018. Applied federated learning: Improving google keyboard query suggestions. *arXiv preprint arXiv:1812.02903*.

Chen Zhu, Zheng Xu, Mingqing Chen, Jakub Konečný, Andrew Hard, and Tom Goldstein. 2022. Diurnal or nocturnal? federated learning of multi-branch networks from periodically shifting distributions. In *International Conference on Learning Representations*.

## A Privacy Accounting For Adaptive Clipping

Theorem 1 (Privacy Accounting for Adaptive Clipping (Andrew et al., 2021)). *One step of DP-FTRL with adaptive clipping using noise standard deviation $\sigma_b$ on the clipped counts $\sum_i b_i^t$ and noise multiplier $z_\Delta$ on the vector sums $\sum_i \Delta_i^t$ is equivalent to one step of non-adaptive DP-FTRL with noise multiplier $z$ if we set $z_\Delta = \left(z^{-2} - (2\sigma_b)^{-2}\right)^{-1/2}$.* (A small numeric helper applying this calibration is sketched after Appendix C.)

## B Implementation

We use the open source implementation of DP-FTRL in Tensorflow Privacy (TFP Authors, 2022) integrated with Tensorflow Federated (TFF Authors, 2022a) as a DP aggregator for federated learning. Conceptually, DP-FTRL adds noise to the summation of updates across rounds, i.e., *PrivateSum* in Alg. 1. Instead of tracking the noise and summation separately, PrivateSum is implemented to only track the noised updates $\tilde{\theta}^{t-1}$ by adding the residual of noise between round t and round t − 1. This design makes it easy to integrate with various optimizer choices, for example, momentum, which is important for utility; it also allows ephemeral access to model deltas without directly storing unnoised states.

## C Reporting Privacy Guarantees

This section clarifies the nuances of the reported DP guarantees, following the guidelines outlined in (Ponomareva et al., 2023, Sec. 5.3).

1. **DP setting**. This is a central DP guarantee where the service provider is trusted to correctly implement the mechanism.
2. **Instantiating the DP Definition**
   (a) *Data accesses covered*: The DP guarantee applies to all well-behaved clients2 in a single training run. We do not account for hyperparameter tuning in our guarantees. Public multilingual C4 data (Raffel et al., 2019; Xue et al., 2020) is used for pre-training.
   (b) *Final mechanism output*: Only the final model checkpoint is released for production launches; however, the mechanism's output is technically the full sequence of privatized gradients, so the guarantee also applies at this level, and hence all intermediate models are protected (including those sent to devices participating in federated learning).
   (c) *Unit of privacy*: Device-level DP is considered, i.e., the notion of adjacency is with respect to arbitrary training datasets on each client device, and the device might have an arbitrarily large local dataset containing arbitrary training examples. For users with a single device, this corresponds directly to user-level DP; for devices shared by multiple users, this provides a stronger notion of DP than user-level; for a user with multiple devices that happen to both participate in training the model, the notion is weaker, but group privacy can be used to obtain a user-level guarantee.
   (d) *Adjacency definition for "neighbouring" datasets*: We use the zero-out definition (Kairouz et al., 2021b).
This is a special form of the add-or-remove definition, where neighboring data sets differ by the addition/removal of a single client. In the absence of a client at any training step, we assume that the client's model update gets replaced with the all-zeros vector. This assumption enforces a subtle modification to the traditional definition of the add/remove notion of DP which allows neighboring data sets to have the same number of records.
3. **Privacy accounting details**
   (a) *Type of accounting used*: Both $\rho$-zCDP (Bun and Steinke, 2016) accounting and PLD accounting (DP Team, 2022) for $(\epsilon, \delta)$-DP are used.
   (b) *Accounting assumptions*: Each client only participates a limited number of times during training, and there are at least a min-separation number of rounds between two consecutive participations of a client, i.e., MaxP and MinS as discussed in Sec. 2.2. Client participation is enforced by a timer on clients in the cross-device FL system.
   (c) *The formal DP statement*: The launched Gboard LMs have $\rho$-zCDP in the range (0.2, 2). We also transform zCDP to $(\epsilon, \delta)$-DP by PLD accounting (DP Team, 2022): given $\delta = 10^{-10}$, the smallest zCDP $\rho = 0.25$ corresponds to DP $\epsilon = 4.49$; the largest zCDP $\rho = 1.86$ corresponds to DP $\epsilon = 13.69$. (A small conversion sketch is given after this appendix.)
   (d) *Transparency and verifiability*: We open sourced our core implementation code in TensorFlow Federated and Tensorflow Privacy. Key portions of the cross-device FL system are also open sourced.

2 Clients that faithfully follow the algorithm, including participation limits. Due to the design of the algorithm, a mis-behaved client does not adversely affect the DP guarantee of any well-behaved clients.
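The calibration in Theorem 1 (Appendix A) is straightforward to apply numerically. Below is a minimal sketch; the function name and the example values of $z$ and $\sigma_b$ are illustrative and not taken from the paper.

```python
import math

def adaptive_clipping_noise_multiplier(z: float, sigma_b: float) -> float:
    """Noise multiplier z_Delta for the vector sums under adaptive clipping
    (Theorem 1): z_Delta = (z**-2 - (2*sigma_b)**-2) ** -0.5.

    Requires 2*sigma_b > z; otherwise the noise spent on the clipped counts
    exceeds the total budget and no valid z_Delta exists.
    """
    inner = z ** -2 - (2.0 * sigma_b) ** -2
    if inner <= 0:
        raise ValueError("sigma_b is too small for the requested overall z")
    return inner ** -0.5

# Illustrative values: with z = 7.0 and sigma_b = 10.0 the vector-sum noise
# multiplier comes out slightly larger than z (about 7.47).
print(round(adaptive_clipping_noise_multiplier(7.0, 10.0), 3))
```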
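For the formal statement in item 3(c), the paper converts zCDP to $(\epsilon, \delta)$-DP with PLD accounting. As a rough cross-check only, the standard analytic bound of Bun and Steinke (2016), $\epsilon \le \rho + 2\sqrt{\rho \ln(1/\delta)}$, can be evaluated in a few lines; it is looser than the PLD numbers reported above, so it serves as an upper-bound sanity check rather than a replacement for the accounting actually used.

```python
import math

def zcdp_to_eps_upper_bound(rho: float, delta: float) -> float:
    """Analytic bound: a rho-zCDP mechanism satisfies (eps, delta)-DP with
    eps = rho + 2*sqrt(rho * log(1/delta)) (Bun & Steinke, 2016).
    PLD accounting, as used in the paper, yields tighter values."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

delta = 1e-10
for rho in (0.25, 1.86):
    print(f"rho={rho}: eps <= {zcdp_to_eps_upper_bound(rho, delta):.2f}")
# Prints roughly 5.05 and 14.95, consistent with (i.e., looser than) the
# PLD-accounted eps = 4.49 and 13.69 reported in Appendix C.
```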
ghosh-etal-2023-radling
{R}ad{L}ing: Towards Efficient Radiology Report Understanding
https://aclanthology.org/2023.acl-industry.61
Most natural language tasks in the radiology domain use language models pre-trained on biomedical corpora. There are few pretrained language models trained specifically for radiology, and fewer still that have been trained in a low-data setting and gone on to produce comparable results in fine-tuning tasks. We present RadLing, a continuously pretrained language model using the ELECTRA-small architecture, trained on over 500K radiology reports, that can compete with state-of-the-art results on fine-tuning tasks in the radiology domain. Our main contribution in this paper is knowledge-aware masking, a taxonomic knowledge-assisted pre-training task that dynamically masks tokens to inject knowledge during pretraining. In addition, we also introduce a knowledge base-aided vocabulary extension to adapt the general tokenization vocabulary to the radiology domain.
# Radling: Towards Efficient Radiology Report Understanding Rikhiya Ghosh1, Sanjeev Kumar Karn1, Manuela Daniela Danu2**, Larisa Micu**2, Ramya Vunikili1 **And Oladimeji Farri**1 1Digital Technology and Innovation, Siemens Healthineers, Princeton 2Corporate Technology, Siemens AG, Romania {rikhiya.ghosh,sanjeev.kumar_karn, oladimeji.farri}@siemens-healthineers.com {manuela.danu,larisa.micu}@siemens.com rdv253@nyu.edu ## Abstract Most natural language tasks in the radiology domain use language models pre-trained on biomedical corpus. There are few pretrained language models trained specifically for radiology, and fewer still that have been trained in a low data setting and gone on to produce comparable results in fine-tuning tasks. We present RadLing, a continuously pretrained language model using ELECTRA-small (Clark et al., 2020) architecture, trained using over 500K radiology reports, that can compete with state-of-the-art results for fine tuning tasks in radiology domain. Our main contribution in this paper is knowledge-aware masking which is a taxonomic knowledge-assisted pretraining task that dynamically masks tokens to inject knowledge during pretraining. In addition, we also introduce an knowledge base-aided vocabulary extension to adapt the general tokenization vocabulary to radiology domain. ## 1 Introduction Radiology reports are radiologist interpretations of medical images such as X-Rays, CT, Ultrasound and MRI scans. Healthcare professionals rely on these reports to monitor and diagnose patients. A radiology report typically includes several sections (Kahn Jr et al., 2009), among which the most important ones are the following: 1. CLINICAL SECTION. This section describes afflictions of the patient that prompted the study, past diseases and symptoms. 2. COMPARISON. This refers to previous imaging studies of the patient with which the radiologist is comparing the current image. 3. FINDINGS. This section includes qualitative and quantitative descriptions of abnormalities if present, along with the radiologist's diagnosis or differential diagnosis regarding the observations. 4. IMPRESSIONS. This section summarises the FINDINGS section. The radiologist notes major abnormalities and their recommendations. 5. MISCELLANEOUS. This consists of other information like patient information, imaging modality. The post-BERT era (Devlin et al., 2018) of contextualized pretrained language models (PLMs) has drastically reduced the need for expensive and hard-to-find human annotated data for biomedical NLP. Biomedical PLMs are usually trained on biomedical publications (Gu et al., 2020; Lee et al., 2020; Gururangan et al., 2020; Peng et al., 2019; Alsentzer et al., 2019; Lin et al., 2021; Yuan et al., 2022; Luo et al., 2022) and have facilitated drug discovery and healthcare informatics. Despite sharing the same concepts, challenges remain in adapting these models to radiology reports. This is because the contents, context and structure of biomedical publications are significantly different from those of the radiology reports. Radiology reports are terse, and concept-dense. It is shown that models pretrained on radiology reports improve performance in downstream clinical NLP tasks as opposed to models pretrained on biomedical publications (Yan et al., 2022; Smit et al., 2020; Dai et al., 2021). While this is encouraging, we identified some research challenges as well as encountered issues in adapting these PLMs to industrial setting: Research challenges. 
Random masking in masked language modeling (MLM) is known to have context understanding issues(Song et al., 2020). This is exemplified in tasks like relationship extraction (Jain et al., 2021) in radiology reports. Figure 1 shows an example sentence from CheXpert (Irvin et al., 2019). The entities 'Pneumonia' and 'lungs' are related concepts, but current state-of-the-art PLM-based methods fail to identify their relation in the sentence. ![1_image_0.png](1_image_0.png) Industry adaptation issues. Industry adaptation issues emanate from data and model sizes. Training datasets available for industry are generally small. Use of public datasets like MIMIC-IV (Johnson et al., 2020) is not feasible for industry research due to license restrictions. Large PLMs like PubMedBERT (109 million parameters) and best performing RadBERT variant(125 million) have latency issues in industrial deployment, with low throughput when deployed in low memory settings. Reduction in parameters by quantisation or distillation reduces efficiency which is less than ideal. We believe that context understanding problem can be avoided with the help of domain knowledge. In radiology domain, an excellent source of domain knowledge is the taxonomical knowledge base RadLex (Langlotz, 2006), which curates radiology lexicons. For the sentence in Figure 1, current PLMs misclassify the relationship. However, a look at RadLex for the entity 'Pneumonia' reveals its *Anatomical_site* property to be 'Lungs'. The RadLex property *Anatomical_site* and the RadGraph relation 'Located_at' are analogous. Infusing this knowledge into PLMs adds more context and domain knowledge and thus increases prediction capabilities. In this paper, we introduce RadLing, the first radiology language model based on ELECTRA-small architecture with 13.7 million parameters trained with the help of RadLex. Our major contributions in this paper are: (a) Domain-specific vocabulary extension where we have modified existing tokenization methods with help of RadLex, (b) Knowledge base-aided continuous pretraining objective that abets better context understanding, and (c) Smaller high performance radiology language model. ## 2 Related Work Radiology Report Understanding and Radiology summarization are the most-addressed NLP tasks that are related to the field of radiology. Radiology summarization involves generating a summary of a radiology report, which can help clinicians quickly understand the key findings without having to read the entire report (Karn et al., 2022). Radiology Report Understanding involves extracting information from a radiology report, such as the type of imaging study, problem list generation, etc. This information can then be used to assist with diagnosis and treatment planning. Radiology report understanding using MLMs. Yan et al. (2022) has used MLM with several pretrained PLMs like BERT, RoBERTa(Liu et al., 2019) and Clinical BERT (Huang et al., 2019) to train on 4.4 million radiology reports. These models show high performance in NER, RE, QA, abnormal classification and summarisation. In contrast, we aim to train with knowledge graph pretraining objectives that might help learn with much smaller dataset with 545k reports, and achieve comparable results for NER, RE and Abnormals classification. Other biomedical pretraining objectives. EntityBERT (Lin et al., 2021) used entity masking pretraining objective to infuse domain knowledge. 
This pretraining objective can mask context in radiology reports, and hence we improved on this by using RadLex. Section segmentation is another radiology-specific pretraining objective proposed in BiRadsBERT (Kuling et al., 2021) which we have adapted to our work RadLing-SS. We plan to use fewer sections in keeping with our training dataset structure and test the architecture for our training data. Biomedical Knowledge Graph infusion. KeBioLM (Yuan et al., 2021) applies text-only encoding layer to aggregate entity representation. They use MLM, entity detection and entity linking as pretraining objectives and surpassed state-of-theart in biomedical NER and RE. BioKGLM(Fei et al., 2021) has compiled large biomedical knowledge graph as well as introduced a separate posttraining procedure between pretraining and finetuning where they experimented several strategies using knowledge embedding algorithms to inject domain knowledge. In contrast, our work RadLing ![2_image_0.png](2_image_0.png) attempts to use existing radiology domain-specific knowledge graph RadLex and pretraining objectives that are better suited to the structure of training data. ## 3 Methods Our methods consists of preprocessing, tokenization and continuous pretraining. In addition, we have outlined the downstream tasks we have used to evaluate RadLing models. An overview of our approach is presented in Figure 2. ## 3.1 Dataset We have 545K reports in our training dataset, with over 9.7M sentences and 87M words. The reports have been collected from various medical institutions in the United States and India. We have a high volume of reports for radiograph images taken for head and chest regions (see Appendix Figure 3). We have a high volume of CT and Xray modalities compared to MRI or Ultrasound (see Appendix Figure 5). We preprocessed the dataset using standard techniques, which we describe in more detail in the Appendix Section A.1. ## 3.2 Tokenization We extend existing vocabulary for ELECTRA so that meaningful tokens in radiology domain are added. We have modified AVocaDo (Hong et al., 2021) tokenisation to include only domain-specific words in the new vocabulary. We perform Wordpiece tokenization on our corpus to form the set of new tokens T*corpus*. The vocabulary of BERT contains set of tokens T*BERT* . We find the tokens that belong to Tcorpus − T*BERT* and call them T*candidate*. We query RadLex to check whether the tokens in T*candidate* represent any concept in RadLex. In case the word is present in RadLex, we append the token to the new set of tokens Tnew. Since the reports are anonymized, the names, dates and several other patient identifiers are replaced by fixed special tokens, such as '[date]', '[person]', '[location]', '[time]' and '[removed]'. We also include these sets of tokens Ts into our vocabulary. Our new vocabulary TRadLing = TBERT +Tnew +Ts. The addition of significant number of tokens to the existing vocabulary has been seen to introduce catastrophic forgetting or overfitting of model to the new tokens (Hong et al., 2021). We adopt the regularization method in AVocaDo during continuous pretraining to prevent these issues. Total loss during training is a combination of the loss of the pretraining objective and regularization term Lreg (See Appendix Section A.2). ## 3.3 Continuous Pretraining Objectives We train RadLing using ELECTRA-small (Clark et al., 2020). ELECTRA uses generator and discriminator networks to perform 'replaced token detection' (RTD) task. 
RTD is a pretraining task where the model learns to differentiate between real input tokens and plausible but artificially created replacements. The generator is a small MLM model that replaces some corrupted (masked) input tokens by sampling tokens from its vocabulary distribution. The discriminator (ELECTRA) is pretrained to predict whether each token is a replaced token or original token. The advantage of RTD is better language understanding with relatively small amount of pretraining data. In our work, we have experimented with three pretraining or self-supervised objectives: Masked language modeling, Section Segmentation, and Knowledge-aware masking. Masked Language Modeling (RadLing-MLM): In keeping with the original paper (Devlin et al., 2018), for every input sequence, we have randomly masked 15% of the tokens in the text. Section Segmentation (RadLing-SS): Section segmentation pretraining task follows (Kuling et al., 2021) and uses the discriminator to classify sentences belonging to one of the five report sections: CLINICAL SECTION, FINDINGS, IMPRESSIONS, COMPARISON and MISCELLANEOUS1. Knowledge-aware masking (RadLing-KG): Knowledge-aware masking utilizes RadLex to intelligently mask (Figure 4) so that context information is preserved. RadLex-KG uses four steps to achieve this: (a) Named entity extraction and entity linking, (b) Categorization of the entities, (c) Entity masking according to their categories, and (d) Regularization during training. From the 15 sub-classes of entities that make up RadLex, we have used the classes "anatomical entity", "clinical finding", "procedure", "imaging observation" and "RadLex descriptors" in this study. Each term in this class can optionally have 63 properties, among which we have only used Anatomical_site. Named entity extraction and entity linking: Most of the contextual information of a radiology report is contained in three sections: CLINICAL SECTION, FINDINGS and IMPRESSIONS. We detect spans of the named entities using SciSpacy (Neumann et al., 2019) in these sections of the report. Then, using the NCBO annotator tool2, we normalize the entities so that we can query RadLex and retrieve information about the ancestors and properties of these entities. Categorization of the entities: We classify the entities into three types: Symptom, **Anatomy** and Observation. Observations refer to different clinical findings in the patient. The normalized entities are classified to these categories using the following heuristics3: 1. Those that RadLex recognizes as "symptom" are classified as Symptom. 2. Those that RadLex recognizes as 'anatomical entity', 'anatomical descriptors' and 'anatomically-related descriptor' are classified as Anatomy.We also include 'location descriptor' class into this category, although they are merely positional qualifiers and don't represent anatomy. 3. Those that map to the following RadLex sections are classified as Observation: 'clinical finding', 'procedure', 'imaging observation', 'size descriptor', 'normality descriptor', 'turbidity descriptor', 'stage of healing descriptor' and 'composition descriptor'. Entity masking. We mask entity tokens based on their entity category. We randomly choose one of the following masking options for each sequence: 1. Mask entity tokens identified as Anatomy in the CLINICAL SECTION and FINDINGS of the report. Mask entity tokens identified as Symptom in FINDINGS and IMPRESSIONS. 2. Mask entity tokens identified as Observation in the FINDINGS of the report. 
Mask entity tokens identified as Symptom in FINDINGS and IMPRESSIONS. 3. Mask entity tokens identified as Anatomy in the CLINICAL SECTION and IMPRESSIONS of the report. Mask entity tokens identified as Symptom in FINDINGS and IMPRESSIONS. 4. Mask entity tokens identified as Observation in the IMPRESSIONS of the report. Mask entity tokens identified as Symptom in FINDINGS and IMPRESSIONS. Each anatomy in a radiology report is associated with a corresponding observation usually. These masking options ensure that context is not masked altogether in a sequence. In addition, if multiple words comprise an entity, we have randomly masked either all or a subset of the tokens comprising the entity. We only consider the masking option that will lead to masking at least 15 % of the total number of tokens. If no masking option meets that criteria, we randomly select one of the four options and mask the corresponding entities. The rest of the 15% quota is filled up by tokens corresponding to non-entities. Regularisation: In addition to the discriminator loss in ELECTRA, we introduce a novel regularisation loss where we decrease the penalty for generating a token that belongs to an Observation that pertains to the same Anatomy. This is done by checking the property *Anatomical_site* of both the linked entity and generated entity or verifying if they belong to the same subclass in 'body-systemspecific disorder' in RadLex. The regularisation loss is denoted by LKG, and is explained in the Appendix Section A.3.2. ## 4 Experimental Results 4.1 Finetuning Tasks We evaluated our pretrained models on four finetuning tasks: (a) Named Entity Recognition (NER): extracting radiology-specific anatomies and observations. (b) Relationship Extraction (RE): extracting relationships between anatomies and observations. | Model | NER-M NER-C RE-M RE-C Class RadQA | | | | | |------------------|-------------------------------------|------|------|------------|------------| | RadBERT | 0.91 | 0.90 | 0.97 | 0.93 | 0.99 69.09 | | PubMedBERT 0.86 | 0.89 | 0.78 | 0.69 | 0.98 60.08 | | | RadLing-MLM 0.89 | 0.89 | 0.98 | 0.94 | 0.98 60.96 | | | RadLing-SS | 0.89 | 0.88 | 0.96 | 0.94 | 0.97 60.23 | | RadLing-KG | 0.92 | 0.92 | 0.98 | 0.94 | 0.99 62.55 | Table 1: Downstream task Results: For Named Entity Recognition (NER), Relationship Extraction (RE) and Abnormal classification (Class) tasks, macro F1 is reported. For Radiology Question Answering (RadQA), F1 score is reported. NER-M=NERMIMIC, NER-C=NER-CheXpert, RE-M=RE-MIMIC, RE-C=RE-CheXpert. (c) Abnormal classification: classifying reports into normal and abnormal based on the presence or absence of pathologies, and (d) Radiology Question Answering: providing answers to questions based on radiology reports. We have used RadGraph dataset (Jain et al., 2021) for NER and RE tasks, RadQA dataset (Soni et al., 2022) for Radiology Question Answering, and (Demner-Fushman et al., 2016) for abnormal classification. The datasets for these tasks are explained in details in Appendix Section A.5. In this section, we present the results from all three RadLing models, which were each fine-tuned on downstream tasks after being pretrained on three different self-supervised objectives (see Section 3.3). NER. RadLing-MLM and RadLing-SS do comparatively well on RadGraph test dataset with reference to the state-of-the-art model RadBERT, with F1 scores of 0.89. RadLing-KG however does better than all the other models with F1 score of 0.92. 
Similar results are seen for CheXpert test dataset, and RadLing-KG performs better than other models with F1 of 0.92. The breakdown of F1 scores in table 2 show that RadLing-KG outperforms the other variants in all the classes, and has significant improvement in the underrepresented class *Observation:Uncertain*. Relation Extraction. Both RadLing-MLM and RadLing-KG outperform the state of the art in relationship extraction task, with F1 scores 0.98 and 0.94 on MIMIC and CheXpert test datasets, respectively. Meanwhile, RadLing-SS falls a little short with F1 of 0.96 for MIMIC test dataset. The dataset imbalance does not affect the performance of these models for the underrepresented class [3]. Radiologist benchmark macro F1s for this task are 0.91 and 0.704 for MIMIC-CXR and CheXpert respectively, and our models have surpassed it. Abnormal Classification. RadLing-KG matches the high performance of RadBERT in this task, with a macro F1 of 0.99, accuracy 99.3% and AUC of 0.995. Question Answering. For RadQA dataset, RadLing-KG performs the best among the RadLing model variants at F1 of 62.55 and Exact Match of 49.78, but this is lower than the state of the art (69.09 for RadBERT). ## 4.2 Discussion RadLing is one of the first models that has been trained on a radiology report dataset, and the first to use ELECTRA. RadLing-KG is the first radiology PLM to use RadLex in its training. For this reason, there are not many benchmarks we can compare our results with. Knowledge-aware masking has led to better results in almost all downstream tasks compared to RadBERT. However, RadLing-MLM has comparable performance, and we attribute the robustness to ELECTRA architecture. The choice of ELECTRA in our experiments is influenced by its unique architecture that has been shown to yield high performance with low data (Clark et al., 2020), making it perfect for industry setting. RadLingKG improves the performance in underrepresented classes for both NER and RE significantly, where all other models perform poorly. To provide an example, uncertain observations comprise only 4.7% of the NER training data. Now, for a sentence "mild basilar atelectasis without definite focal consolidation.", "focal consolidation" is an uncertain observation. Models other than RadLing-KG are not able to capture the whole text as an uncertain observation. Similarly, for relationship extraction task, 'suggestive of' reflects 4.7% of the training data. A sentence like "Findings are suggestive of mild pulmonary edema with basilar atelectasis" has two 'suggestive of' relations : 1. 'Findings' and 'edema', 2. 'Findings' and 'atelectasis' However, models other than RadLing-KG finetuned for relationship extraction are unable to detect both of these, especially the latter relationship. We surmise that knowledge infusion is a key factor in these stellar results. ELECTRA has been shown to have a lack of uniformity and alignment where two closely related sentences may have more different representations (Meng et al., 2021). We hypothesize that this might be one of the main factors contributing to low F1 scores for RadQA. However, we also note that RadLing-KG attempts to counteract this effect and improves on both Exact match and F1. ## 5 Conclusion In this work we have explored a cost-effective method to train a high performing radiology PLM, RadLing with a small dataset. RadLing models took 2 days to train on Tesla V100 SXM2 machines with 8 GPUs and 16 GB memory per GPU, which using larger models like ELECTRA-large required 5 days. 
We developed a knowledge-aware masking strategy to use RadLex to infuse context into radiology PLMs to train RadLing-KG. This led us to the following observations. First, ELECTRA architecture, without any special pretraining objectives, is able to produce good results with most of the downstream tasks. In a task like relationship extraction, it even outperforms Radgraph radiologist benchmarks. Second, RadLing-KG is the best performing RadLing variant, and outperforms the downstream task benchmarks in all the tasks except QA. Third, Domain specific vocabulary is helpful in better performance of models. In addition, in tasks that use cross attention like vision-language tasks or explainable AI, having unfragmented radiology tokens is helpful. For example, BERT fragments biomedical terms like 'Thalamus' into 'Tha', '\#\#lam', '\#\#us', thereby losing the domainspecific meaning, whereas in our work, due to domain specific tokenization, the word 'Thalamus' is retained. Fourth, Infusion of RadLex information counteracts ELECTRA limitations in QA dataset. Finally, we have tested the model on proprietary NER datasets and RadLing-KG has yielded 0.920.93 macro F1 on less represented anatomies like neck, while for the highly represented anatomies, F1 is as high as 0.98. This actually shows the potential of using RadLex in radiology pretraining. Thus, in a real world setting with high imbalances in datasets, RadLing-KG is more robust. In future we would like to explore more ways to infuse knowledge by (a) Using text description of context like (Yuan et al., 2021), (b) Retrieving context from biomedical knowledge graphs like SNOMED 4and UMLS, and (c) more robust knowledge embedding methods. We would like to experiment with larger datasets and models, and work with more downstream radiology applications. | MIMIC | CheXpert | | | | | | | | |-------------|------------|--------|-------|--------|------|--------|-------|--------| | Model | ANAT | OBS:DP | OBS:U | OBS:DA | ANAT | OBS:DP | OBS:U | OBS:DA | | RadLing-MLM | 0.97 | 0.94 | 0.75 | 0.98 | 0.97 | 0.98 | 0.72 | 0.95 | | RadLing-SS | 0.97 | 0.94 | 0.72 | 0.98 | 0.97 | 0.98 | 0.68 | 0.95 | | RadLing-KG | 0.98 | 0.95 | 0.8 | 0.98 | 0.98 | 0.98 | 0.81 | 0.96 | Table 2: Downstream Task Results: Named Entity Recognition (NER) on RadGraph (Jain et al., 2021). Macro F1 scores reported on two test datasets: MIMIC and CheXpert. ANAT refers to Anatomy, OBS refers to Observation, DP: Definitely Present, U: Uncertain, DA: Definitely Absent. | MIMIC | CheXpert | | | | | | |-------------|------------|------------|---------------|--------|------------|---------------| | Model | Modify | Located At | Suggestive Of | Modify | Located At | Suggestive Of | | RadLing-MLM | 0.99 | 0.98 | 0.96 | 0.96 | 0.93 | 0.92 | | RadLing-SS | 0.98 | 0.97 | 0.94 | 0.95 | 0.94 | 0.9 | | RadLing-KG | 0.99 | 0.98 | 0.97 | 0.96 | 0.95 | 0.92 | Table 3: Downstream Task Results: Relation Extraction on RadGraph macro F1 scores. Macro F1 scores reported on two test datasets: MIMIC and CheXpert for 3 relation types: Modify, Located At and Suggestive Of. ## Limitations There are a few limitations pertaining to the training data we used. Some of them are listed below. 1. RadLing has been trained on English reports only, and therefore will not work out of the box in a multilingual setting. 2. There is data imbalance with respect to imaging modalities and anatomies covered by our training data. 
For example, regions like extremities, neck, spine and shoulder are underrepresented in the dataset, and expected understanding of observations related to those regions may be limited. 3. There needs to be a study on the diversity of the patients and radiologist expertise represented in the data, and how it impacts the performance of the model for underrepresented communities. 4. Different radiologists (and radiology departments) have different preferences and styles of writing reports. In addition, clinical referrals sometimes dictate to what extent some details are documented the report e.g. the Clinical statement. There was no study on the consistency, uncertainty or information richness of the report. Asides from the training data, there may be space and time throughputs of the model which could make them unsuitable for at-the-edge applications with limited bandwidth. ## Ethics Statement The research performed in this paper adheres to the Association for Computing Machinery (ACM) Code of Ethical and Professional Conduct 5adopted by the Association for Computational Linguistics (ACL). Radiology reports have been collected from various medical institutions, anonymized and access-controlled to protect PHI information by our dedicated data handling team according to the US Health Insurance Portability Act (HIPAA). To prevent any harm caused due to errors in our model-generated outputs, our models are meant to be deployed in a human-in-theloop setting where the key information extracted by our models are reviewed by radiologists and physicians. ## Disclaimer The concepts and information presented in this paper/presentation are based on research results that are not commercially available. Future commercial availability cannot be guaranteed. ## Acknowledgements We extend our sincere gratitude to members in our team for their support to us. We would like to thank The Siemens Big Data team for procuring and guiding us with data handling and anonymization protocols. We would like to thank the various institutions that have provided us with data, including Mt. Sinai and Zwanger-Pesiri Radiology ## References Emily Alsentzer, John R Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. *arXiv preprint arXiv:1904.03323*. Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555. Zhihao Dai, Zhong Li, and Lianghao Han. 2021. Bonebert: A bert-based automated information extraction system of radiology reports for bone fracture detection and diagnosis. In *Advances in Intelligent* Data Analysis XIX: 19th International Symposium on Intelligent Data Analysis, IDA 2021, Porto, Portugal, April 26–28, 2021, Proceedings 19, pages 263–274. Springer. Dina Demner-Fushman, Marc D Kohli, Marc B Rosenman, Sonya E Shooshan, Laritza Rodriguez, Sameer Antani, George R Thoma, and Clement J McDonald. 2016. Preparing a collection of radiology examinations for distribution and retrieval. *Journal* of the American Medical Informatics Association, 23(2):304–310. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Hao Fei, Yafeng Ren, Yue Zhang, Donghong Ji, and Xiaohui Liang. 2021. Enriching contextualized language model from knowledge graph for biomedical information extraction. 
*Briefings in bioinformatics*, 22(3):bbaa110. Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, and Sewon Min. 2019. Question answering is a format; when is it useful? *arXiv preprint* arXiv:1909.11291. Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain-specific language model pretraining for biomedical natural language processing. Suchin Gururangan, Ana Marasovic, Swabha ´ Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. *arXiv* preprint arXiv:2004.10964. Jimin Hong, Taehee Kim, Hyesu Lim, and Jaegul Choo. 2021. Avocado: Strategy for adapting vocabulary to downstream domain. *arXiv preprint* arXiv:2110.13434. Matthew Honnibal, Ines Montani, Sofie Van Landeghem, Adriane Boyd, et al. 2020. spacy: Industrialstrength natural language processing in python. Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. *arXiv preprint* arXiv:1904.05342. Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. 2019. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 590–597. Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P Lungren, Andrew Y Ng, et al. 2021. Radgraph: Extracting clinical entities and relations from radiology reports. arXiv preprint arXiv:2106.14463. Alistair Johnson, Lucas Bulgarelli, Tom Pollard, Steven Horng, Leo Anthony Celi, and Roger Mark. 2020. Mimic-iv. PhysioNet. Available online at: https://physionet. org/content/mimiciv/1.0/(accessed August 23, 2021). Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. *Scientific data*, 3(1):1–9. Charles E Kahn Jr, Curtis P Langlotz, Elizabeth S Burnside, John A Carrino, David S Channin, David M Hovsepian, and Daniel L Rubin. 2009. Toward best practices in radiology reporting. *Radiology*, 252(3):852–856. Sanjeev Kumar Karn, Ning Liu, Hinrich Schuetze, and Oladimeji Farri. 2022. Differentiable multi-agent actor-critic for multi-step radiology report summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1542–1553, Dublin, Ireland. Association for Computational Linguistics. Grey Kuling, Dr Curpen, Anne L Martel, et al. 2021. Birads bert & using section tokenization to understand radiology reports. *arXiv preprint arXiv:2110.07552*. Curtis P Langlotz. 2006. Radlex: a new method for indexing online educational materials. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Gongbo Liang, Halemane Ganesh, Dylan Steffe, Liangliang Liu, Nathan Jacobs, and Jie Zhang. 2022. Development of cnn models for the enteral feeding tube positioning assessment on a small scale data set. BMC Medical Imaging, 22(1):1–9. 
Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2021. Entitybert: Entity-centric masking strategy for model pretraining for the clinical domain. Association for Computational Linguistics (ACL). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. 2022. Biogpt: generative pre-trained transformer for biomedical text generation and mining. *Briefings* in Bioinformatics, 23(6). Yu Meng, Chenyan Xiong, Payal Bajaj, Paul Bennett, Jiawei Han, Xia Song, et al. 2021. Coco-lm: Correcting and contrasting text sequences for language model pretraining. *Advances in Neural Information* Processing Systems, 34:23102–23114. George Michalopoulos, Michal Malyska, Nicola Sahar, Alexander Wong, and Helen Chen. 2022. ICDBigBird: A contextual embedding model for ICD code classification. In *Proceedings of the 21st Workshop* on Biomedical Language Processing, pages 330–336, Dublin, Ireland. Association for Computational Linguistics. Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: fast and robust models for biomedical natural language processing. arXiv preprint arXiv:1902.07669. Yifan Peng, Shankai Yan, and Zhiyong Lu. 2019. Transfer learning in biomedical natural language processing: an evaluation of bert and elmo on ten benchmarking datasets. *arXiv preprint arXiv:1906.05474*. Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pareek, Andrew Y Ng, and Matthew P Lungren. 2020. Chexbert: combining automatic labelers and expert annotations for accurate radiology report labeling using bert. *arXiv preprint arXiv:2004.09167*. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2020. Mpnet: Masked and permuted pretraining for language understanding. Advances in Neural Information Processing Systems, 33:16857– 16867. Sarvesh Soni, Meghana Gudala, Atieh Pajouhi, and Kirk Roberts. 2022. Radqa: A question answering dataset to improve comprehension of radiology reports. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6250–6259. An Yan, Julian McAuley, Xing Lu, Jiang Du, Eric Y Chang, Amilcare Gentili, and Chun-Nan Hsu. 2022. Radbert: Adapting transformer-based language models to radiology. *Radiology: Artificial Intelligence*, 4(4):e210258. Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, and Sheng Yu. 2022. Biobart: pretraining and evaluation of a biomedical generative language model. *arXiv preprint arXiv:2204.03905*. Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, and Fei Huang. 2021. Improving biomedical pretrained language models with knowledge. In *Proceedings of the 20th Workshop on Biomedical Language Processing*, pages 180–190, Online. Association for Computational Linguistics. ## A Appendix A.1 Preprocessing Preprocessing of the anonymized radiology reports corpus consists of the following tasks: (a) Regex-based cleaning and normalization: Some of the reports have been converted to text using optical character recognition (OCR). This led to common OCR errors like misspelled character substitutions and insertion of spurious characters. We used manual identification of common errors, followed by Regex-based substitutions for these errors. 
(b) Section identification: We follow a Regexbased method to split the text in a radiology report into the five sections. (c) Section-based chunking: BERT-like transformers can use maximum 512 tokens as sequence length (Michalopoulos et al., 2022). We made sure that an entire section of a report is in one section, and divided the report into chunks to fit this restriction. ## A.2 Avocado Tokenization In AVocaDo, a contrastive learning framework is employed, and the regularization term Lreg is calculated using the cosine similarity between the sentence encoding outputs from the general PLM and the PLM with a domain-adapted tokenizer as below: $${\mathcal{L}}_{r e g}(\mathbf{h}_{{\mathcal{A}}}^{(l)},\mathbf{h}_{{\mathcal{P}}}^{(l)})=$$ $$\frac{1}{B}\log\sum_{i=1}^{B}\frac{\exp^{(}\,s i m({\bf h}_{{\mathcal A},i}^{(l)},{\bf h}_{{\mathcal P},i}^{(l)})/\tau}{\sum_{j=1}^{B}\exp^{(}\,s i m({\bf h}_{{\mathcal A},i}^{(l)},{\bf h}_{{\mathcal P},j}^{(l)})/\tau},$$ $$\mathrm{(1)}$$ where h (l) A,i and h (l) P,i are l-th layer outputs from the general (P) and adapted (A) PLM encoders for each sentence xiin a batch of sentences x of size B. sim(·) refers to cosine similarity between the encodings, and τ is the softmax temperature. ![9_image_0.png](9_image_0.png) ## A.3.1 Masked Language Modeling. This objective has the separate losses for the generator and discriminator of ELECTRA , denoted by L MLM ( X, θ G ) and L Disc ( X, θ D ) respectively. They are accompanied by the regularisation term from 1 , and calculated as follows: where p G ; is the probability of generating a particular token x i given the masked token x masked D ( · ) is a sigmoid output of the discriminator that predicts whether the token is "real", λ A is a regularisation parameter, which is set to 1. ## A.3.2 Knowledge-Aware Masking The regularisation loss for knowledge-aware masking is denoted by L KG , and calculated as follows: ![10_image_1.png](10_image_1.png) ![10_image_0.png](10_image_0.png) $${\mathcal{L}}_{K G}(x,\theta_{D})=$$ $$\mathbb{E}\Big{(}\sum_{t=1}^{n}-\mathbb{I}\left(\mathcal{P}_{\mathcal{A}}(x_{t}^{corrupt})\in\mathcal{P}_{\mathcal{A}}(x_{t})\right)\tag{3}$$ $$\log D(x^{corrupt},t)$$ $$-\mathbb{I}\left(\mathcal{P}_{\mathcal{A}}(x_{t}^{corrupt})\notin\mathcal{P}_{\mathcal{A}}(x_{t})\right)$$ $$\log(1-D(x^{corrupt},t))\Big{)},$$ where PA(·) stands for the anatomical site property of the observation. The final loss for the discriminator is calculated as $${\mathcal{L}}_{d i s c}^{K G}={\mathcal{L}}_{D i s c}+\lambda_{K G}{\mathcal{L}}_{K G}$$ disc = L*Disc* + λKGLKG (4) where λKG is the knowledge graph regularisation parameter, and is set to 1 for our experiments. ## A.4 Experimental Setup We have trained RadLing using Tesla V100 SXM2 machines, with 8 GPUs and 16 GB memory per GPU. The base model is ELECTRA-small which has 14M parameters, 12 layers and 256 hidden size. RadLing-MLM was trained in 230 steps, using learning rate 3e-5, AdamW optimizer and polynomial decay schedule with warmup. RadLing-SS needed 220K steps while RadLing-KG was trained in 235K steps. The best performance for NER was achieved after 6 training epochs with learning rate 4e-5, and AdamW optimizer with RadLing-MLM, after 13 steps with RadLing-SS and 9 steps with RadLingKG. Relationship extraction model was finetuned on RadLing-MLM for 8 epochs with 5e-5 learning rate and AdamW optimizer; 14 epochs with RadLing-SS and 9 with RadLing-KG. 
Abnormal classification took 19 steps with RadLing-MLM, 10 steps with both RadLing-SS and RadLing-KG with 2e-4 learning rate, dropout 0.2 and AdamW optimizer with 1e-7 epsilon. RadQA finetuning took 8 epochs with 2e-4 learning rate, maximum sequence length 384, document stride 128, maximum query length 128. All of these models have been trained with early stopping and patience of 3 epochs, and best model selected based on validation loss. For comparison of the results we have used RadBERTRoBERTa-4m for RadBERT, and the finetuning follows that of RadLing. PubMedBERT results are collected for NER and RE from (Jain et al., 2021). Finetunining for RadQA and Abnormal classification follows finetuning for RadLing and RadBERT. ## A.5 Fine-Tuning Task Details The different finetuning tasks in this paper are described in this section. Named Entity Recognition. The named entity recognition task that we chose to finetune our model on focuses on extracting the anatomy and observations information from the unstructured radiology reports. RadGraph (Jain et al., 2021) dataset, annotated by board-certified radiologists, is a collection of annotated training data from MIMICCXR dataset and test data from both MIMIC-CXR and CheXpert(Irvin et al., 2019) dataset. The anatomies in this schema consists of two concepts: Anatomy and Observation. Anatomy refers to body parts referenced in the radiology reports. Observations refer to the visual findings noted by the radiologist in the medical images (e.g., nodules or opacities), and pathophysiological processes (diseases) that the radiologist has mentioned (e.g., pneumonia). Observation is further divided into three uncertainty levels: definitely present, uncertain and definitely absent. Training and validation datasets are created by annotating 500 radiology reports from MIMIC-CXR datasets, while test dataset comprises 50 annotated radiology reports from MIMIC-CXR and CheXpert collections. There are 12,388 and 2191 entities in the training and validation datasets respectively. In the test set, there are 1293 and 1473 entities in MIMIC-CXR and CheXpert respectively. Overall, we have an imbalanced dataset with less than 5% uncertain observations. The evaluation criteria is class-level and aggregate macro F1 and micro F1. Radiologist benchmark macro F1s for this task are 0.981 and 0.894 for MIMIC-CXR and CheXpert respectively. Relationship extraction. RadGraph dataset also contains relationships between the annotated entities. Relationship extraction refers to the classification of the entity–entity relations given the context of the report. The schema contains three possible relations: 'Suggestive Of', 'Located At', and 'Modify'. Suggestive Of is a relation between two observations that indicate that the first observation indicates the likelihood or leads to the second one. For example, Streaky densities at the lung base might suggest pneumonia' means the observation 'streaky densities' at the lung base indicates that the patient might have pneumonia. Located At is a relation between an observation and anatomy. In the above example, 'streaky densities' are 'Located At' the anatomy 'lung' base. Modify is a relation between two anatomies or two observations where the first entity is a qualitative or quantitative descriptor of the second entity. There are 9251 relations in the training dataset, 1638 in the validation dataset, 902 in MIMIC-CXR test dataset and 1107 in CheXpert dataset. This dataset is imbalanced with < 6% of relations being 'Suggestive of'. 
The evaluation metrics for this dataset are macro F1 and micro F1, both on an aggregate level and for each relationship class. Radiologist benchmark macro F1s for this task are 0.91 and 0.704 for MIMIC-CXR and CheXpert respectively. Abnormal classification. This dataset has been collected from 2 large hospital systems within the Indiana Network for Patient Care database. It contains narrative chest x-ray reports for posterior–anterior (PA) chest x-ray examinations. The dataset(Demner-Fushman et al., 2016) contains 3996 de-identified reports, manually and independently annotated by two coders trained in medical informatics. Acute or chronic disease findings, implanted medical devices, or surgical instruments are classified as not normal in this dataset. This dataset also coded in and normalised the abnormalities present in the reports. There are 2564 abnormal reports and the rest are normal. The evaluation metrics for this dataset are macro- and micro-F1 on test dataset. Radiology Question Answering. This downstream task is an extractive Question Answering application in radiology domain. We use RadQA dataset (Soni et al., 2022) made from 1009 reports sourced from MIMIC-III- (Johnson et al., 2016) database, by sampling 100 patients with 1–36 radiology reports. Question creation for this dataset follows a novel approach of basing questions only on clinical referrals of physicians that prompted the radiography being done instead of the whole report. This focuses on the questions that the physicians are the most interested in, as well as the answers to which would most likely be contained in the report. The answer annotations are done by annotators with the full radiology report, with them needing to annotate from Findings and Impression sections. Annotations have been carried out using haystack by expert annotators. There are 6148 questions in the dataset, among which 1745 are unanswerable. The rest of the answers are extractive and have median and average lengths of 7 and 16.2 respectively. The evaluation of this dataset uses standard Machine Reading Comprehension (MRC) (Gardner et al., 2019) metrics, i.e., stricter metric Exact match (EM) and F1 where word level match is calculated between reference and predicted answers.
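The vocabulary-extension step of Section 3.2 (building $T_{RadLing} = T_{BERT} + T_{new} + T_s$) can be sketched as below. The RadLex lookup `is_radlex_concept` is a stand-in for whatever concept check is used in practice (e.g., a query against RadLex via the NCBO annotator); it is not a real API, and the toy usage values are purely illustrative.

```python
from typing import Callable, Iterable, List, Set

# Fixed special tokens for anonymized identifiers (Section 3.2).
SPECIAL_TOKENS = ["[date]", "[person]", "[location]", "[time]", "[removed]"]

def extend_vocabulary(
    corpus_tokens: Iterable[str],
    bert_vocab: Set[str],
    is_radlex_concept: Callable[[str], bool],
) -> List[str]:
    """Builds T_RadLing = T_BERT + T_new + T_s.

    `corpus_tokens` are WordPiece tokens of the radiology corpus;
    `is_radlex_concept` is a placeholder for a RadLex concept lookup.
    """
    candidates = {t for t in corpus_tokens if t not in bert_vocab}   # T_candidate
    t_new = sorted(t for t in candidates if is_radlex_concept(t))    # T_new
    return sorted(bert_vocab) + t_new + SPECIAL_TOKENS               # T_RadLing

# Illustrative usage with a toy lookup:
toy_radlex = {"thalamus", "atelectasis", "consolidation"}
vocab = extend_vocabulary(
    corpus_tokens=["the", "thalamus", "shows", "atelectasis", "xyz"],
    bert_vocab={"the", "shows"},
    is_radlex_concept=lambda t: t.lower() in toy_radlex,
)
```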
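The per-sequence choice among the four masking options of Section 3.3 (RadLing-KG) can be sketched as follows. The entity representation (section, category, token indices) is an assumption about the preprocessing output, and the random sub-token subsetting of multi-word entities described in the paper is omitted for brevity.

```python
import random
from typing import Dict, List

# The four masking options from Section 3.3: each maps a report section to the
# entity categories masked there; Symptom entities in FINDINGS and IMPRESSIONS
# are masked under every option.
MASKING_OPTIONS = [
    {"CLINICAL": {"Anatomy"}, "FINDINGS": {"Anatomy", "Symptom"}, "IMPRESSIONS": {"Symptom"}},
    {"FINDINGS": {"Observation", "Symptom"}, "IMPRESSIONS": {"Symptom"}},
    {"CLINICAL": {"Anatomy"}, "FINDINGS": {"Symptom"}, "IMPRESSIONS": {"Anatomy", "Symptom"}},
    {"FINDINGS": {"Symptom"}, "IMPRESSIONS": {"Observation", "Symptom"}},
]

def choose_masked_positions(entities: List[Dict], num_tokens: int,
                            mask_ratio: float = 0.15) -> List[int]:
    """entities: [{'section': str, 'category': str, 'token_ids': [int]}, ...]
    (an assumed representation of RadLex-linked entities)."""
    def positions(option):
        pos = []
        for e in entities:
            if e["category"] in option.get(e["section"], set()):
                pos.extend(e["token_ids"])
        return sorted(set(pos))

    target = int(mask_ratio * num_tokens)
    # Only consider options that reach the 15% quota; otherwise fall back to a
    # random option and top up with non-entity tokens, as in the paper.
    viable = [opt for opt in MASKING_OPTIONS if len(positions(opt)) >= target]
    chosen = positions(random.choice(viable or MASKING_OPTIONS))
    if len(chosen) < target:
        chosen_set = set(chosen)
        pool = [i for i in range(num_tokens) if i not in chosen_set]
        chosen += random.sample(pool, min(target - len(chosen), len(pool)))
    return sorted(chosen)
```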
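The AVocaDo-style regularizer of Eq. (1) in Appendix A.2 is a contrastive term between sentence encodings from the adapted and general encoders. A minimal PyTorch sketch is below; the exact sign and normalization of the paper's Eq. (1) may differ slightly, so treat this as the usual InfoNCE form of that term rather than a faithful reimplementation.

```python
import torch
import torch.nn.functional as F

def avocado_regularizer(h_adapted: torch.Tensor,
                        h_general: torch.Tensor,
                        tau: float = 0.05) -> torch.Tensor:
    """Contrastive regularizer in the spirit of Eq. (1), Appendix A.2.

    h_adapted, h_general: (B, d) l-th-layer sentence encodings from the
    adapted (A) and general (P) encoders for the same batch of sentences.
    (h_A_i, h_P_i) is the positive pair; other h_P_j in the batch act as
    negatives.
    """
    a = F.normalize(h_adapted, dim=-1)
    p = F.normalize(h_general, dim=-1)
    logits = a @ p.t() / tau                 # (B, B) cosine similarities / tau
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)  # -1/B * sum_i log softmax_ii
```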
manderscheid-lee-2023-predicting
Predicting Customer Satisfaction with Soft Labels for Ordinal Classification
https://aclanthology.org/2023.acl-industry.62
In a typical call center, only up to 8% of callers leave a Customer Satisfaction (CSAT) survey response at the end of the call, and these tend to be customers with strongly positive or negative experiences. To manage this data sparsity and response bias, we outline a predictive CSAT deep learning algorithm that infers CSAT on the 1-5 scale on inbound calls to the call center with minimal latency. The key metric to maximize is the precision for CSAT = 1 (lowest CSAT). We maximize this metric in two ways. First, reframing the problem as a binary-class, rather than five-class, problem during model fine-tuning, and then mapping binary outcomes back to five classes using temperature-scaled model probabilities. Second, using soft labels to represent the classes. The result is a production model able to support key customer workflows with high accuracy over millions of calls a month.
## Predicting Customer Satisfaction With Soft Labels For Ordinal Classification Etienne Manderscheid, Matthias Lee Dialpad Canada Inc. 1100 Melville St \#400 Vancouver, BC, Canada, V6E 4A6 {etienne,matthias.lee}@dialpad.com ## Abstract In a typical call center, only up to 8% of callers leave a Customer Satisfaction (CSAT) survey response at the end of the call, and these tend to be customers with strongly positive or negative experiences. To manage this data sparsity and response bias, we outline a predictive CSAT deep learning algorithm that infers CSAT on the 1-5 scale on inbound calls to the call center with minimal latency. The key metric to maximize is the precision for CSAT = 1 (lowest CSAT). We maximize this metric in two ways. First, reframing the problem as a binary class, rather than five-class problem during model fine-tuning, and then mapping binary outcomes back to five classes using temperature-scaled model probabilities. Second, using soft labels to represent the classes. The result is a production model that supports key customer workflows with high accuracy over millions of calls a month. ## 1 Introduction 1.1 Motivation Call centers have been using CSAT surveys to measure Customer Satisfaction for decades. Like most CSAT surveys, those our company provides are delivered either on the line at the end of a call or in response to an SMS message. In the survey, the customers are asked to rate their customer service experience on a 1-5 scale, with 1 being very dissatisfied and 5 being very satisfied, respectively. However, we found that for a typical call center, only up to 8% of callers leave a CSAT survey response1. Since only a small fraction of customers leave a survey response, managers and coaches of traditional call centers are missing important information. Specifically: 1. The mean CSAT score suffers from response bias, as customers with a strongly positive or 1for the 691 call centers in our dataset with at least 50 CSAT survey responses, the 10th and 90th percentile of survey response rates were 0.3% and 8%, respectively. negative experience are far more likely to take the time to respond (Table 1). 2. When the customer has a sub-optimal experience but does not leave any feedback, the call center may not be able to proactively take necessary actions in a timely manner to improve customer experience. To address these issues, we have developed and present here an algorithm that infers CSAT scores on call center calls with high accuracy and low latency. At the moment of writing, our predictive CSAT feature is fully deployed at scale and has rated over 50 million calls. ## 1.2 Intended Uses Of Predictive Csat At our company, predictive CSAT is used for coaching purposes, to maintain and improve overall customer experience and to create new opportunities for analytics. In "Coaching Hub" we provide coaches with material for both recognition of agents and improvement in the form of two lists, with calls rated with predicted CSAT scores of 5 and 1 respectively. Therefore, the precision of classes 5 and 1 is critical - it's necessary that the calls in these lists are reliably satisfied and dissatisfied, respectively. Since satisfied calls outnumber dissatisfied calls by a wide margin, the precision of class 1 has long been our limiting factor and therefore our primary focus. Maintaining and improving overall customer experience is crucial for our users. 
Users such as call center managers, coaches, and agents use predicted CSAT to proactively identify dissatisfied customers moments after the call ends by reviewing calls with predicted CSAT scores of 1 or optionally 2. This enables users to follow up with customers and potentially save their accounts. Predicting CSAT also creates new opportunities for analytics. We examine which factors are most 652 associated with dissatisfied calls, where the predicted CSAT score is 1 or 2. For example, we might report that calls associated with hold times longer than 10 minutes are associated with a higher percentage of dissatisfied calls in a call center. In this way, we offer data-driven recommendations to improve CSAT for this call center. In interviews with our target users (call center coaches and managers), we learned that a certain amount of error tolerance is acceptable when inferring CSAT scores. Consequently for these 3 applications, it's acceptable if the model predicts a 1 and the customer left a survey response of 2. This motivates our introduction in the System Overview section of precision* and recall* which are metrics with an error tolerance of 1. ## 1.3 Constraints Our solution space is constrained in the following ways: - High precision of class 1: As discussed above in section 1.2, this metric is our primary focus, being central to Coaching Hub's lists of coachable calls and being the minority class relative to class 5. Thus, it is important for our model to have high precision in predicting calls for the lowest CSAT class. - Latency: The predicted CSAT score is included in a call summary that is shown to the user 10 seconds after the call, and availability within 10 seconds at least 99.9% of the time is a hard requirement for all features displayed in the summary. This holds even if the transcript is many thousands of words long. Since we deploy this model on CPU to control cost and availability, this was non-trivial to achieve. The typical, out-of-the-box deep learning solution for solving multiclass classification problems is to allow each distinct label as a possible output of the neural network, and train using a loss function such as cross-entropy loss over the set of all labels. As a shorthand, we refer to this approach here as "5-way classification". However, this approach does not lend itself well to meeting constraints 1 and especially 2. The main contribution of this paper is to present a combination of two techniques, which were adapted for this problem to solve both constraints: ![1_image_0.png](1_image_0.png) 1. **Binary classification + fan-out:** We reframe the 5-class prediction problem as a binary classification task during model training and then map temperature-scaled model probabilities back from 2 to 5 classes during inference time ('fan out'). 2. **Soft labels:** We introduce a modified label smoothing approach that achieves superior accuracy for this ordinal classification task. ## 2 Related Work There are few research studies on predicting customer satisfaction (CSAT) scores on contact center conversations using transcripts generated by an Automatic Speech Recognition (ASR) model. In Bockhorst et al., 2017, they developed a system that not only utilizes call transcripts transcribed by an ASR model but also other non-textual data such as call duration, queue, in-queue waiting times, utterance level sentiment scores, and various customer data. Overall, there are 5,501 features in the training dataset. 
The author's model is trained to predict a metric called Representative Satisfaction Index (RSI) which is the average of four different survey scores. In the end, their framework involves two models, namely a rank scoring and an isotonic regression model. In a more recent study, Auguste et al., 2019 used the Net Promoter Score (NPS) to predict customer satisfaction on chat conversations. A promoter score can be defined as a rating that customers give to indicate how likely they are to promote a company. Out of a scale of 0 to 10, customers with ratings of 9 or 10 are considered promoters whilst those with ratings of 0 to 6 are considered detractors. NPS is calculated as the difference between promoters and detractors and companies want this metric to be positive and as high as possible. They compared macro F1 scores across different classification methods and their best method yielded a macro F1 score of 53.8%, which they noted that is a rather limited performance. Other studies that looked at predicting CSAT on contact center conversations proposed using information extracted from raw audio signals such as acoustic, emotions, and prosodic features. (Park and Gates, 2009; Zweig et al., 2006; Vaudable and Devillers, 2012; Devillers et al., 2010) Contact center managers are usually interested in picking out calls with a low CSAT score for either coaching purposes or for identifying opportunities to take meaningful interventions in a timely manner to improve customer experience. Hence, it's important to identify these calls with a relatively high degree of precision. Label smoothing is a regularization technique introduced by Szegedy et al., 2016 that has been successfully used to improve accuracy of the Inception architecture on the ImageNet dataset. In Müller et al., 2019, it is noted that label smoothing has been adopted in training procedures of other state-of-the-art image classification models (Zoph et al., 2017; Real et al., 2018; Huang et al., 2018). In another domain such as speech recognition, Chorowski and Jaitly, 2016 used label smoothing to reduce word error rate on the WSJ dataset. Additionally in machine translation, Vaswani et al., 2017 was able to slightly improve the BLEU score ## 3 System Overview 3.1 Dataset We only used the call transcripts as input to the model. The transcripts are produced by our company's proprietary Automatic Speech Recognition (ASR) models. This simplifies model deployment and helps latency as the model can be run as soon as a transcript is available, without waiting for any additional features, and it was sufficient to obtain high accuracy. We then preprocessed transcripts of contact center conversations to create training, validation and test sets. The labels collected were CSAT survey responses left by customers. Labels were aggregated into a single dataset rather than many separate company-specific datasets. Surveys were either run at the end of the call ("please stay on the line for a brief survey. . . ") or sent to customers as an SMS message. Survey responses have a customer satisfaction (CSAT) rating of 1-5, where 5 is the highest satisfaction. Table 1 shows the distribution of CSAT customer ratings over our dataset. Additionally, we excluded callers that were present in the training set from the validation and test sets2to prevent contamination of these sets 3. ## 3.2 Model Fine-Tuning We used the Big bird4 model hosted on Huggingface5(Wolf et al., 2020) for all experiments. 
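Concretely, this is a standard Hugging Face sequence-classification setup. The minimal sketch below is illustrative only: it assumes the public google/bigbird-roberta-base checkpoint and a placeholder transcript, since the production checkpoint and data pipeline are proprietary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the public BigBird checkpoint stands in for the internal one, which is not released.
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
tokenizer.truncation_side = "left"       # keep the *last* tokens of long transcripts (Section 3.2)
model = AutoModelForSequenceClassification.from_pretrained(
    "google/bigbird-roberta-base", num_labels=2)      # binary head (see Section 3.3)

transcript = "agent: thank you for calling ... caller: ..."   # hypothetical ASR transcript
batch = tokenizer(transcript, truncation=True, max_length=1536, return_tensors="pt")
label = torch.tensor([1])                # toy binary target for this sketch
loss = torch.nn.functional.cross_entropy(model(**batch).logits, label)
loss.backward()                          # one fine-tuning step (optimizer and batching omitted)
```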
We chose Big Bird (Zaheer et al., 2020) as our model architecture because it is a transformer-based model capable of handling long sequences (up to 4096 tokens) with low latency in our production environment. Specifically, its memory requirements scale linearly in the number of tokens rather than quadratically, as in many transformer-based models. If transcripts exceeded 1536 tokens in length, only the last 1536 tokens of the conversation were used and the preceding tokens were discarded; this occurred in 16% of transcripts. This allowed us to keep latency and cost under control at inference time.

We trained all models using cross-entropy loss and a learning rate of 10e-5 with early stopping. The metric we chose to evaluate the checkpoints is motivated by the user experience around CSAT, as detailed in section 1.2. As a result, the precision of class 1 is more important than the precision of other classes, or than recall, and an error tolerance of 1 is acceptable for our intended use cases. Therefore, we introduce the metric "precision*", i.e., the precision with an error tolerance of 1. We also formally define modified versions of true positive and false positive (denoted TP* and FP* respectively), which are necessary for defining precision*:

$$\mathrm{TP}^{*}_{c} = |\mathrm{CSAT}_{s} \in (c \pm 1) \wedge \mathrm{CSAT}_{p} = c| \qquad (1)$$

$$\mathrm{FP}^{*}_{c} = |\mathrm{CSAT}_{s} \notin (c \pm 1) \wedge \mathrm{CSAT}_{p} = c| \qquad (2)$$

$$\mathrm{FN}_{c} = |\mathrm{CSAT}_{s} = c \wedge \mathrm{CSAT}_{p} \neq c| \qquad (3)$$

$$\mathrm{Precision}^{*} = \frac{\mathrm{TP}^{*}}{\mathrm{TP}^{*} + \mathrm{FP}^{*}} \qquad (4)$$

$$\mathrm{Recall}^{*} = \frac{\mathrm{TP}^{*}}{\mathrm{TP}^{*} + \mathrm{FN}} \qquad (5)$$

For the purposes of picking the best model checkpoint for any experiment, we measure the F-beta metric with β = 0.5 on class 1 with a tolerance of 1 and define it as follows:

$$F_{\beta}^{*} = \frac{(1+\beta^{2}) \times \mathrm{Precision}^{*} \times \mathrm{Recall}^{*}}{(\beta^{2} \times \mathrm{Precision}^{*}) + \mathrm{Recall}^{*}} \qquad (6)$$

Using β = 0.5 achieves our goal of weighing both precision and recall while giving precision more importance than recall.

## 3.3 Binary Classification + Fan-Out

First, we mapped the CSAT labels to the 0-1 range. For example, when using hard labels, the CSAT label vector [1, 2, 3, 4, 5] is remapped to [0, 0, 0, 1, 1]. Since the remapped vector contains 2 classes, we can train a binary classifier.

At inference time, we first rescale the fine-tuned model's output logits with temperature scaling. Temperature scaling simply divides the logits by a single parameter that is fitted on a held-out validation set so that the model probabilities are better calibrated (Guo et al., 2017). Typically, for a classification task, the logits from a model are passed through a softmax function to get final class probabilities. Instead, we use the low-CSAT class probability to "fan out", i.e., we map this probability ∈ [0, 1] back into 5 classes using 4 class thresholds. For example, if the class 5 threshold is 0.15, then a model probability (of low CSAT) of 0.01 corresponds to a 5, while a model probability (of low CSAT) of 0.16 maps to a 4. We illustrate this fine-tuning and inference method in Figure 1.

The algorithm we devised to infer the class thresholds is the following. As explained previously, class 1 $F^{*}_{0.5}$ is our primary metric. Thus, we set the class 1 threshold first so as to optimize it. Specifically, we use a loop to search the parameter space of class 1 threshold values and pick the one that optimizes class 1 $F^{*}_{0.5}$ on the validation set. Then we repeat this process with the thresholds that separate classes 5-4, 2-3, and 4-3 (the priority is determined by user workflows) to optimize class 5 $F^{*}_{0.5}$, class 2 $F^{*}_{0.5}$, and then class 4 $F^{*}_{0.5}$.
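A compact sketch of this fan-out mapping and the greedy threshold search is given below. It is an illustration under simplifying assumptions (a fixed search grid, and predictions outside class 1 temporarily treated as class 5 while the other boundaries are unset); the production implementation is not shown here.

```python
import numpy as np

def fan_out(p_low, thresholds):
    # thresholds = [t12, t23, t34, t45], descending; they separate classes 1-2, 2-3, 3-4, 4-5.
    for cls, t in zip([1, 2, 3, 4], thresholds):
        if p_low >= t:
            return cls
    return 5

def f_star(preds, surveys, c, beta=0.5):
    # F*_beta for class c with an error tolerance of 1 (Equations 1-6).
    preds, surveys = np.asarray(preds), np.asarray(surveys)
    tp = np.sum((preds == c) & (np.abs(surveys - c) <= 1))
    fp = np.sum((preds == c) & (np.abs(surveys - c) > 1))
    fn = np.sum((surveys == c) & (preds != c))
    prec = tp / max(tp + fp, 1)
    rec = tp / max(tp + fn, 1)
    return (1 + beta**2) * prec * rec / max(beta**2 * prec + rec, 1e-9)

def search_class1_threshold(p_low_val, surveys_val, grid=np.linspace(0.5, 0.99, 50)):
    # Greedy first step: pick the class-1 boundary that maximizes class 1 F*_0.5 on validation.
    # Simplification: while the other boundaries are unset, non-class-1 predictions default to 5.
    scores = []
    for t in grid:
        preds = [1 if p >= t else 5 for p in p_low_val]
        scores.append(f_star(preds, surveys_val, c=1))
    return float(grid[int(np.argmax(scores))])
```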
Once the 4 thresholds are set, the 5 classes are defined by them. Here is a real example of threshold values: [0.92, 0.45, 0.09, 0.03]; these separate classes 1-2, 2-3, 3-4 and 4-5 respectively.

## 3.4 Soft Labels

The traditional label smoothing equation is

$$y_{k}^{LS} = y_{k}(1-\alpha) + \alpha/K \qquad (7)$$

where K is the number of label classes, $y_k$ is the k-th entry of the one-hot encoded label vector, and α is the hyperparameter that determines the amount of smoothing (Müller et al., 2019).

Our motivation for trying label smoothing is the ordinal nature of CSAT classes. That is, a call with a survey response of '2' is a low CSAT call, but not as strongly as a '1', and more strongly so than a '3'. So when using binary classification as detailed above, it is natural to try label smoothing with a higher level of smoothing for the center classes. We also refer to labels that have been smoothed as "soft" labels. In Table 2 we show the values of α we used for different classes, reserving the weakest α for the outermost classes (1, 5) and the strongest α for the center class (3). Aside from using these multiple values of α, we implemented label smoothing to train the model in the standard way.

| CSAT Class | α    | Soft Labels |
|------------|------|-------------|
| 1          | 0.02 | 0.99        |
| 2          | 0.2  | 0.90        |
| 3          | 1    | 0.5         |
| 4          | 0.2  | 0.1         |
| 5          | 0.02 | 0.01        |

Table 2: Smoothing strength α and resulting soft label for each CSAT class.

## 3.5 Experiments

We conducted a total of 24 experiments, each exploring a different permutation of the experimental conditions. The conditions are shown in Table 3. This setup allowed us to explore multiple experimental conditions while generating variability for statistical analysis but limiting the number, cost and carbon footprint of our experiments.

## 4 Results And Discussion

## 4.1 Type Of Classification

On precision and "precision*" of classes 1 and 5, binary classification with soft labels performed best (Figure 2 and Table 4; all 8 t-tests of binary soft labels vs. the other two approaches have p-values < 0.05). Furthermore, within the binary classification + fan-out approach, soft labels worked better than hard labels on almost every metric: higher precision on every class (most important for our users), and better recall on 3 of the 5 classes.

5-way classification had the highest precision on the center classes by a wide margin. It also produced the strongest recall for classes 1 and 5, probably because the binary classification approaches optimize most for the precision of these 2 classes. In terms of overall accuracy (with a tolerance of 1), binary soft labels and 5-way classification were statistically tied (90.4% vs 90.6% respectively), with binary hard labels trailing slightly (89.2%).

## 4.2 Conclusion

In this paper, we propose an approach to maximize the precision of certain classes in the context of an ordinal classification problem. We show that for our application it makes sense to cast the problem first as binary classification and restore the 5 output classes using probability thresholds. We also show that the use of soft labels outperforms that of hard labels in our setup. This approach can benefit applications where ratings can be formulated as ordinal classes and where some classes are emphasized over others in the primary user workflows. We also show that the problem of CSAT prediction is amenable to modern deep learning techniques with high accuracy, using the transcript as the sole input to the model.
## 5 Limitations - We use only the transcript as input to the model. This implies the model wouldn't know that a hold was long unless the customer said "that was a long hold" or something to that effect. The transcript usually contains language indicating the hold is taking place "may i place you on hold?", "thanks for holding", etc, but rarely indicates the exact duration of the hold. Similarly, the model doesn't know the wait time unless the customer complains explicitly about it. | Metrics | Binary Hard labels | Binary Soft labels | 5-way-classification | |---------------------|----------------------|----------------------|------------------------| | Class 1 Precision* | 77.9% | 83.4% | 78.0% | | Class 2 Precision * | 57.3 | 63.8 | 82.9% | | Class 3 Precision * | 23.7 | 35.9 | 79.7% | | Class 4 Precision * | 78.5 | 80.3% | 94.9% | | Class 5 Precision * | 94.5% | 95.3% | 92.5% | | Class 1 Precision | 68.5% | 74.2% | 69.5% | | Class 2 Precision | 5.8% | 5.6% | 19.4% | | Class 3 Precision | 6.7% | 11.7% | 50.5% | | Class 4 Precision | 12.4% | 14.1% | 55.0% | | Class 5 Precision | 88.5% | 89.7% | 86.7% | | Class 1 Recall | 58.0% | 55.5% | 65.2% | | Class 2 Recall | 7.7% | 10.7% | 3.1% | | Class 3 Recall | 6.4% | 13.2% | 14.4% | | Class 4 Recall | 9.2% | 15.4% | 11.9% | | Class 5 Recall | 91.7% | 90.7% | 96.9% | ![5_image_0.png](5_image_0.png) - Predicted CSAT is available 10 seconds after the end of the call. New applications become possible if predicted CSAT is made available continuously throughout the call since a manager would be able to "whisper" (advise the agent on the call without the customer hearing), message or barge (jump into the call as a 3rd party). An important concern if predicted CSAT is computed repeatedly will be managing the cost and carbon footprint, possibly by using a small model as an initial gating function. - If a call center doesn't collect CSAT surveys through our company, their accuracy will be impacted as they won't be reflected in the training or test set. We ensure customers understand this by training our agents to explain it and including it in help center documentation. ## 6 Ethics Statement - We have read and abide by the ACL Code of Ethics 8. - **Data Privacy**: We follow the data privacy measures in place at our company which include scrubbing personal identifiable information (PII) from customer data and restricting our use of customer data to improvements to the services we provide them. We did not rely on any external annotations. - **Intended Use by Customers**: In the product we highlight both high and low CSAT calls for review by a supervisor to ensure employees receive a mix of positive and constructive feedback. Since supervisors review calls, they can adjust incorrect classifications produced by the model. 8https://www.aclweb.org/portal/content/acl-code-ethics - **Potential bias:** We sample subpopulations of users and their customers and evaluate internally to ensure the model outputs are not biased against specific groups. - **Carbon Footprint:** We minimized the carbon footprint of our experiments while meeting the need for variability required by statistical analysis. We achieved this by running 24 experiments, each with different experimental conditions, rather than running multiple experiments with different random seeds within each of the 24 conditions. In total the experiments described in this paper represented less than 500 hours of computation on a single V100 GPU. 
## 7 Acknowledgements We thank personA for setting up some data tables used to generate some of the analyses presented here, and personB for his idea to use CSAT classes 2, 3 and 4 for lower confidence predictions. We also thank personC and personD for advice on preprocessing and model training, and personE for help reviewing this paper. ## References Jeremy Auguste, Delphine Charlet, Geraldine Damnati, Frederic Bechet, and Benoit Favre. 2019. Can we predict self-reported customer satisfaction from interactions? pages 7385–7389. Joseph Bockhorst, Shi Yu, Luisa Polania, and Glenn Fung. 2017. Predicting self-reported customer satisfaction of interactions with a corporate call center. In Machine Learning and Knowledge Discovery in Databases, pages 179–190, Cham. Springer International Publishing. Jan Chorowski and Navdeep Jaitly. 2016. Towards better decoding and language model integration in sequence to sequence models. *CoRR*, abs/1612.02695. Laurence Devillers, Christophe Vaudable, and Clément Chastagnol. 2010. Real-life emotion-related states detection in call centers: a cross-corpora study. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 2350–2353. ISCA. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. *CoRR*, abs/1706.04599. Yanping Huang, Yonglong Cheng, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, and Zhifeng Chen. 2018. Gpipe: Efficient training of giant neural networks using pipeline parallelism. CoRR, abs/1811.06965. Rafael Müller, Simon Kornblith, and Geoffrey E. Hinton. 2019. When does label smoothing help? *CoRR*, abs/1906.02629. Youngja Park and Stephen C. Gates. 2009. Towards real-time measurement of customer satisfaction using automatically generated call transcripts. In *Proceedings of the 18th ACM Conference on Information and Knowledge Management*, CIKM '09, page 1387–1396, New York, NY, USA. Association for Computing Machinery. Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2018. Regularized evolution for image classifier architecture search. *CoRR*, abs/1802.01548. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *CoRR*, abs/1706.03762. Christophe Vaudable and Laurence Devillers. 2012. Negative emotions detection as an indicator of dialogs quality in call centers. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5109–5112. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. 
Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big Bird: Transformers for longer sequences. *CoRR*, abs/2007.14062.

Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2017. Learning transferable architectures for scalable image recognition. *CoRR*, abs/1707.07012.

G. Zweig, O. Siohan, G. Saon, B. Ramabhadran, D. Povey, L. Mangu, and B. Kingsbury. 2006. Automated quality monitoring for call centers using speech and NLP technologies. In *Proceedings of NAACL, Companion Volume: Demonstrations*, pages 292–295, New York City, USA. Association for Computational Linguistics.
wang-etal-2023-accurate
Accurate Training of Web-based Question Answering Systems with Feedback from Ranked Users
https://aclanthology.org/2023.acl-industry.63
Recent work has shown that large-scale annotated datasets are essential for training state-of-the-art Question Answering (QA) models. Unfortunately, creating this data is expensive and requires a huge amount of annotation work. An alternative and cheaper source of supervision is given by feedback data collected from deployed QA systems. This data can be collected from tens of millions of user with no additional cost, for real-world QA services, e.g., Alexa, Google Home, and etc. The main drawback is the noise affecting feedback on individual examples. Recent literature on QA systems has shown the benefit of training models even with noisy feedback. However, these studies have multiple limitations: (i) they used uniform random noise to simulate feedback responses, which is typically an unrealistic approximation as noise follows specific patterns, depending on target examples and users; and (ii) they do not show how to aggregate feedback for improving training signals. In this paper, we first collect a large scale (16M) QA dataset with real feedback sampled from the QA traffic of a popular Virtual Assistant.Second, we use this data to develop two strategies for filtering unreliable users and thus de-noise feedback: (i) ranking users with an automatic classifier, and (ii) aggregating feedback over similar instances and comparing users between each other. Finally, we train QA models on our filtered feedback data, showing a significant improvement over the state of the art.
# Accurate Training Of Web-Based Question Answering Systems With Feedback From Ranked Users

Liang Wang∗
Boston University
Department of Mathematics and Statistics
leonwang@bu.edu

Ivano Lauriola
Amazon Alexa
lauivano@amazon.com

Alessandro Moschitti
Amazon Alexa
amosch@amazon.com

## Abstract

Recent work has shown that large-scale annotated datasets are essential for training state-of-the-art Question Answering (QA) models. Unfortunately, creating this data is expensive and requires a huge amount of annotation work. An alternative and cheaper source of supervision is given by feedback data collected from deployed QA systems. This data can be collected from tens of millions of users at no additional cost for real-world QA services, e.g., Alexa, Google Home, etc. The main drawback is the noise affecting feedback on individual examples. Recent literature on QA systems has shown the benefit of training models even with noisy feedback. However, these studies have multiple limitations: (i) they used uniform random noise to simulate feedback responses, which is typically an unrealistic approximation as noise follows specific patterns, depending on target examples and users; and (ii) they do not show how to aggregate feedback for improving training signals. In this paper, we first collect a large-scale (16M) QA dataset with real feedback sampled from the QA traffic of a popular Virtual Assistant. Second, we use this data to develop two strategies for filtering unreliable users and thus de-noising feedback: (i) ranking users with an automatic classifier, and (ii) aggregating feedback over similar instances and comparing users with each other. Finally, we train QA models on our filtered feedback data, showing a significant improvement over the state of the art.

## 1 Introduction

Large pre-trained language models, e.g., based on the Transformer neural network (Lin et al., 2022), have recently improved Natural Language Processing and Information Retrieval in several tasks, e.g., document classification (Chaudhary et al., 2020), Question Answering (QA) (Garg et al., 2020), and neural retrieval (Karpukhin et al., 2020). Transformer models can be conveniently pre-trained on large-scale unlabeled web data through general task-agnostic unsupervised objectives, e.g., Masked Language Model (MLM) and Next Sentence Prediction (NSP), making the networks easy to specialize to various downstream tasks (Devlin et al., 2019). However, they still require labeled training data (expensive to produce) to be adapted to the target domain.

Recent research in QA has shown that the more data is used for fine-tuning the models, the better the final performance (Huber et al., 2022). For instance, Garg et al. (2020) showed and measured the benefits of using large-scale labeled web datasets, such as Google NQ (Kwiatkowski et al., 2019), for training their answer selection ranker. The authors divided the fine-tuning stage into two steps: transfer and adapt (TANDA). In the first step, the pre-trained Transformer is tuned on general out-of-domain large-scale QA data. Then, the resulting model is further trained on the target domain. However, building large-scale annotated resources is costly in terms of expert annotator work and annotation time. To reduce costs, various strategies to generate cheap training data have been recently explored, including data augmentation (Pappas et al., 2022; Riabi et al., 2021), distant supervision (Lin et al., 2018; Zhao et al., 2021), and active learning (Kratzwald et al., 2020).
A rather different approach is based on the availability of feedback data, i.e., the QA system output (typically an answer) is evaluated by users. These question/answer (q/a) pairs can be used for further improving the training of QA systems (Li et al., 2022; Campos et al., 2020). User feedback can, thus, be used to build large and cheap training data, especially when the QA system constitutes the back bone of commercial applications such as virtual assistants, e.g., Google Home, Alexa, Siri, to which million questions are asked every day. Unfortunately, feedback data is affected by noise, i.e., the ∗Work done during an internship at Amazon Alexa. 660 individual feedback over a q/a pair has high probability to be incorrect. Indeed, users are not expert annotators, and they often provide judgments with lack of knowledge, subjectivity, and specific preferences. For instance, some users may return a positive feedback to a wrong answer that sounds funny. Therefore, feedback data may be ineffective for further training accurate models: in some cases, it can even degrade the accuracy of models trained on small datasets labelled by expert annotators. How to obtain effective training data labelled with feedback is an interesting open problem. For example, Rebbapragada and Brodley (2007) and Sun et al. (2007) provide weights to the training instances according to a mislabeling probability. However, to the best of our knowledge, previous work used artificially generated datasets, e.g., by adding random uniform noise to labeled corpora (Campos et al., 2020). This does not represent the characteristics of real users' generated traffic, as the noise distributes differently with respect to different questions (according to categories, semantics, pragmatics, trends, etc.). Users have indeed different attitudes and behaviors, and they may interact differently with the responses of a QA system. Approximating the error probability distribution of feedback is a main challenge, preventing weighting methods to be effective. To study this problem, we first collect and analyze a large-scale users' feedback dataset (16M q/a pairs) sampled from the traffic of a popular virtual assistant. Then, we propose two effective solutions to reduce the label noise and improve training performance: (i) we use an automatic classifier, trained on off-the-shelf answer selection data to automatically grade the reliability of users. Then, we select training examples using the most reliable users. (ii) We cluster together similar questions so that we compare the answer from different users and again rank them based on how much they are close to the majority judgments. Our experiments show that both proposed methods improve stateof-the-art models much more than using noisy and unfiltered data. ## 2 Related Work Training QA systems with feedback data after models deployment received considerable attention in the past years. Campos et al. (2020) for instance, simulated a scenario where an initial deployed machine reading model is continuously trained through feedback responses, showing promising sandbox performance. Authors simulated feedbacks as positive whenever the answer span predicted by the system matches the gold span exactly, and negative otherwise. A similar scenario was considered by Li et al. (2022), where feedback was collected through crowdworkers. The authors themselves pointed out the limits of these studies as the results were based on artificial data distribution. 
Other authors explored feedback such as explanation of incorrect responses by chatbots (Li et al., 2016; Weston, 2016). However, the feedback in these studies is automatically generated using heuristics. Similarly, Rajani et al. (2019) collected human explanations for commonsense QA in the form of natural language sequences, and used the data to improve existing models with state-of-theart performance. The main weakness of previous work is that most of existing analyses (i) are based on artificially generated data (Campos et al., 2020; Li et al., 2022, 2016; Weston, 2016) or (ii) use crowdsourced workload (i.e., annotators) to simulate real feedback data. As a consequence, these approaches are not suitable for industrial scenarios since they do not consider noise distribution of real users' generated data. In contrast, our work is based on real user data from a virtual assistant, which provides answer using a web-based QA system. Our results are generalizable to most industrial scenarios targeting open domain QA. ## 3 Operational Setting We consider the task of selecting the correct answer sentence among a set of candidates extracted from web-documents retrieved for a given question. Formally, let Σ∗ be the set of strings (or general sentences) and Q ⊆ Σ∗ be the set of questions according to a certain input distribution. Given an input question q ∈ Q and a set of k sentences {si} k i=1 ∈ Σ∗ k(e.g., returned by a search engine), the answer selector can be defined as a function r : Q × Σ∗ → R, which assigns a probability score to each q/a pair, r(*q, s*i), and returns the answer associated with the highest ranked pair, i.e., arg maxi=1...k r(*q, s*i). We use the state-of-the-art model for answer sentence selection (Garg et al., 2020; Lauriola and Moschitti, 2021), which implements r with a Transformer model. This encodes an input pair (q, si) as [CLS] q [SEP] si *[EOS]* and returns the associated score through a classification ## 3.1 Internal Feedback Dataset (Ifd) We use a popular virtual assistant to collect a large internal dataset constituted by: (i) open domain user questions, (ii) their answers selected from web-documents by a QA system, and (iii) feedback provided by users to the answers. We first pre-processed the data by de-identifying the users. Then, we limited the sample to questions asked in 2022. Finally, we removed questions asked by users who provided feedback to less than 4 answers. Overall, we collected a dataset, D, of 16M tuples, (qi, si, fi, uj ), where qiis an open-domain question, siis the answer generated by the virtual assistant, fi *∈ {−*1, +1} is the binary feedback returned by the user, and uj is the user id. In the remainder of this paper, we refer this resource as Internal Feedback Dataset (IFD). Please note that, as the questions are from real users, for several internal and external regulations, we cannot release the resource for public research. To improve replicability of our findings, we provide empirical results on the impact of our approach and data on public benchmarks. ## 4 Training With De-Noised Feedback The standard approach to de-noise training examples is to assign them different weights according to their reliability. Finding these weights for individual feedback instances is an open problem. We propose two methods: The first is based on a completely new idea (to our knowledge): we observed that different users have different accuracy in assessing the correctness of answers. 
Since manually labeling millions of users is not feasible, we use an automatic answer selector classifier, trained on off-the-shelf data. The second approach is based on standard collaborative filtering applied to user clusters, which are created by comparing questions using state-of-the-art text similarity techniques. ## 4.1 User Relevance Score We provide a Relevance Score (RS) to users in two steps: First, we measure the agreement between the user annotation and an automatic answer selection model, r, which provides a probability of correctness of the answers for the target questions. More formally, the agreement/similarity between the classifier score and the associated feedback is r(qi, si)·fi, where r(qi, ai) ∈ [−1, +1] is the score and fi *∈ {−*1, +1} is the binary user feedback. Intuitively, the higher is the agreement the higher is the probability that the feedback is correct. In the second step, we computed RS of an user as the average of the agreement scores computed on all examples he/she gave a feedback. In short, we assign RS to the user uj , defined as: $$R S(u_{j})={\frac{\sum_{(q_{i},s_{i},f_{i},u_{j})\in{\mathcal{D}}_{u_{j}}}r(q_{i},s_{i})\cdot f_{i}}{|{\mathcal{D}}_{u_{j}}|}}$$ where Duj ⊂ D is the set of tuples for which the user uj gave a feedback, i.e., Duj = {(qi, si, fi, uk) ∈ D : k = j}. Finally, to obtain accurate training data we consider the annotation of only reliable users. These are obtained by ranking users with RS and discard those having a score below a certain threshold. We build r using a state-of-the-art Electra-large model trained for answer sentence selection as described by (Garg et al., 2020). ## 4.2 Collaborative Filtering Our second filter is based on the intuition that users have different knowledge and they may provide accurate feedback to certain type of questions and low quality feedback to others. Broadly speaking, the feedback assigned by other users to a given answer can provide some insights on the quality of a target user. If a user tends to disagree with the majority of feedback for a given question, then we can filter out the user as their judgment cannot be considered reliable. Let X be user-question matrix where the ji-th entry, i.e., X[j,i] *∈ {−*1, 0, +1}, contains the feedback of the user j to the question i. +1 positive feedback, −1 negative, 0 missing. By construction, the i-th column, that is, X[:,i], contains all feedback collected for a given question. We define the voted feedback, ¯fi, for the question, i, as the average of non-missing judgments, which can be easily computed as the ratio between the L1 and L0 norm of the i-th column vector: ¯fi = ∥X[:,i]∥1 ∥X[:,i]∥0 . Eventually, let f¯ be the voted feedback vector ([ ¯f1, ¯f2 *. . .*]). We define the reliability of the user j as the average proximity between its feedbacks and f¯, that is, X[j,:]f¯⊤. Similarly to the previous approach, we use this reliability score to rank users and to filter those with a score below a certain threshold. The main issue with this approach is the sparsity of the user-question matrix. Typically, questions are unique word sequences, and thus the amount of questions with feedback from different users is low. In order to overcome this limitation, we extend the collaborative approach by aggregating questions (i.e., columns), which are semantically identical or at least similar. We define a cluster of questions, ck, as the set of semantically equivalent questions, and we represent the user-cluster interactions through a matrix Xc. 
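Before the cluster-level matrix is detailed, the two reliability computations above can be made concrete with a small dense sketch (toy values and NumPy only for clarity; the production user-question matrix is large and sparse, and the scores come from the answer-selection ranker):

```python
import numpy as np

# Relevance Score (Section 4.1): mean agreement between ranker scores and one user's feedback.
def relevance_score(scores, feedback):
    # scores: r(q_i, s_i) in [-1, +1]; feedback: +/-1; both restricted to the user's tuples D_u.
    return float(np.mean(np.asarray(scores) * np.asarray(feedback)))

# Collaborative reliability (Section 4.2) at the question level, with a toy dense matrix.
# X[j, i] in {-1, 0, +1}: feedback of user j on question i (0 = missing).
X = np.array([[+1,  0, -1, +1],
              [+1, +1, -1,  0],
              [-1, +1, +1, -1]])

n_observed = np.abs(X).sum(axis=0)                 # non-missing judgments per question
voted = X.sum(axis=0) / np.maximum(n_observed, 1)  # average of the non-missing +/-1 feedback
reliability = X @ voted                            # proximity of each user to the voted feedback

# Users are ranked by reliability and those below a chosen threshold are filtered out.
ranked_users = np.argsort(-reliability)
```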
The jk entry of the user-cluster relations matrix, that is Xc [j,k] contains the feedback of the user j for a question in the k-th cluster. If a user provides feedback to multiple questions belonging to the same cluster we average the them. We find question clusters using a standard kmeans algorithm, where the semantic distance between questions is computed with a Transformer model. We used a RoBERTa-large model trained on various semantic similarity tasks1and further fine-tuned on Quora Question Pairs, a popular dataset for question-question similarity tasks. Examples are encoded as [CLS] question [SEP] answer [EOS]. The representations developed in the last Transformer layer associated with the [CLS] token are then used to compute the distance between two questions. For simplicity, we used the standard euclidean distance function. During a preliminary experimentation phase, we set the number of clusters to 50,000. This value represents a good trade-off between quality of the clusters (i.e., we do not have unrelated questions, which look similar, in the same cluster) and the amount of feedback per cluster. We observed that more than 95% of the clusters have at least 100 pieces of feedback. ## 5 Empirical Assessment We divided our experiments in 3 groups: First, we analyze the quality of our filtered data through manual evaluation. Then, we show that the massive amount of noisy feedback can improve the performance of QA models already trained on large-scale high-quality annotated data. Finally, we analyze the impact of the de-noising strategies described in the previous sections. ## 5.1 Qualitative Evaluation The first step of our analysis concerns the evaluation of the proposed filtering strategies. Both filters rank the users according to their likelihood 1The checkpoint is available here https://huggingface. co/sentence-transformers/all-roberta-large-v1 | Sample | top 10% | bottom 10% | |-------------------------|-----------|--------------| | Random | 0.49 | | | Relevance filtering | 0.73 | 0.36 | | Collaborative filtering | 0.53 | 0.38 | Table 1: MCC computed between expert annotators and various samples of feedback, including random sample and top/bottom 10% of the rank produced by our filters. probability of being good annotators. Hence, users in the top of the rank provide higher quality data compared to users in the bottom of the rank. To evaluate this assumption, we ranked all tuples from IFD according to relevance and collaborative scores and we randomly sampled 200 tuples from the top and the bottom 10% of the ranks. Then, we manually analyzed these samples to evaluate and quantify the amount of label errors (noise). We used expert annotators and the Matthews Correlation Coefficient (MCC) (Chicco and Jurman, 2020) to measure the agreement between users' feedback and annotators' labels. This value ranges between -1 (totally uncorrelated) and 1 (perfectly correlated). The higher is the value of MCC on the sample, the higher is the alignment between feedback labels and annotators' judgments (that we consider as gold standard), and consequently the quality of the feedback data. Compared to other metrics, e.g., accuracy or F1, MCC is not affected by class-skewness. We also computed the same score on a random sample from the original unfiltered dataset for further comparison. The results in Table 1 shows that, notwithstanding the limits and simplicity of the proposed filtering strategies, they are clearly able to correctly rank tuples, placing noisy examples lower in the rank. 
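The agreement values reported in Table 1 are plain MCC computations between the expert labels and the raw user feedback of each 200-tuple sample; a minimal sketch with toy judgments (the real annotations are internal) is:

```python
from sklearn.metrics import matthews_corrcoef

# Parallel +/-1 judgments for one annotated sample; toy values for illustration only.
expert_labels = [+1, -1, +1, +1, -1, +1]
user_feedback = [+1, -1, -1, +1, -1, +1]
print(matthews_corrcoef(expert_labels, user_feedback))  # closer to 1 = cleaner feedback sample
```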
The samples annotated from the top 10% of the ranks have indeed a higher MCC score compared to the tuples sampled from the lower 10%. Also, the relevance filtering seems to work better. ## 5.2 Training With Noisy Feedback Datasets We consider two popular annotated datasets for answer sentence selection tasks: ASNQ (Garg et al., 2020) and WikiQA (Yang et al., 2015). ASNQ is a large-scale resource derived from Natural Questions (Kwiatkowski et al., 2019). For each input question, candidate answer sentences are extracted from a selected wikipedia page. The dataset consists of 20M labeled q/a pairs, making it one of the largest existing resources for this task. WikiQA is a curated small | Configuration | P@1 | MAP | MRR | |-------------------|-------|-------|-------| | IFD | 66.9 | 78.7 | 79.8 | | ASNQ | 83.1 | 88.0 | 89.4 | | WikiQA | 75.5 | 83.6 | 85.0 | | IFD ❀WikiQA | 81.9 | 88.0 | 89.1 | | ASNQ ❀WikiQA | 84.4 | 88.8 | 90.4 | | ASNQ ❀IFD ❀WikiQA | 86.1 | 90.4 | 91.5 | resource consisting of 3,047 questions and 29,258 q/a pairs. Similarly to ASNQ, sentences were extracted from Wikipedia abstracts associated with each input question. In our setting, we consider (i) WikiQA as low-resource target domain, where we test our models, (ii) ASNQ as large-scale annotated resource to improve the QA performance on WikiQA (as described by Garg et al. (2020)), and (iii) IFD to study the impact of feedback data on the target domain. Model selection We start from an Electra-base (110M parameters, 12 layers) public checkpoint. During the training, we set (i) the batch size to 1024 q/a pairs, (ii) the max sequence length for the input of the Transformer to 128 tokens, (iii) the max training epochs to 5 for ASNQ and IFD, and to 10 for WikiQA, and (iv) a constant lr schedule with linear warm-up of 0.1 epoch. We used WikiQA validation set to monitor the validation loss after each epoch and, in case, terminate the training, and to select the optimal learning rate, with values [1, 2, 5] × 10−[5,6]. We used ASNQ, IFD, and WikiQA to train the models with different configurations described in the next sections, and we used the test split of WikiQA as final test set. For each experiment, we train and evaluate models 3 times and average the final results computed on the test set. Training We evaluated the following training ## Strategies: - 1-step training - We fine-tune a public Transformer checkpoint on IFD, ASNQ, or WikiQA and test the models on WikiQA. This allows us to isolate and quantify the impact of large noisy data (IFD), large high-quality data (ASNQ), and limited but in-domain data (WikiQA). - 2-steps training - Inspired by recent research in answer sentence selection (Garg et al., 2020), we first train models on large datasets, i.e., IDF or ASNQ, and then we further fine-tune the models on the target domain (WikiQA). - 3-steps training - We sequentially fine-tuned the model on (i) ASNQ, (ii) IFD, and (iii) WikiQA. This experiment shows that, even in scenarios where large amount of labeled data is available for training, feedback helps the model and improves the final accuracy. The results of these experiments (see Table 2) show multiple keypoints: First, large resources do not necessarily improve the performance if their quality is poor, e.g., small but high-quality indomain training data (WikiQA, 30k q/a pairs) performs better than large noisy dataset (IFD). Second, both ASNQ and IFD significantly improve the performance on WikiQA when using sequential finetuning approaches (lines 4-5). 
Not surprisingly, the improvement of ASNQ is higher as it contains highquality annotations. Moreover, a last fine-tuning on the target domain (WikiQA) always improves the performance (lines 1-2 compared to 4-5). Finally, the combination of high and low quality large resources (line 6) further improves the performance. Although IFD contains a considerable amount of noise, it is still a valuable resource to improve the performance of the model. Even thought a large resource is available, i.e., ASNQ, feedback data is still rather valuable. ## 5.3 Relevance And Collaborative Filtering Evaluation We analyzed the impact of de-noising mechanisms to improve the quality of data and consequently the final performance. For each filter, relevance and collaborative, we first compute the rank of users as described in Section 4, then, we consider the following filtered IFD versions: Full We use the full set, regardless of the produced ranks. This version of IFD represents the baseline where the filtering is not used. Top We use the top 10% of tuples from IFD according to the rank of the filter. This allows us to restrict the training to high-quality feedback. Best We train models with 10%, 20%, 30%,. . . ,100% of IFD selected on top on the rank of the filter. Then, we select the model with lowest validation loss. This helps finding an optimal trade-off between data quantity and quality. Relevance f. Collaborative f. Configuration P@1 MAP MRR P@1 MAP MRR Full (100%) 80.4 86.2 87.7 80.4 86.2 87.7 Best (10-100%) 80.6 86.6 88.2 **81.3 87.0 88.4** Random (10%) 78.2 85.1 86.7 78.2 85.1 86.7 Top (10%) **81.3 87.0 88.5** 79.8 86.6 87.9 Full ❀WikiQA 86.1 90.4 91.5 **86.1 90.4 91.5** Best ❀WikiQA 86.8 90.7 91.8 85.8 **90.4** 91.4 Ran. ❀WikiQA 85.9 90.1 91.4 85.9 90.1 91.4 Top ❀WikiQA **87.0 90.7 92.0** 84.6 89.5 90.6 Random We use 10% of tuples from IFD randomly sampled. This baseline emphasizes the effect of the filter compared to the usage of Top tuples. Both strategies indeed, **Random** and Top, use the same amount of data. For each subset and filter, we sequentially trained an Electra-base on ASNQ and then on IDF (filtered). Results in Table 3 show multiple key aspects: First, finding the optimal trade-off between quantity and quality (**Best**) usually improves the performance compared to the unfiltered IFD (**Full**), suggesting that the filtering methods work as expected. The only exception occurs when using collaborative filtering and fine-tuning models on WikiQA. Note that this approach is computationally expensive as we train a model for each possible threshold (10%, 20%. . . ). Second, using only the Top 10% of the data further improves the results when adopting the relevance filtering. This indicates that: (i) relevance filtering works well and can be used to significantly reduce the amount of data by a magnitude, while improving the QA performance; (ii) collaborative filtering shows some promising results only when models are not fine-tuned on the target domain. However, both approaches represent a solid base for future research in this field. Note that these results corroborate our manual analysis showed in Table 1. Both experiments, manual rank evaluation and models training, suggest that the relevance filtering provides, compared to collaborative approach, a better rank and thus a better data filtering and final performance. ## 6 Conclusion Feedback data represents a huge and convenient source of training data, which can be used to improve the performance of deployed QA systems. 
However, the noise affecting feedback can degrade model performance. This paper introduces two ML approaches to filter feedback data and to reduce the amount of noise. Our filters are based on the assumption that users act differently from each other. Thus, their behaviour induces different reliability, which if modeled correctly can help to build more effective training data. We used a large set of question, answer, and feedback tuples (16M) sampled from a commercial virtual assistant to validate this hypothesis. Our extensive empirical assessment clearly shows that filtered feedback can significantly improve the performance of a deployed QA system, even when the models are trained on massive high-quality annotated resources. Note that this work does not aim to compare different filtering methods to elect a superior approach. We conjecture that the collaborative filtering can be further improved, for instance by deeply analyzing different clustering approaches or embeddings extractors. On the contrary, our goal is (i) to highlight the importance and impact of using real feedback data to improve the performance of industrial QA models, and (ii) to provide insights for future research directions. To the best of our knowledge, this work represents the first analysis on real feedback data and its integration into model training. These findings reveal promising directions to improve deployed QA systems. ## 7 Limitations This paper introduces two heuristic approaches to filter noisy feedback data. Although we showed that these simple methods improve the performance of QA models, they have various limitations and they represent only an initial step for future research on real feedback data. The core of the relevance filtering is based on the assumption that correct feedback occur when the model and the user agree on the labels. This approach may introduce a selection bias towards tuples associated with "simpler" q/a pairs, which are already well understood by the model and thus potentially ineffective for training. Although the model can easily discard q/a pairs whose feedback are clearly different, the risk is that uncertain pairs close to the classification boundary (i.e., model score close to 0) are penalized and easily filtered as they will receive a reliability score close to 0. Regarding the collaborative approach, the main limitation concerns the clustering strategy adopted to aggregate questions. On one hand, we want to reduce as much as possible the number of clusters such that we have a sufficiently high amount of feedback per cluster. This makes the proximity computation between users and the voted feedback vector robust. On the other hand, the clustering may introduce additional noise by aggregating different and nonequivalent questions into the same cluster. This aspect may reduce the reliability of the voted feedback vector. Finally, as mentioned in the previous sections, feedback data and q/a pairs used in this work come from real users traffic. For this reason, we only described the high-level approach of integrating feedback and we showed the impact on public benchmarks. A harsh limitation is caused by the private nature of the customer data, which cannot be released for public research. ## References Jon Ander Campos, Kyunghyun Cho, Arantxa Otegi, Aitor Soroa, Eneko Agirre, and Gorka Azkune. 2020. Improving conversational question answering systems after deployment using feedback-weighted learning. 
In Proceedings of the 28th International Conference on Computational Linguistics, pages 2561–2571, Barcelona, Spain (Online). International Committee on Computational Linguistics. Yatin Chaudhary, Pankaj Gupta, Khushbu Saxena, Vivek Kulkarni, Thomas Runkler, and Hinrich Schütze. 2020. TopicBERT for energy efficient document classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1682–1690, Online. Association for Computational Linguistics. Davide Chicco and Giuseppe Jurman. 2020. The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation. *BMC genomics*, 21(1):1–13. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7780–7788. Patrick Huber, Armen Aghajanyan, Barlas Oguz, Dmytro Okhonko, Scott Yih, Sonal Gupta, and Xilun Chen. 2022. CCQA: A new web-scale question answering dataset for model pre-training. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pages 2402–2420, Seattle, United States. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Bernhard Kratzwald, Stefan Feuerriegel, and Huan Sun. 2020. Learning a Cost-Effective Annotation Policy for Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3051–3062, Online. Association for Computational Linguistics. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. *Transactions of the* Association for Computational Linguistics, 7:453– 466. Ivano Lauriola and Alessandro Moschitti. 2021. Answer sentence selection using local and global context in transformer models. In *European Conference on* Information Retrieval, pages 298–312. Springer. Jiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823. Zichao Li, Prakhar Sharma, Xing Han Lu, Jackie CK Cheung, and Siva Reddy. 2022. Using interactive feedback to improve the accuracy and explainability of question answering systems post-deployment. arXiv preprint arXiv:2204.03025. Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. 2022. A survey of transformers. *AI Open*. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. 
In *Proceedings of the 56th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1736– 1745, Melbourne, Australia. Association for Computational Linguistics. Dimitris Pappas, Prodromos Malakasiotis, and Ion Androutsopoulos. 2022. Data augmentation for biomedical factoid question answering. In *Proceedings of* the 21st Workshop on Biomedical Language Processing, pages 63–81, Dublin, Ireland. Association for Computational Linguistics. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4932–4942, Florence, Italy. Association for Computational Linguistics. Umaa Rebbapragada and Carla E Brodley. 2007. Class noise mitigation through instance weighting. In *European conference on machine learning*, pages 708– 715. Springer. Arij Riabi, Thomas Scialom, Rachel Keraron, Benoît Sagot, Djamé Seddah, and Jacopo Staiano. 2021. Synthetic data augmentation for zero-shot crosslingual question answering. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 7016–7030, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jiang-wen Sun, Feng-ying Zhao, Chong-jun Wang, and Shi-fu Chen. 2007. Identifying and correcting mislabeled training instances. In *Future generation communication and networking (FGCN 2007)*, volume 1, pages 244–250. IEEE. Jason E Weston. 2016. Dialog-based language learning. Advances in Neural Information Processing Systems, 29. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In *Proceedings of the 2015 conference on empirical methods in natural language* processing, pages 2013–2018. Chen Zhao, Chenyan Xiong, Jordan Boyd-Graber, and Hal Daumé III. 2021. Distantly-supervised dense retrieval enables open-domain question answering without evidence annotation. In *Proceedings of the* 2021 Conference on Empirical Methods in Natural Language Processing, pages 9612–9622, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
jiang-etal-2023-spm
{SPM}: A Split-Parsing Method for Joint Multi-Intent Detection and Slot Filling
https://aclanthology.org/2023.acl-industry.64
In a task-oriented dialogue system, joint intent detection and slot filling for multi-intent utterances become meaningful since users tend to query more. The current state-of-the-art studies choose to process multi-intent utterances through a single joint model of sequence labelling and multi-label classification, which cannot generalize to utterances with more intents than training samples. Meanwhile, it lacks the ability to assign slots to each corresponding intent. To overcome these problems, we propose a Split-Parsing Method (SPM) for joint multiple intent detection and slot filling, which is a two-stage method. It first splits an input sentence into multiple sub-sentences which contain a single-intent, and then a joint single intent detection and slot filling model is applied to parse each sub-sentence recurrently. Finally, we integrate the parsed results. The sub-sentence split task is also treated as a sequence labelling problem with only one entity-label, which can effectively generalize to a sentence with more intents unseen in the training set. Experimental results on three multi-intent datasets show that our method obtains substantial improvements over different baselines.
# Spm: A Split-Parsing Method For Joint Multi-Intent Detection And Slot Filling Sheng Jiang1∗ , Su Zhu1*, Ruisheng Cao2**, Qingliang Miao**1and Kai YU1,2 1AI Speech Co, .Ltd., Suzhou, China 2 Shanghai Jiao Tong University, Shanghai, China ## Abstract In a task-oriented dialogue system, joint intent detection and slot filling for multi-intent utterances become meaningful since users tend to query more. The current state-of-the-art studies choose to process multi-intent utterances through a single joint model of sequence labelling and multi-label classification, which cannot generalize to utterances with more intents than training samples. Meanwhile, it lacks the ability to assign slots to each corresponding intent. To overcome these problems, we propose a Split-Parsing Method (SPM) for joint multiple intent detection and slot filling, which is a two-stage method. It first splits an input sentence into multiple sub-sentences which contain a single-intent, and then a joint single intent detection and slot filling model is applied to parse each sub-sentence recurrently. Finally, we integrate the parsed results. The sub-sentence split task is also treated as a sequence labelling problem with only one entitylabel, which can effectively generalize to a sentence with more intents unseen in the training set. Experimental results on three multi-intent datasets show that our method obtains substantial improvements over different baselines. ## 1 Introduction With the development of natural language technologies, the task-oriented dialogue system has become a significant practical application. It is widely applied in many industrial scenarios. One critical component in the task-oriented dialogue system is Spoken Language Understanding (SLU) (Young et al., 2013), which is further decomposed into two sub-tasks, namely intent detection and slot filling (Tur and De Mori, 2011). The slot filling task aims to convert the user utterance into a BIO label sequence of equivalent length. As for intent detection, it is essentially a sentence classification ∗Sheng Jiang and Su Zhu are co-first authors and contribute equally to this work. task which may have one or more labels. Stateof-the-art studies tend to solve these two sub-tasks through a joint model (Goo et al., 2018; Liu et al., 2019), since slots and intents are highly correlated. Previous literature in SLU mainly focuses on utterances with a single intent. Although classic models (Qin et al., 2019) achieve remarkable performances on those single-intent datasets, they neglect the realistic situation where the user utterance may contain multiple intents. Recently, researchers switch their attention to multi-intent benchmarks, such as MixATIS and MixSNIPS (Qin et al., 2020, 2021; Xing and Tsang, 2022a). An intuitive solution is to replace the original multi-class intent detection module into multi-label classification, see Figure 1(a). More advanced methods attempt to improve upon this backbone model. For example, Qin et al. (2020, 2021) proposes AGIF and GL-GIN, which both integrate the correlation between slots and intents into model design. Nonetheless, the separate prediction of multiple intents and slots leads to the failure of assigning appropriate slots to each intent. This mis-allocation of slots to intents may cause execution errors in a practical task-oriented dialogue system. Furthermore, the generalization capability of previous models is less investigated regarding the number of intents. 
For instance, if the model is merely trained on samples containing 1-3 intents, it would perform poorly on utterances with more than 3 intents. To this end, we propose a Split-Parsing Method (SPM) for joint multi-intent detection and slot filling. SPM is a two-stage SLU system. At the first stage, the utterance is split into sub-sentences, each containing exactly one intent. These sub-sentences are independent and together constitute the complete semantic representation. At the second stage, each sub-sentence is parsed by a traditional SLU model designed for single-intent utterances. In this way, each slot is automatically aligned to its corresponding intent in the sub-sentence. Eventually, all parsing results are aggregated through post-processing.

![1_image_0.png](1_image_0.png)

As illustrated in Figure 1(b), the utterance "list LA and what class is fare code Q" is first split into two sub-sentences. Next, each sub-sentence is fed into the joint model of intent detection and slot filling to obtain the corresponding intent and slots. The slot-value pair "city_name=LA" is directly assigned to the sub-sentence "*list LA*" with intent "atis_city". Evidently, this method can effectively generalize to complicated utterances with more intents.

The proposed SPM is evaluated on two public English datasets (MixATIS and MixSNIPS, Qin et al., 2020) and a customized Chinese dataset collected from an in-vehicle dialogue system. Experimental results demonstrate that SPM can 1) achieve nearly perfect performance on the sub-sentence split task at the first stage, 2) attain stable improvements over the one-stage method regardless of the model choice at the second stage, and 3) generalize better to examples containing more intents than seen during training.

## 2 Related Work

From Single to Multiple Intents To deal with utterances with a single intent, most previous works (Liu and Lane, 2016; Hakkani-Tür et al., 2016; Zhang and Wang, 2016; Goo et al., 2018; Qin et al., 2019; Liu et al., 2019; Wang et al., 2018; Chen et al., 2019; Zhu et al., 2020) tackle the intent detection and slot filling tasks in a multi-tasking manner. For sentences with multiple intents, several works (Gangadharaiah and Narayanaswamy, 2019; Qin et al., 2020, 2021; Xing and Tsang, 2022a,b) introduce a multi-label classifier to individually predict each possible intent. Recently, Qin et al. (2021) and Xing and Tsang (2022a) proposed to model the relationships between intents and slots, which takes the interaction between these two sub-tasks into account. However, previous literature fails to predict the alignment between intents and slots, so it cannot determine which intent to assign to each slot. At the same time, practical application scenarios require non-aligned slots; with a joint slot tagger, it is impossible to align such non-aligned slots in multi-intent utterances with their corresponding intents. In this work, we propose a two-stage pipelined SLU system to tackle the slot-intent assignment problem.

Generalization to More Intents Transfer performance to utterances with more intents is rarely studied. Meng et al. (2022) proposed a sequence-to-sequence model (DialogUSR) to generate all sub-sentences for joint multi-intent detection and slot filling. However, it also suffers from poor generalization when confronted with more intents. Moreover, restricted by the auto-regressive decoding process, such a generative model introduces more overhead, especially in inference speed.
Thus, it is impractical to deploy in industrial scenarios. In contrast, we exploit a token-level sequence labeling model as the sentence splitting model, which shows better performance in both accuracy and inference speed at the first stage.

## 3 Approach

The differences between the one-stage SLU system and our proposed two-stage SLU system (SPM) are illustrated in Figure 1. In the upper part, the user's multi-intent utterance is directly passed into a joint model of multi-label intent detection and slot filling, which is trained on multi-intent data. A token-level slot sequence and multiple intents are predicted, but it is not possible to assign each slot to its corresponding intent, since alignments between slots and intents are not modeled in this method. The lower sub-figure of Fig. 1 illustrates our proposed method, where a multi-intent sentence is first split into sub-sentences by our split model (§3.1). These sub-sentences are then fed into a joint model of intent detection and slot filling separately, so we obtain slot results for each individual intent. Meanwhile, the joint model of intent detection and slot filling used in the one-stage SLU system can be applied in SPM without any change, which makes SPM portable and easy to use.

## 3.1 Split Model

As shown in Fig. 2, to split a multi-intent sentence into sub-sentences, we regard the split as a sequence labeling problem. We treat each sub-sentence as a separate slot (named "snt") and represent the output sequence with BIO tags. For example, the annotation of "list LA and what class is fare code Q" should be "B-snt I-snt O B-snt I-snt I-snt I-snt I-snt I-snt", as shown in Fig. 2. Note that conjunctions in multi-intent sentences are assigned the token-level tag "O".

![2_image_0.png](2_image_0.png) ![2_image_1.png](2_image_1.png)

The split model can be implemented as any sequence labelling model, such as a bidirectional LSTM (Graves, 2012) or a Bert model (Devlin et al., 2019). These sequence labelling models can potentially handle multi-intent sentences that are longer than those in the training set.

## 4 Experiments

In this section, SPM is trained and tested on three datasets and compared with different baseline models. Our experimental results on the English datasets show that adding a split model improves slot performance and that our model generalizes better to more intents. The evaluation results on the Chinese dataset show that SPM is faster and generalizes better than another two-stage SLU system.

## 4.1 Datasets And Metrics

Regarding the English datasets, we conduct our experiments on MixATIS and MixSNIPS (Qin et al., 2020). For Chinese, we experiment on our customized dataset built from realistic production scenarios. Detailed statistics are provided in Table 1. It is worth mentioning that in our Chinese dataset, due to the needs of real scenarios, we have designed some non-aligned slots. As shown in Fig. 3, the slot "adjustment:temperature" is non-aligned and necessary.

| Dataset | Language | Train | Validation | Test |
|----------|----------|---------|------------|--------|
| MixATIS | English | 13,162 | 756 | 828 |
| MixSNIPS | English | 39,776 | 2,198 | 2,199 |
| Ours | Chinese | 800,000 | 50,000 | 20,000 |

Table 1: Statistics of multi-intent SLU benchmarks.

The evaluation is based on multiple intent detection accuracy, sub-sentence accuracy, F1 score for slot filling, and overall accuracy for sentence-level semantic frame parsing.
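The sub-sentence accuracy above evaluates the split step of §3.1. As a rough illustration of that step (a minimal sketch with a helper name of our own, not the authors' released code), sub-sentences can be recovered from the predicted B-snt/I-snt/O tags as follows:

```python
def extract_subsentences(tokens, tags):
    """Recover single-intent sub-sentences from B-snt/I-snt/O tags.

    Tokens tagged "O" (e.g. the conjunction "and") belong to no sub-sentence,
    following the annotation scheme described in Section 3.1.
    """
    subsentences, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B-snt":              # a new sub-sentence starts here
            if current:
                subsentences.append(" ".join(current))
            current = [token]
        elif tag == "I-snt" and current:
            current.append(token)       # continue the current sub-sentence
        else:                           # "O" (or a stray tag): close any open sub-sentence
            if current:
                subsentences.append(" ".join(current))
                current = []
    if current:
        subsentences.append(" ".join(current))
    return subsentences


# The paper's running example: "list LA and what class is fare code Q"
tokens = "list LA and what class is fare code Q".split()
tags = ["B-snt", "I-snt", "O", "B-snt", "I-snt", "I-snt", "I-snt", "I-snt", "I-snt"]
print(extract_subsentences(tokens, tags))
# -> ['list LA', 'what class is fare code Q']
```

Each recovered sub-sentence is then passed unchanged to the single-intent joint model at the second stage.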
Notably, to align slots and intents, the slot F1 score on the Chinese dataset is based on the slot's intent index. Following previous works, however, the slot F1 score on the English datasets is not index-based.

## 4.2 Implementation Details

In our two-stage SLU system, the basic task is to train the sentence split model. Since our split data is labeled at the token level, any sequence labeling model can be trained directly as the split model.

Split Labels Generation: Since the multi-intent sentences in MixATIS and MixSNIPS are generated by combining single-intent sentences from ATIS (Hemphill et al., 1990) and SNIPS (Coucke et al., 2018), we extracted and labeled all sub-sentences from MixATIS and MixSNIPS with token-level BIO tags. For example, for the utterance "list LA and what class is fare code Q", the output token-level tags should be "B-snt I-snt O B-snt I-snt I-snt I-snt I-snt I-snt". During the construction of the Chinese multi-intent dataset, we first extracted a certain number of single-intent utterances from production scenarios, which are mainly in-car instructions.

Split Model: In our experiments, we use Bi-Model (Wang et al., 2018) and Bert (Devlin et al., 2019) as the split models on MixATIS and MixSNIPS. To compare with the sequence-to-sequence split model (Meng et al., 2022) on Chinese utterances, we also train Bert-based models (MiniRBT-h256 (Cui and Yang, 2022) and Bert-wwm (Cui et al., 2021)) and mT5-small (Xue et al., 2020) for sentence splitting.

Intent Detection and Slot Filling Model: For the task of intent detection and slot filling, we directly use the open-source model weights of AGIF (Qin et al., 2020) and GL-GIN (Qin et al., 2021). We also fine-tune the Bert model on the English datasets and our Chinese dataset for intent detection and slot filling.

## 4.3 Baselines

Among one-stage SLU systems, we compare our SPM with the following baselines: Bi-Model (Wang et al., 2018), AGIF (Qin et al., 2020), GL-GIN (Qin et al., 2021), ReLa-Net (Xing and Tsang, 2022b), Co-guiding Net (Xing and Tsang, 2022a), and Bert (Devlin et al., 2019). To assign slots to their corresponding intents in a one-stage model, we also train Bert with an index labeling method: index-based slot labels carry a suffix (__MI_X) indicating the intent number. For instance, the index-based slots of the utterance "list LA and what class is fare code Q" should be "O B-city_name__MI_1 O O O O O O B-toloc.city_name__MI_2", so each slot can be aligned with its intent through the suffix.

## 4.4 Results In English Datasets

With our method, adding a split model before intent detection and slot filling helps the original slot models achieve better performance. In this sub-section, we evaluate our two-stage SLU system on the test sets of MixATIS and MixSNIPS and compare it with different baselines. The first evaluation uses utterances with 1-3 intents and the second uses utterances with 3-5 intents.

Table 2 shows the intent detection and slot filling results of our two-stage SLU systems and baseline models on the original test sets of MixATIS and MixSNIPS. From Table 2, we observe that: 1. Adding a split model improves the performance of the baseline slot models to a certain extent. For instance, on MixATIS, the one-stage AGIF achieves 41.8 overall accuracy, while it achieves 43.1 when we add Bi-Model as the split model. 2. Our two-stage models remain competitive with the best-performing one-stage models (ReLa-Net and Co-guiding Net).
Even in MixSNIPS, the system with Bert-base as both the split model and the slot model gets the best slot F1 of 96.0 and overall accuracy of 83.2.

Since the test utterances in Table 2 only contain 1-3 intents, we also want to verify whether our SPM can achieve good performance on utterances with more intents than seen in the training set. Based on MixATIS and MixSNIPS, we construct another test set with 3-5 intents. The experimental results in Table 3 show that our method has better generalization ability to more intents than the one-stage models.

| Split Model | Slot Model | MixATIS Slot F1 | MixATIS Intent Acc | MixATIS Overall Acc | MixSNIPS Slot F1 | MixSNIPS Intent Acc | MixSNIPS Overall Acc |
|---|---|---|---|---|---|---|---|
| - | Bi-Model | 85.5 | 72.3 | 39.1 | 86.8 | 95.3 | 53.9 |
| - | AGIF | 87.8 | 75.6 | 41.8 | 93.3 | 96.3 | 70.0 |
| - | GL-GIN | 88.3 | 76.3 | 43.5 | 93.8 | 95.6 | 71.0 |
| - | ReLa-Net | 90.1 | 78.5 | 52.2 | 94.7 | 97.6 | 76.1 |
| - | Co-guiding Net | 89.8 | 79.1 | 51.3 | 95.1 | 97.7 | 77.5 |
| - | Bert_index | 85.5 | 82.6 | 46.1 | 95.4 | 95.4 | 81.7 |
| Bi-Model | Bi-Model | 86.7 | 75.0 | 42.3 | 90.7 | 94.0 | 61.3 |
| Bi-Model | AGIF | 88.3 | 77.3 | 43.1 | 93.0 | 95.5 | 68.9 |
| Bi-Model | GL-GIN | 88.4 | 77.1 | 43.7 | 93.9 | 94.8 | 71.4 |
| Bi-Model | Bert-base | 86.3 | 77.4 | 49.2 | 94.8 | 96.4 | 77.5 |
| Bert-base | Bi-Model | 86.7 | 75.2 | 42.4 | 89.5 | 93.7 | 57.7 |
| Bert-base | AGIF | 88.3 | 77.4 | 43.2 | 94.2 | 95.1 | 73.8 |
| Bert-base | GL-GIN | 88.4 | 77.2 | 43.7 | 95.1 | 94.2 | 76.2 |
| Bert-base | Bert-base | 86.3 | 77.9 | 49.3 | 96.0 | 95.9 | 83.2 |

Table 2: Results on the original test sets of MixATIS and MixSNIPS ("-" in the Split Model column denotes a one-stage system without a split model).

Table 3 shows the intent detection and slot filling results on the multi-intent transfer test sets. The multi-intent transfer test sets only contain utterances with 3-5 intents, which the models have never seen during training. In Table 3, we can observe that the one-stage SLU systems perform poorly in slot and intent accuracy. For instance, GL-GIN only gets 16.7 overall accuracy on MixSNIPS, while the two-stage SLU system with Bi-Model as the split model and GL-GIN as the slot model achieves 29.5. At the same time, Table 3 shows that the split model can still complete the sentence split task to a certain extent on utterances with more intents.

| Split Model | Slot Model | MixATIS Slot F1 | MixATIS Intent Acc | MixATIS Overall Acc | MixSNIPS Slot F1 | MixSNIPS Intent Acc | MixSNIPS Overall Acc |
|---|---|---|---|---|---|---|---|
| - | AGIF | 87.6 | 48.4 | 20.5 | 90.5 | 39.3 | 20.5 |
| - | GL-GIN | 88.8 | 39.0 | 17.6 | 92.5 | 28.4 | 16.7 |
| - | Bert_index | 88.0 | 14.9 | 7.5 | 93.9 | 16.7 | 12.4 |
| Bi-Model | Bi-Model | 88.4 | 62.9 | 24.9 | 86.6 | 49.8 | 20.4 |
| Bi-Model | AGIF | 88.4 | 66.9 | 27.7 | 91.8 | 50.8 | 27.6 |
| Bi-Model | GL-GIN | 89.1 | 66.4 | 27.3 | 92.9 | 50.3 | 29.5 |
| Bi-Model | Bert-base | 88.5 | 68.6 | 29.6 | 92.6 | 50.9 | 33.7 |
| Bert-base | Bi-Model | 88.5 | 80.4 | 31.2 | 89.4 | 90.7 | 38.1 |
| Bert-base | AGIF | 88.5 | 85.5 | 34.8 | 94.5 | 92.1 | 58.5 |
| Bert-base | GL-GIN | 89.2 | 85.0 | 34.3 | 95.0 | 91.6 | 62.1 |
| Bert-base | Bert-base | 88.7 | 87.3 | 37.8 | 96.1 | 93.5 | 72.8 |

Table 3: Results on the multi-intent transfer test sets (3-5 intents) of MixATIS and MixSNIPS.

| Split Model | Intents | MixATIS Sub-sentence Acc | MixSNIPS Sub-sentence Acc |
|---|---|---|---|
| mT5-base | 1-3 | 95.1 | 74.9 |
| mT5-large | 1-3 | 97.6 | 88.8 |
| mT5-xl | 1-3 | 98.1 | 98.6 |
| Bi-Model | 1-3 | 99.4 | 99.4 |
| Bert-base | 1-3 | 99.7 | 99.5 |
| mT5-small | 3-5 | 34.9 | 54.0 |
| Bi-Model | 3-5 | 86.4 | 67.4 |
| Bert-base | 3-5 | 99.2 | 99.6 |

Table 4: Sub-sentence accuracy of different split models on MixATIS and MixSNIPS utterances with 1-3 intents and on the transfer test sets with 3-5 intents.
Especially when we use Bert as both the split and slot models, the intent accuracies on MixATIS and MixSNIPS are 87.3 and 93.5.

Interestingly, we found that using a pre-trained model as the split model yields better performance for both 1-3 intents and more intents. We therefore evaluated the sub-sentence accuracy of the split models. As shown in Table 4, the sequence labeling models (Bi-Model and Bert) all achieve higher sub-sentence accuracy than the sequence-to-sequence models (mT5). This is even more obvious on utterances with more intents: Bert-base achieves 99.2 sub-sentence accuracy, whereas mT5-small only gets 34.9.

## 4.5 Results In Chinese Datasets

Our SPM still has advantages compared with the other two-stage SLU system. Here we compare a sequence-to-sequence split model (Meng et al., 2022) with Bert-based sequence labeling split models in terms of slot performance and inference speed. The sequence-to-sequence model used in Meng et al. (2022) is mT5 (Xue et al., 2020), and the sequence labeling models we use for splitting are MiniRBT (Cui and Yang, 2022) and Bert-wwm (Cui et al., 2021). All split models in Table 5 are trained on utterances with 1-5 intents.

| Split Model | Slot Model | Params of Split Model | 1-5 Intents Slot F1 | 1-5 Intents Overall Acc | 6-10 Intents Slot F1 | 6-10 Intents Overall Acc | Speed, split (ms/sentence) | Speed, split + slot (ms/sentence) |
|---|---|---|---|---|---|---|---|---|
| - | MiniRBT-index | - | 97.74 | 84.51 | 74.57 | 5.33 | - | 166.60 |
| - | Bert-wwm-index | - | 98.24 | 88.72 | 71.57 | 4.10 | - | 290.48 |
| mT5-small | MiniRBT | 300M | 98.46 | 93.61 | 95.71 | 78.79 | 41.10 | 330.06 |
| MiniRBT | MiniRBT | 10.4M | 99.46 | 95.76 | 98.99 | 88.35 | 0.99 | 291.01 |
| Bert-wwm | MiniRBT | 110M | 99.50 | 95.92 | 99.17 | 89.26 | 2.04 | 290.00 |

Table 5: Evaluation results and inference speed on the Chinese multi-intent dataset.

![5_image_0.png](5_image_0.png)

Table 5 shows the intent detection and slot filling results on utterances with 1-5 intents and 6-10 intents. We can see that the sequence labeling split models achieve higher slot F1 and overall accuracy than the sequence-to-sequence split model, even though mT5-small has far more parameters than MiniRBT and Bert-wwm. In terms of generalization to more intents, using a sequence labeling model as the split model is also more advantageous: from Table 5, the slot F1 and overall accuracy on 6-10 intents when using mT5-small as the split model are 95.71 and 78.79, while for MiniRBT they are 98.99 and 88.35.

Our models are also faster at inference. Table 5 reports the time required by different models to complete the full slot filling task on a multi-intent utterance, that is, the time for sentence splitting plus slot filling for all sub-sentences. In Table 5, each utterance needs 41.10 ms of splitting time when the split model is mT5-small, whereas for MiniRBT and Bert-wwm the splitting time is only 0.99 ms and 2.04 ms. Moreover, as shown in the rightmost column of Table 5, the time added by our split step is negligible in the whole process.

## 4.6 Case Study

To demonstrate how our two-stage SLU system outperforms one-stage SLU systems, we present the intent detection and slot filling results for a case with 4 intents in Fig. 4. Fig. 4(A) shows the correct slot labels and intent labels. The utterance is split into 4 sub-sentences and each slot is aligned with its intent.
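To illustrate how the parsed sub-sentences can be aggregated into one sentence-level frame (the post-processing step of SPM), here is a minimal sketch. The data layout, function name, and the second sub-sentence's intent and slot names are our own assumptions, not the paper's implementation.

```python
def merge_parses(sub_sentences, parses):
    """Aggregate per-sub-sentence parses into one sentence-level semantic frame.

    `sub_sentences` are the split model's outputs; each element of `parses` is
    assumed to be an (intent, {slot: value}) pair from the single-intent joint
    model.  Keeping slots nested under their own intent is exactly the alignment
    that a single joint tagger over the whole utterance cannot provide.
    """
    frame = []
    for idx, (text, (intent, slots)) in enumerate(zip(sub_sentences, parses), start=1):
        frame.append({
            "intent_index": idx,
            "sub_sentence": text,
            "intent": intent,
            "slots": dict(slots),
        })
    return frame


# Illustrative values for "list LA and what class is fare code Q"; the second
# sub-sentence's intent and slot names are assumptions, not taken from the paper.
subs = ["list LA", "what class is fare code Q"]
parses = [("atis_city", {"city_name": "LA"}),
          ("atis_abbreviation", {"fare_basis_code": "Q"})]
for entry in merge_parses(subs, parses):
    print(entry)
```

Because slots stay nested under the sub-sentence they came from, the intent-slot alignment comes for free, which is the property the case study highlights.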
Returning to Fig. 4, as the number of intents increases, the sentence length also grows, which makes it hard for a one-stage model to detect all the intents and accurate slots. For example, the intents detected in Fig. 4(B) miss the true first and last intents.

## 5 Future Work

Because of the split-parse approach, errors in sentence segmentation inevitably propagate to the parsing stage. The next step is to mitigate such cumulative errors, and there are two directions for optimization. First, the accuracy of the sentence segmentation model can be further improved to reduce the probability of cumulative errors. Second, the slot filling model can be designed to support utterances with 1-2 intents, so that when the split model incorrectly merges multiple sub-sentences into one, a slot filling model that supports multiple intents can still tag slots and detect intents correctly.

## 6 Conclusion

In this paper, we propose a two-stage SLU system based on the split-parsing method. By plugging our split model into the original SLU system, the performance can be improved. Compared with the commonly used one-stage SLU systems, our method generalizes better to more intents than seen in training. Meanwhile, the split-parsing method effectively aligns slots with their corresponding intents in the segmented sentences. Compared with other two-stage SLU systems that use a sequence-to-sequence split model, our model achieves better intent detection and slot filling performance with higher inference speed.

## Acknowledgements

This research was funded by Jiangsu Key R&D Plan (Industry Foresight and Key Core Technologies) 2022 under Grant BE2022059-1, and Scientific and Technological Innovation 2030 under Grant 2021ZD0110900.

## References

Qian Chen, Zhu Zhuo, and Wen Wang. 2019. BERT for joint intent classification and slot filling. *CoRR*, abs/1902.10909.

Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. *arXiv preprint arXiv:1805.10190*.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2021. Pre-training with whole word masking for chinese bert. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3504–3514.

Yiming Cui and Ziqing Yang. 2022. Minirbt. https://github.com/iflytek/MiniRBT.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proc. NAACL*, pages 4171–4186.

Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. 2019. Joint multiple intent detection and slot labeling for goal-oriented dialog. In *NAACL*, pages 564–569.

Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In *Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)*, pages 753–757.

Alex Graves. 2012. *Supervised Sequence Labelling with Recurrent Neural Networks*. Springer Berlin Heidelberg.

Dilek Hakkani-Tür, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In *Proc. INTERSPEECH*, pages 715–719.
Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language systems pilot corpus. In *Speech and Natural Language:* Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. In *Proc. INTERSPEECH*, pages 685–689. Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, and Jinan Xu. 2019. Cm-net: A novel collaborative memory network for spoken language understanding. *arXiv preprint arXiv:1909.06937*. Haoran Meng, Zheng Xin, Tianyu Liu, Zizhen Wang, He Feng, Binghuai Lin, Xuemin Zhao, Yunbo Cao, and Zhifang Sui. 2022. Dialogusr: Complex dialogue utterance splitting and reformulation for multiple intent detection. *arXiv preprint arXiv:2210.11279*. Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation framework with token-level intent detection for spoken language understanding. *arXiv preprint arXiv:1909.02188*. Libo Qin, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, and Ting Liu. 2021. Gl-gin: Fast and accurate non-autoregressive model for joint multiple intent detection and slot filling. arXiv preprint arXiv:2106.01925. Libo Qin, Xiao Xu, Wanxiang Che, and Ting Liu. 2020. Agif: An adaptive graph-interactive framework for joint multiple intent detection and slot filling. *arXiv* preprint arXiv:2004.10087. Gokhan Tur and Renato De Mori. 2011. *Spoken language understanding: Systems for extracting semantic information from speech*. John Wiley & Sons. Yu Wang, Yilin Shen, and Hongxia Jin. 2018. A bi-model based rnn semantic frame parsing model for intent detection and slot filling. arXiv preprint arXiv:1812.10235. Bowen Xing and Ivor Tsang. 2022a. Co-guiding net: Achieving mutual guidances between multiple intent detection and slot filling via heterogeneous semanticslabel graphs. In *Proceedings of the 2022 Conference* on Empirical Methods in Natural Language Processing, pages 159–169. Bowen Xing and Ivor W Tsang. 2022b. Group is better than individual: Exploiting label topologies and label relations for joint multiple intent detection and slot filling. *arXiv preprint arXiv:2210.10369*. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. *arXiv preprint* arXiv:2010.11934. Steve Young, Milica Gašic, Blaise Thomson, and Ja- ´ son D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. *Proceedings of the* IEEE, 101(5):1160–1179. Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In *Proc. IJCAI*, pages 2993–2999. Su Zhu, Ruisheng Cao, and Kai Yu. 2020. Dual learning for semi-supervised natural language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:1936–1947.
zhang-etal-2023-nag
NAG-NER: A Unified Non-Autoregressive Generation Framework for Various NER Tasks
https://aclanthology.org/2023.acl-industry.65
Recently, the recognition of flat, nested, and discontinuous entities by a unified generative model framework has received increasing attention both in the research field and industry. However, the current generative NER methods force the entities to be generated in a predefined order, suffering from error propagation and inefficient decoding. In this work, we propose a unified non-autoregressive generation (NAG) framework for general NER tasks, referred to as NAG-NER. First, we propose to generate entities as a set instead of a sequence, avoiding error propagation. Second, we propose incorporating NAG in NER tasks for efficient decoding by treating each entity as a target sequence. Third, to enhance the generation performances of the NAG decoder, we employ the NAG encoder to detect potential entity mentions. Extensive experiments show that our NAG-NER model outperforms the state-of-the-art generative NER models on three benchmark NER datasets of different types and two of our proprietary NER tasks. (Code will be publicly available to the research community upon acceptance.)
# Nag-Ner: A Unified Non-Autoregressive Generation Framework For Various Ner Tasks Xinpeng Zhang1, Ming Tan2∗, Jingfan Zhang3∗**, Wei Zhu**4∗† 1NetEase Youdao 2Southern University of Science and Technology 3 University of Ottawa 4East China Normal University ## Abstract Recently, the recognition of flat, nested, and discontinuous entities by a unified generative model framework has received increasing attention both in the research field and industry. However, the current generative NER methods force the entities to be generated in a predefined order, suffering from error propagation and inefficient decoding. In this work, we propose a unified non-autoregressive generation (NAG) framework for general NER tasks, referred to as NAG-NER. First, we propose to generate entities as a set instead of a sequence, avoiding error propagation. Second, we propose incorporating NAG in NER tasks for efficient decoding by treating each entity as a target sequence. Third, to enhance the generation performances of the NAG decoder, we employ the NAG encoder to detect potential entity mentions. Extensive experiments show that our NAG-NER model outperforms the state-of-the-art generative NER models on three benchmark NER datasets of different types and two of our proprietary NER tasks. ## 1 Introduction Named entity recognition (NER) is a fundamental task in the field of information extraction. It is the basic task for many natural language processing applications like dialogue systems, document analysis, and search engines. Currently, NER tasks can be divided into three subtasks (Yan et al., 2021), i.e., flat NER, nested NER, and discontinuous NER, as illustrated in Figure 1. Recently, researchers have grown interested in tackling the three subtasks via a unified model architecture, which we refer to as general NER models (Li et al., 2020; Dai et al., 2020; Yan et al., 2021). Existing literature for general NER models fall into the following three categories: (1) span-based models (Yu et al., 2020a; ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) Bekoulis et al., 2018); (2) models based on carefully designed data structures like hyper-graphs and shift-reduce parsers (Dai et al., 2020; Wang et al., 2021b); (3) sequence-to-sequence (seq2seq) models (Yan et al., 2021; Zhang et al., 2022). Among the three branches of literature, the seq2seq models (Yan et al., 2021; Fei et al., 2021) have achieved SOTA performances. However, they organize target entities into a single sequence according to a predetermined order. This setting is against the intuition that the target entities are essentially an unordered set and results in an incorrect bias (entity-order confounder) to the model (Zhang et al., 2022). In addition, sequentially generating target entities suffers from two disadvantages: (1) low inference speed due to autoregressive decoding; (2) Error propagation, i.e., errors generated by the previous steps could misguide the current and future generation steps. In this paper, we propose a non-autoregressive generation (NAG) framework for named entity recognition, *NAG-NER* (as depicted in Figure 2). Given an input sentence, the framework first encodes the sentence and detects where and how many entities will start at each token of the input sentence. Then, it asks the decoder to generate the set of targeted entities accordingly. 
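As a rough illustration of this two-step inference flow (detect candidate entity starts, then decode entity sequences), consider the following minimal Python sketch. The function names, stand-in callables, and the toy sentence are placeholders we introduce for exposition, not the released implementation; in the actual model the decoder calls are batched and run in parallel.

```python
def nag_ner_inference(tokens, encode, count_entity_starts, decode_entity):
    """High-level sketch of NAG-NER inference.

    `encode`, `count_entity_starts`, and `decode_entity` are stand-ins for the
    NAG encoder, the entity-start classifier, and the NAG decoder.
    """
    hidden = encode(tokens)                          # contextualized token states
    starts = count_entity_starts(hidden)             # {word index: #entities starting there}
    predictions = set()
    for word_idx, n_entities in starts.items():
        for n in range(n_entities):                  # n-th entity starting at this word
            entity_tokens, entity_type = decode_entity(hidden, word_idx, n)
            predictions.add((entity_tokens, entity_type))
    return predictions


# Toy stand-ins (purely illustrative) so the sketch runs end-to-end.
tokens = ["Barack", "Obama", "visited", "Berlin"]
encode = lambda toks: toks                           # identity "encoder"
count_entity_starts = lambda h: {0: 1, 3: 1}         # entities start at words 0 and 3
decode_entity = lambda h, i, n: ((("Barack", "Obama"), "PER") if i == 0
                                 else (("Berlin",), "LOC"))
print(nag_ner_inference(tokens, encode, count_entity_starts, decode_entity))
# -> {(('Barack', 'Obama'), 'PER'), (('Berlin',), 'LOC')}
```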
We conducted extensive experiments on three benchmark datasets (CADEC, ACE2004, CoNLL03) and two proprietary datasets we developed (referred to as CME and QER). The results validate that our method can perform comparably to or outperform the previous generative NER models while achieving significant speedups.

![1_image_0.png](1_image_0.png)

To summarize, our main contributions include the following:

- We propose NAG-NER, a novel non-autoregressive entity generation framework for general NER tasks. Distinct from the seq2seq models, it avoids the entity-order confounder and error propagation by generating multiple entity sequences simultaneously with a pre-trained non-autoregressive generation model.
- Experimental results show that our model achieves SOTA performances while being efficient.

## 2 Related Work

Due to limited length, we include the related works of NAG in Appendix A. We also include a preliminary introduction to NAG in Appendix B.

## 2.1 Generative Ner Models

Generative models have been investigated to solve different types of NER tasks in a unified model framework. Straková et al. (2019) propose to transform the BILOU labels (Ratinov and Roth, 2009) of source tokens into a label sequence via heuristic rules. Athiwaratkun et al. (2020) propose an augmented natural language output format for flat NER tasks, where the type tags of words are placed along with the words to form a sentence-like target sequence. Tan et al. (2021) propose to generate entities as a set. However, this model directly generates entity position and type information via a set generation framework, failing to employ the text generation capabilities of pre-trained generative models. Yan et al. (2021) combine pre-trained BART with a delicately designed copying mechanism and achieve promising performance on a wide range of NER benchmarks. Fei et al. (2021) train an LSTM from scratch to generate the target sequence and devise a novel memory-augmented pointer mechanism to enhance the interactions between the current pointer and the previously recognized entity mentions. Lu et al. (2022) transform different information extraction tasks into a structured extraction language and solve general information extraction tasks with a unified text-to-structure generation framework. Zhang et al. (2022) point out two kinds of incorrect bias (the pre-context confounder and the entity-order confounder) in generative NER models and propose two data augmentation methods to address these biases, but this model still has to generate all the entities sequentially.

Our NAG-NER framework contributes to the literature as follows: (a) we generate all the entities of a given input sentence in parallel via a NAG model, resulting in significant speedup and avoiding the pre-context confounder; (b) our framework bypasses the entity-order confounder since it generates a set of entities in a single forward pass.

## 3 Methods

In this section, we formally introduce our framework in detail (Figure 2). We uniformly formulate the tasks of recognizing flat, nested, and discontinuous entities as NAG-based entity sequence generation problems. We take a pre-trained NAG model consisting of an encoder and a decoder as the model backbone. Denote the tokenized (Sennrich et al., 2016) input sentence with length L as S = [w_0, ..., w_{L-1}], and the target output as the set of entity sequences {(ES_i, T_i)}_{i=1}^{M}, where M is the number of entities, ES_i is the i-th entity consisting of l_i tokens, and T_i is the type label of entity i.
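As a purely illustrative instance of this notation (our own toy example in the spirit of the discontinuous case in Figure 1; word-level tokens are used instead of subword tokens for readability, and the ADE type follows CADEC), the input and target could look as follows:

```python
# A toy instance of the notation above (illustrative only).
S = ["Pain", "in", "hip", "and", "knees"]              # input sentence, L = 5

# Target output: a *set* of (entity token sequence ES_i, type label T_i) pairs.
# "Pain in knees" is discontinuous in S (it skips "hip and"), which is why each
# entity is kept as its own token sequence rather than a contiguous span.
targets = {
    (("Pain", "in", "hip"), "ADE"),
    (("Pain", "in", "knees"), "ADE"),
}

M = len(targets)                                       # number of entities, M = 2
entity_lengths = {es: len(es) for es, _ in targets}    # l_i for each ES_i
print(M, entity_lengths)
```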
## 3.1 Encoder

The pre-trained NAG encoder encodes the input sequence S and outputs the contextualized representations:

$$H = \mathrm{Encoder}(S),\qquad(1)$$

where H ∈ R^{L×d} and d is the hidden size of the NAG model. Since the transformer-based NAG model provides contextualized encodings of subword tokens, we use the representation of each word's first subword token as the vector representation of that word:

$$v_{j}=H[\mathrm{start}_{j}],\qquad(2)$$

where v_j is the representation of the j-th word in the original input sequence and start_j is the index of the first subword token of word j in sequence S.

## 3.2 Entity Start Classification

To motivate the encoder to gain a deep understanding of the input sentence and to provide information for the decoder, we ask the encoder to detect where an entity will start and how many entities start at that position (in nested NER tasks, multiple entities may start at the same word). This task is formulated as a multi-class classification task on each word j, predicting the number of entities ne_j that start at word j:

$$\mathbf{p}(\mathrm{ne}_{j})=\mathrm{Softmax}(v_{j}W_{\mathrm{ne}}+b_{\mathrm{ne}}),\qquad(3)$$

where p(ne_j) ∈ R^{O_max+1} and O_max is the maximum number of entities starting at the same word. Note that ne_j = 0 means that no entity starts at word j. This task is optimized with a cross-entropy loss L_ne.

## 3.3 Entity Generation

As shown in Figure 2, we ask the NAG decoder to generate entity information for each word j of the input. The non-autoregressive generation (NAG) model was proposed (Gu et al., 2018) to speed up autoregressive generation; it removes the order dependency between target tokens Y and can generate the tokens of the target sentence simultaneously given input X:

$$\mathbf{P}_{NAG}(Y|X;\Theta)=\prod_{t=1}^{l}\mathbf{P}(y_{t}|X;\Theta),\qquad(4)$$

where l denotes the length of the target sentence. The decoder of a NAG model needs to know the target length before generation. A common practice is to treat length prediction as a classification task, using information from the encoder's output to make predictions. However, following Qi et al. (2020), we discard this length prediction task by using a unified length for the output sequence and take the first generated end-of-sentence token ⟨/s⟩ as the ending signal for the generated sequence.

Denote the maximum entity length (at the subword level) as l_max ∈ Z^+. During training, we sort (in ascending order) the ne_j entities by their spanning length, that is, the length of the span that envelops the entity.² For each word j and each n = 0, 1, 2, ..., ne_j − 1, we would like the decoder to generate the entity (ES_{j,n}, T_{j,n}). In the negative case where no named entity starts from word j, ES_{j,n} is the subtoken sequence of word j and T_{j,n} is the non-entity tag ⟨O⟩.

²This is reflected in Figure 1, where the spanning length of "Pain in hip" is smaller than that of "Pain in knees".

Thus, the input and target output of the decoder are:

- Decoder input: a sequence of length l_max + 3 consisting only of the mask token ⟨m⟩ is fed into the decoder;
- Targeted decoder output: the target output sequence is of the form

$$\langle s\rangle\;\mathrm{ET}_{j,n}\;\mathrm{ES}_{j,n}\;\langle/s\rangle,\qquad(5)$$

where ES_{j,n} is the subtoken sequence of this entity, and ET_{j,n} is the token added to the NAG vocabulary that represents the entity class tag T_{j,n}.
The non-entity tag O corresponds to the special token ⟨O⟩. Note that since entities are of different lengths, we will pad the target output sequence with the padding token ⟨pad⟩ to length lmax+3 if necessary. The model does not need to generate entity sequences on every word during inference. We can take advantage of the entity start classification module and decide which words are likely to be the starting words of named entities. Formally, with the threshold τs (0 < τs < 1) for entity start classification, $$S_{s t a r t}=\{j\mid\mathbf{p}(\mathbf{n}\mathbf{e}_{j}=0)<\tau_{s}\},$$ where S*start* is the collection of indexes of detected entity starting words, and the number of entities is obtained by nej ← arg maxnej p(nej ). Then we will generate entity sequences by feeding the decoder a sequence of length lmax + 3 consisting of only the mask token ⟨m⟩ for each j ∈ S*start* and each n < nej . After the decoder generates an output sequence, the entity tag token ETj,n is obtained as the second generated token, and the token sequence from the third position till the first ⟨/s⟩ token is the generated entity token sequence. If there is no ⟨/s⟩ token in the output sequence, all the output tokens starting from the third position will be considered the generated entity token sequence. Given a decoder input and targeted output sequences, we can calculate the generation loss Lg of NAG, which is the average cross-entropy loss on each token according to Equation 4. We will discard the losses from ⟨pad⟩ tokens. ## 3.4 Positional Embeddings For Entity Generation Note that the decoder receives input sequences consisting of only mask tokens ⟨m⟩, and it does not know where the entity starts and which contexts it should pay more attention to during generation. In addition, we should also inform the decoder about the number of entities starting from the same word so that the decoder can generate entities of different spanning lengths instead of generating the same entity. Thus, we introduce two positional embedding to the decoder's embedding layer: - **word start position embedding (WSPE)**: as depicted in Figure 2, all the mask tokens of the decoder input share the same word start position index, that is, startj , the index of the first subtoken of word j. Furthermore, the word start position embedding vector is obtained by looking up the positional embedding layer of the encoder. - **number-of-entities position embedding** (NEPE): for each n < nej , n represents the nth entity, and the n-th shortest entity starting from word j. We map n to a randomly initialized learnable embedding vector NEPEn. This positional embedding is also shared by all the tokens of the decoder input. These two positional embeddings will be added to the decoder's original embedding layer. ## 3.5 Overall Fine-Tuning Objective During fine-tuning of NAG, the whole framework of NAG-NER is optimized end-to-end, with the total losses of entity start classification and entity sequence generation: $${\mathcal{L}}={\mathcal{L}}_{g}+{\mathcal{L}}_{\mathrm{ne}}.$$ $$\left(7\right)$$ ## L = Lg + Lne. (7) 4 Experiments 4.1 Evaluation Datasets And Metrics To show that our proposed method can be used in various NER subtasks, we conducted experiments on three English open-sourced benchmark datasets (CADEC, ACE2004, CoNLL03) and two Chinese proprietary tasks (CME, QER). CoNLL03 and QER are flat NER tasks, CADEC is a discontinuous NER task, and ACE2004 contains nested entities. 
CME is a complex task containing both discontinuous and nested entities. We include introductions and statistics of the datasets in Appendix C. For evaluation, strict evaluation metrics are applied, where an entity is confirmed correct only if all of its words and its type label are recognized correctly. Precision (P), Recall (R), and Micro F1 score (F1) are reported in the results. ## 4.2 Implementation Details We employ the BANG model (Qi et al., 2020) as the backbone for English tasks while we pre-train a NAG model in Chinese based on the codebase of BANG on our corpus containing 120 million documents in Chinese. For fine-tuning on each task, the special tokens corresponding to the entity type labels (including the non-entity label ⟨O⟩) are added to the vocabulary, and their embedding vectors are randomly initialized. For optimization, we use the AdamW optimizer (Loshchilov and Hutter, 2018) with a linear learning rate schedule and 6% of the optimization steps as warm-up steps. After each epoch, we evaluate the fine-tuned model on the development set and save the model checkpoints. After fine-tuning ends, the best checkpoint will be evaluated on the test set, and the test result will be reported. Details of hyper-parameter tuning and settings are included in Appendix D. We report the average test performance on five random seeds. ## 4.3 Compared Methods We mainly compare our model with SOTA generative NER models listed in Section 2. We also compare our method with SOTA discriminative NER models. See Appendix E for an introduction to them. For a fair comparison, since our NAG model is in the base size, we run the baseline models with BERT-base (Devlin et al., 2019) (12 encoder layers) or BART-base (Lewis et al., 2019) (6 encoder layers and 6 decoder layers).3 Lu et al. (2022) is run with the implementation of PaddlePaddle4. All the baselines are run with their open-sourced codes with their suggested hyper-parameters. ## 4.4 Main Results Table 1 and Table 2 show the comparison between our model and other models in three benchmark datasets and two proprietary datasets. Results on the open-sourced benchmark datasets Table 1 demonstrates that on the benchmark datasets, our method has clear advantages over the previous SOTA generative methods on complex discontinuous or nested NER tasks CADEC and ACE2004. On these tasks, our method also outperforms the models designated for specific tasks, like Wang et al. (2021b). On the flat NER tasks, although the previous generative models slightly underperform the discriminative models, our method can obtain results comparable to the strong discriminative models, demonstrating the broad applicability of our method. | Model | CADEC | ACE2004 | CoNLL03 | |------------------------------------|---------|-----------|-----------| | F1 | F1 | F1 | | | Discriminative NER models | | | | | Tang et al. (2018) | 65.1 | - | - | | Dai et al. (2020) [ELMO] | 68.7 | - | - | | Wang et al. (2021b) [BERT-base] | 69.7 | - | - | | Yu et al. (2020a) [BERT-base] | - | 85.6 | 92.4 | | Li et al. (2020) [BERT-base] | - | 84.2 | 92.7 | | Xu et al. (2021) [BERT-base] | - | 85.0 | - | | Shen et al. (2021) [BERT-base] | - | 85.7 | 92.7 | | Akbik et al. (2019) [BERT-base] | - | - | 92.8 | | Wang et al. (2021a) [BERT-base] | - | - | 92.8 | | Set Generation NER models | | | | | Tan et al. (2021) [BERT-base] | - | 85.6 | 92.4 | | Generative NER models | | | | | Straková et al. (2019) [BART-base] | - | 84.3 | 92.4 | | Yan et al. (2021) [BART-base] | 68.7 | 85.2 | 92.5 | | Fei et al. 
(2021) [BART-base] | 70.6 | - | - | | Zhang et al. (2022) [BART-base] | 70.8 | 85.2 | 92.7 | | Lu et al. (2022) [BERT-base] | - | 85.3 | 92.3 | | NAG-NER (ours) | 71.3 | 85.9 | 92.8 | Table 1: Results on the three NER benchmark datasets. The results show that our NAG-NER method has clear advantages on complex NER tasks while performing comparably with the SOTA models on flat NER tasks. | Model | CME | QED | |------------------------------------|-------|-------| | F1 | F1 | | | Discriminative NER models | | | | Yu et al. (2020a) [BERT-base] | 88.9 | - | | Li et al. (2020) [BERT-base] | 87.8 | 94.5 | | Shen et al. (2021) [BERT-base] | 88.7 | 94.6 | | Akbik et al. (2019) [BERT-base] | - | 94.5 | | Wang et al. (2021a) [BERT-base] | - | 94.8 | | Set Generation NER models | | | | Tan et al. (2021) [BERT-base] | - | 93.8 | | Generative NER models | | | | Straková et al. (2019) [BART-base] | 86.4 | 93.9 | | Yan et al. (2021) [BART-base] | 88.6 | 94.2 | | Zhang et al. (2022) [BART-base] | 88.5 | 94.3 | | Lu et al. (2022) [BERT-base] | 88.8 | 94.4 | | NAG-NER (ours) | 89.6 | 94.7 | Table 2: Results on the two proprietary datasets, CME and QER. Results on the proprietary datasets The results on the our proprietary dataset (Table 2) lead to similar observations with Table 1. Our method outperforms the baseline methods on the complex task CME which contains both discontinuous and nested entities. In addition, the performance of our method on the flat NER task, QER, is also comparable to the strong discriminative baseline models. ## 4.5 Inference Efficiency We compare the inference efficiency of our method with the SOTA seq2seq ner model Yan et al. (2021) and the SOTA set generation NER model Tan et al. (2021) on two tasks: ACE04 and | Methods | QPS | | |-------------------------------|------------|-----------| | ACE2004 | CME | | | Yan et al. (2021) [BART-base] | 108 (1×) | 11 | | Tan et al. (2021) [BERT-base] | 227 (2.1×) | - | | NAG-NER (ours) | 205 (1.9×) | 63 (5.7×) | CME. We run each model repeatedly on a fixed batch of samples containing four sentences for a fair comparison. For ACE2004, the batch contains 86 tokens; for CME, the batch contains 892 tokens. The efficiency is measured on an NVIDIA RTX 3090 GPU. We report the average number of sentences processed per second (QPS) of each model in Table 3. As shown in Table 3, our method is significantly faster than the seq2seq NER model and only runs slightly slower than Tan et al. (2021). Note that our CME task has a much longer average sentence length and a larger number of entities per sentence. Thus the speedup effects of our NAGNER method on CME are much more significant than on the ACE2004 task. ## 4.6 Ablation Studies We conduct an ablation study on ACE2004 and CME to verify the effectiveness of different components of NAG-NER. We consider three different variations of our whole NAG-NER model whose results are presented in Table 4: - NAG-NER-1. In our main experiments (in Table 2), we utilize a Chinese BANG model to initialize our model for the CME and QER tasks. This BANG model is pre-trained on our Chinese corpus with 100 thousand steps under a batch size of 1024. We now substitute this pre-trained checkpoint with a less well-pretrained one (at 20 thousand steps). NAG-NER1 under-performs NAG-NER on the CME test set, demonstrating that the quality of the pretrained NAG models can directly affect the results of our method. - NAG-NER-2, which is to drop the entity start classification module. 
Thus, in this model, the decoder has to generate entity sequences on each word's starting token. After dropping this module, the F1 score drops slightly on both tasks, showing that this module is beneficial for filtering out noise and increasing the Table 4: Results of ablation study on the ACE2004 and CME tasks. | Methods | ACE2004 | CME | |-----------|-----------|-------| | F1 | F1 | | | NAG-NER | 85.9 | 89.6 | | NAG-NER-1 | - | 88.7 | | NAG-NER-2 | 85.3 | 88.9 | | NAG-NER-3 | 73.2 | 69.5 | | Dataset | Test set | Yan et al. (2021) | NAG-NER | |-------------|------------|---------------------|-------------| | ACE2004 | All | 85.2 | 85.9 (+0.7) | | Overlapping | 83.2 | 84.7 (+1.5) | | | CME | All | 88.6 | 89.6 (+1.0) | | Overlapping | 83.4 | 85.8 (+2.4) | | Table 5: Results of ablation study on the ACE2004 and CME tasks. precision of the decoder's generation outputs. - NAG-NER-3, which is to drop the WSPE and NEPE positional embeddings in Section 3.4. NAG-NER-3 can not obtain a reasonable performance, demonstrating the necessity of informing our decoder where the entity starts to generate the entity mentions correctly. ## 4.7 Error Analysis To further demonstrate the advantage of our method over Seq2Seq NER models, we now analyze how our model performs when dealing with overlapping entities. In Table 5, we report the F1 scores on the whole test set and the subset of overlapping entities for the ACE2004 and CME tasks. We can see that compared with Yan et al. (2021), our NAG-NER model significantly boosts the F1 score on the overlapping entities, showing that our method is effective in recognizing complex entities. ## 5 Conclusion In this work, we propose NAG-NER, a unified generative model for various NER tasks based on non-autoregressive generation (NAG). In NAGNER, different NER tasks are formulated as entity set generation tasks. We employ the NAG encoder to detect potential entity starts and the NAG decoder to efficiently decode entity sequences. Experiments on three benchmark NER tasks and two proprietary NER tasks demonstrate that our method can outperform baseline generative NER methods while achieving higher inference speed. We also conduct ablation studies to demonstrate the necessity of each module in our NAG-NER method. ## Limitations In this work, we develop a unified model framework that is applicable to different NER tasks. Through experiments, we show the effectiveness of our method on different NER tasks, both in English and Chinese. However, we recognize that our method is not tested on NER tasks where the input sequences are extremely long. In addition, our method is not tested on few-shot scenarios. We will investigate these issues in future work. ## Ethics Statement Our model is designated to recognize entities in input sequences. We use two groups of tasks. The three benchmark datasets CADEC, ACE2004, and CoNLL03 are widely studied in the literature, and our work does not introduce new ethical issues. Since the two proprietary datasets are all anonymized and only used for training models in our institution, no ethical concerns are included in our work. ## References A. Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In *North American Chapter of the Association for Computational Linguistics*. Ben Athiwaratkun, Cícero Nogueira dos Santos, Jason Krone, and Bing Xiang. 2020. Augmented natural language for generative sequence labeling. In *Conference on Empirical Methods in Natural Language* Processing. 
Yu Bao, Hao Zhou, Shujian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, and Lei Li. 2022. Glat: Glancing at latent variables for parallel text generation. *ArXiv*, abs/2204.02030. Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. *Expert Systems with Applications*, 114:34–45. Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In *Proceedings of the 58th* Annual Meeting of the Association for Computational Linguistics, pages 5860–5870, Online. Association for Computational Linguistics. Keqi Deng, Zehui Yang, Shinji Watanabe, Yosuke Higuchi, Gaofeng Cheng, and Pengyuan Zhang. 2022. Improving non-autoregressive end-to-end speech recognition with pre-trained acoustic and language models. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8522–8526. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Hao Fei, Dong-Hong Ji, Bobo Li, Yijiang Liu, Yafeng Ren, and Fei Li. 2021. Rethinking boundaries: Endto-end recognition of discontinuous mentions with pointer networks. In *AAAI Conference on Artificial* Intelligence. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6112– 6121, Hong Kong, China. Association for Computational Linguistics. Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In *International Conference on Learning Representations*. Jiatao Gu and Xu Tan. 2022. Non-autoregressive sequence generation. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics, ACL 2022 - Tutorial Abstracts, Dublin, Ireland, May 22-27, 2022, pages 21–27. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In *North American Chapter of the Association for* Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Annual Meeting of the Association for Computational Linguistics. Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jianyun Nie, and Ji rong Wen. 2022. Elmer: A non-autoregressive pre-trained language model for efficient and effective text generation. *ArXiv*, abs/2210.13304. Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 5849–5859, Online. 
Association for Computational Linguistics. Jindˇrich Libovický and Jindˇrich Helcl. 2018. End-toend non-autoregressive neural machine translation with connectionist temporal classification. *ArXiv*, abs/1811.04719. Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam. Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. 2022. Unified structure generation for universal information extraction. In *Annual Meeting of the Association for* Computational Linguistics. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Conference on Empirical Methods in Natural Language Processing. Weizhen Qi, Yeyun Gong, Jian Jiao, Yu Yan, Dayiheng Liu, Weizhu Chen, Kewen Tang, Houqiang Li, Jiusheng Chen, Ruofei Zhang, Ming Zhou, and Nan Duan. 2020. Bang: Bridging autoregressive and nonautoregressive generation with large scale pretraining. ArXiv, abs/2012.15525. Lev-Arie Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Conference on Computational Natural Language Learning. Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In Conference on Empirical Methods in Natural Language Processing. Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. *ArXiv*, cs.CL/0306050. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In *Proceedings of the 54th Annual* Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, and Weiming Lu. 2021. Locate and label: A two-stage identifier for nested named entity recognition. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics* and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2782–2794, Online. Association for Computational Linguistics. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. Mass: Masked sequence to sequence pretraining for language generation. In *International* Conference on Machine Learning. Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Silvia Cascianelli, Giuseppe Fiameni, and Rita Cucchiara. 2021. From show to tell: A survey on deep learning-based image captioning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45:539–559. Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested ner through linearization. In *Annual Meeting of the Association for Computational Linguistics*. Shuo Sun, Hongxu Hou, Nier Wu, Ziyue Guo, and Chaowei Zhang. 2020. Multi-reward based reinforcement learning for neural machine translation. In CCL. Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, and Yueting Zhuang. 2021. A sequence-to-set network for nested named entity recognition. In *International Joint Conference on Artificial Intelligence*. Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using lstm-crf. *Wirel. Commun. Mob. Comput.*, 2018. Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, and Kewei Tu. 2021a. 
Improving named entity recognition by external context retrieving and cooperative learning. In Annual Meeting of the Association for Computational Linguistics. Yucheng Wang, Bowen Yu, Hongsong Zhu, Tingwen Liu, Nan Yu, and Limin Sun. 2021b. Discontinuous named entity recognition as maximal clique discovery. In *Annual Meeting of the Association for* Computational Linguistics. Yongxiu Xu, Heyan Huang, Chong Feng, and Yue Hu. 2021. A supervised multi-head self-attention network for nested named entity recognition. In AAAI Conference on Artificial Intelligence. Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various ner subtasks. In *Annual Meeting of the Association for Computational Linguistics*. Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020a. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470– 6476, Online. Association for Computational Linguistics. Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020b. Named entity recognition as dependency parsing. In Annual Meeting of the Association for Computational Linguistics. Chengchang Zeng, Shaobo Li, Qin Li, Jie Hu, and Jianjun Hu. 2020. A survey on machine reading comprehension: Tasks, evaluation metrics, and benchmark datasets. *ArXiv*, abs/2006.11880. Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, and Weiming Lu. 2022. De-bias for generative extraction in unified ner task. In Annual Meeting of the Association for Computational Linguistics. ## A Appendix Of Related Work A.1 Non-Autoregressive Generation Models Due to its advantages in efficiency, there is a wide range of studies for NAG models (Gu et al., 2018; Ghazvininejad et al., 2019; Qi et al., 2020). Gu et al. (2018) is the first to propose NAG paradigm to reduce the inference latency of text generation. NAG is widely studied in machine translation. Ghazvininejad et al. (2019) masks and predicts a fraction of tokens that the model is least confident about. Saharia et al. (2020) and Libovický and Libovický and Helcl (2018) use connectionist temporal classification to perform latent alignment in NAR models. Bao et al. (2022) employs the discrete latent variables to capture word categorical information and invoke an advanced curriculum learning technique, alleviating the multi-modality problem of NAG in machine translation tasks. Recently, several groups aim to apply NAG to a wider range of tasks. Qi et al. (2020) designs and pre-trains a monoligual Transformer model with multiple attention streams that can be used both as an AG model and a NAG model. They apply their pre-trained models on summarization, machine reading comprehension and dialogue response generation, and show that NAG models can achieve competitive performance with around 15 times speedup. Li et al. (2022) develops an early exiting based strategies for monoligual NAG pre-training. Our work complements the literature by: (a) we successfully apply NAG models in named entity recognition tasks; (b) we propose a span set generation task for pre-training a NAG model which is more suitable for downstream NER tasks. ## B Appendix: Preliminaries On Nag B.1 Autoregressive Generation The autoregressive generation (AG) models achieve the state-of-the-art performance on a wide range of text generation tasks like machine translation (Song et al., 2019; Sun et al., 2020), summarization (Lewis et al., 2019), image captioning (Stefanini et al., 2021). 
We now use machine translation to introduce the AG method. Given a source sentence X = (x1, x2*, ..., x*n) and the target sentence Y = (y1, y2*, ..., y*m), an AG model with parameters Θ decomposes the target distribution of translations according to the chain rule: $$\mathbf{P}_{A G}(Y|X;\Theta)=\prod_{t=1}^{m}\mathbf{P}(y_{t}|y_{<t},X;\Theta),\quad(8)$$ where y<t denotes generated previous tokens before the t-th position. During the training process, the AG model is usually trained via the teacherforcing strategy that uses ground truth target tokens as previously decoded tokens so that the output of the decoder can be computed in parallel. During inference, the AG model still needs to generate translations one-by-one from left to right until the token ⟨/s⟩ that represents the end of sentence. Although AG models achieve SOTA performances on text generation tasks, its autoregressive decoding method dramatically reduces the decoding speed and becomes the main bottleneck of its efficiency. In addition, some literature argues that autoregressive decoding is prone to error propagation Gu and Tan (2022). ## B.2 Non-Autoregressive Generation To improve the inference speed of AG models, the non-autoregressive generation (NAG) model is proposed (Gu et al., 2018), which removes the order dependency between target tokens and can generate tokens of the target sentence simultaneously: $$\mathbf{P}_{N A G}(Y|X;\Theta)=\prod_{t=1}^{m}\mathbf{P}(y_{t}|X;\Theta),\quad\quad(9)$$ where m denotes the length of the target sentence. Generally, NAG models need to have the ability to predict the length because the entire sequence needs to be generated in parallel. A common practice is to treat it as a classification task, using the information from the encoder's output to make predictions. Qi et al. (2020) discard the length prediction module and use the first generated end-ofsentence token ⟨/s⟩ as the ending signal for the generated sequence. Because NAG is only conditioned on source-side information, but AG can obtain the strong targetside context information provided by the previously generated target tokens, NAG models generally have a performance gap compared to AG models. NAG is first studied in machine translation and | Statistics | CADEC | ACE04 | CoNLL03 | | | | | | | |--------------------------|---------|---------|-----------|-------|------|------|-------|------|------| | Train | Dev | Test | Train | Dev | Test | | | | | | # Sentences | 6077 | 760 | 759 | 6200 | 745 | 812 | 14041 | 3250 | 3453 | | Avg sent. length | 14.1 | 14.2 | 14.2 | 22.5 | 23.0 | 23.0 | 13.7 | 13.5 | 13.6 | | # Entities | 5052 | 631 | 634 | 22204 | 2514 | 3035 | 23326 | 5902 | 5613 | | #types of entities | 1 | 1 | 1 | 7 | 7 | 7 | 4 | 4 | 4 | | # Nested entities | - | - | - | 10149 | 1092 | 1417 | - | - | - | | # Discontinuous entities | 543 | 64 | 68 | - | - | - | - | - | - | recently NAG is rapidly closing the performance gap against AG models via novel NAG-style pretraining (Qi et al., 2020; Bao et al., 2022). NAG is also applied to other generation tasks like summarizaiton (Li et al., 2022), automatic speech recognition (Deng et al., 2022). ## C Datasets C.1 Open-Sourced Benchmark Datasets Discontinuous NER datasets We follow Dai et al. (2020); Yan et al. (2021) to use the CADEC dataset5in our experiment. Since only the Adverse Drug Events (ADEs) entities have discontinuous annotation, only this type of entity is considered and the other 4 types of entities are discarded. 
Nested NER datasets For Nested NER subtask, we adopt the ACE20046 dataset. This dataset contains corpuses of newswire, broadcast news and telephone conversations. It contains 7 entity categories: "PER", "ORG", "LOC", "GEP", "VEH", "WEA" and "FAC". In experiment conducted on ACE2004, we use the same data split as Muis and Lu (2017); Yu et al. (2020a), the ratio between train, development and test set is 8:1:1. Flat NER datasets We adopt the CoNLL03 (Sang and Meulder, 2003) datasets. It is a flat NER dataset with a news corpus and has annotated 4 types of entities as "PER", "LOC", "ORG" and "MISC". For CoNLL03, we follow Lample et al. (2016); Yu et al. (2020a) to train our model on the concatenation of the train and development sets. ## C.2 Our Proprietary Datasets In this work, we run experiments on two of our proprietary datasets we collect to develop our information extraction or question answering systems. Chinese medical entity (CME) dataset This dataset is collected from medical records. The col-5https://data.csiro.au/collection/10948v003 6https://catalog.ldc.upenn.edu/LDC2005T09 lection of these medical records are agreed by the owner, and the data are completely anonymized before being used by the data scentists. This dataset contains 15 entity types, and contains both nested and discontinuous datasets. Query entity recognition (QER) dataset This dataset is collected from queries from an online question-answering system. The data collection is agreed by all the users. This dataset considers 7 types of entities and it is a flat NER task. Statistics of the two datasets are listed in Tabel 7. ## D Appendix For Experimental Settings D.1 Hyper-Parameters Settings We run our experiments on NVIDIA Tesla V100 GPUs. The maximum entity length lmax is set to 8 for all the three Enghlish benchmark datasets, and 16 for our two proprietary tasks. The maxmimum number of entities starting from the same word Omax is set to 5. We manually tune the hyperparameters including maximum learning rate (maxLR), epochs, maximum tokens per batch, dropout rate, threshold τs for each dataset. Specifically, we trial different values of each hyper-parameter within the hyper-parameter search space for ten times and the hyper-parameter values that results in the best performance on the development set are chosen. The search space of each hyper-parameter and the final hyper-parameter configuration are reported in Table 8. ## E Appendix: Introduction To Discriminative Ner Models Models for Discontinuous NER Tang et al. (2018) use LSTM-CRF to recognize continuous and discontinuous adverse drug reaction mentions. Dai et al. (2020) is a transition-based method that utilizes shift-reduce parsers to identify discontinuous entities. Wang et al. (2021b) solve discontin- | Statistics | CME | QER | | | | | |--------------------------|---------|--------|--------|-------|------|------| | Train | Dev | Test | Train | Dev | Test | | | # Sentences | 84328 | 10540 | 10540 | 54390 | 6800 | 6800 | | Avg sent. 
length | 231 | 232 | 232 | 14.3 | 14.1 | 14.5 | | # Entities | 1813052 | 227664 | 227652 | 70712 | 8845 | 8862 | | #types of entities | 15 | 15 | 15 | 7 | 7 | 7 | | # Nested entities | 112458 | 13463 | 12968 | - | - | - | | # Discontinuous entities | 63281 | 6590 | 6438 | - | | | | Hyper-param | Search space | CADEC | ACE2004 | CoNLL03 | CME | QER | |---------------|---------------------------|---------|-----------|-----------|-------|-------| | Epochs | { 30, 50, 75 } | 75 | 50 | 75 | 30 | 50 | | Max-LR | {1e-5, 2e-5, 5e-5, 1e-4} | 2e-5 | 5e-5 | 2e-5 | 1e-5 | 2e-5 | | batch size | {1, 2, 4, 8, 16} | 4 | 8 | 4 | 16 | 16 | | dropout rate | {0.1, 0.2, 0.3, 0.5} | 0.3 | 0.3 | 0.1 | 0.1 | 0.2 | | τs | {0.2, 0.3, 0.5, 0.7, 0.9} | 0.2 | 0.5 | 0.3 | 0.5 | 0.3 | Table 7: Statistics of the two proprietary NER datasets. Table 8: The hyper-parameter settings for each task in our experiments. uous NER via the maximal clique discovery algorithm based on graph theory. Models for Nested NER Yu et al. (2020b) formulate NER as the dependency parsing task. Li et al. (2020) adopt the pointer-based span extraction strategy widely adopted in machine reading comprehension (Zeng et al., 2020). Xu et al. (2021) treat nested NER tasks as multi-class classification of spans and solve it with a multi-head self-attention mechanism. Shen et al. (2021) is a two-stage entity extraction model which first generates candidate spans and then labels the boundaryadjusted span proposals with the corresponding categories. Models for Flat NER Akbik et al. (2019) dynamically aggregate contextualized embeddings of each encountered string and use a pooling operation to obtain a contextualized word representation from all contextualized instances. Yu et al. (2020b) and Li et al. (2020) can be used to solve both nested and flat NER tasks. Wang et al. (2021a) use the input sentence as a query to retrieve external contextual information with a search engine and concatenate the sentence with external contexts.
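Several of the nested-NER systems surveyed above (e.g., the span-classification approaches of Xu et al. (2021) and Shen et al. (2021)) share a common skeleton: enumerate candidate spans up to a maximum length and classify each one independently, which naturally allows overlapping entities. The sketch below illustrates only that skeleton; the label set and the toy scorer are hypothetical placeholders for a trained span classifier, not the implementation of any cited model.

```python
from typing import Callable, List, Tuple

# Hypothetical label set; real systems use dataset-specific entity types.
LABELS = ["O", "PER", "ORG", "LOC", "MISC"]


def enumerate_spans(tokens: List[str], max_len: int = 8) -> List[Tuple[int, int]]:
    """All candidate spans (start, end] of at most max_len tokens
    (the appendix uses a maximum entity length of 8 for the English datasets)."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_len, len(tokens)) + 1):
            spans.append((start, end))
    return spans


def classify_spans(tokens: List[str], score_fn: Callable, max_len: int = 8):
    """Score every candidate span with an external classifier and keep non-'O' labels.

    Because spans are scored independently, overlapping (nested) entities can
    both be kept, which is what distinguishes this family from sequence labeling.
    """
    predictions = []
    for start, end in enumerate_spans(tokens, max_len):
        label = score_fn(tokens, start, end)  # e.g., a BERT-based span classifier
        if label != "O":
            predictions.append((start, end, label))
    return predictions


if __name__ == "__main__":
    # Toy scorer standing in for a trained model.
    def toy_scorer(tokens, start, end):
        text = " ".join(tokens[start:end])
        if text == "Bank of England":
            return "ORG"
        if text == "England":
            return "LOC"
        return "O"

    toks = "the Bank of England raised rates".split()
    print(classify_spans(toks, toy_scorer))
    # [(1, 4, 'ORG'), (3, 4, 'LOC')] -> nested entities are both recovered
```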
kakkar-etal-2023-search
Search Query Spell Correction with Weak Supervision in E-commerce
https://aclanthology.org/2023.acl-industry.66
Misspelled search queries in e-commerce can lead to empty or irrelevant products. Besides inadvertent typing mistakes, most spelling mistakes occur because the user does not know the correct spelling and types the word as it is pronounced colloquially. This colloquial typing creates countless misspelling patterns for a single correct query. In this paper, we first systematically analyze and group different spell errors into error classes and then leverage the state-of-the-art Transformer model for contextual spell correction. We overcome the constraint of limited human-labelled data by proposing novel synthetic data generation techniques for the voluminous generation of training pairs needed by data-hungry Transformers, without any human intervention. We further utilize weakly supervised data coupled with curriculum learning strategies to improve on tough spell mistakes without regressing on the easier ones. We show significant improvements from our model on human-labelled data and in online A/B experiments against multiple state-of-the-art models.
# Search Query Spell Correction With Weak Supervision In E-Commerce Vishal Kakkar1, Chinmay Sharma2, Madhura Pande2, **Surender Kumar**2 1Microsoft India 2Flipkart Internet Private Limited vishalkakkar90@gmail.com {chinmay.sharma,madhura.pande,surender.k}@flipkart.com ## Abstract Misspelled search queries in e-commerce can lead to empty or irrelevant products. Besides inadvertent typing mistakes, most spell mistakes occur because the user does not know the correct spelling, hence typing it as it is pronounced colloquially. This colloquial typing creates countless misspelling patterns for a single correct query. In this paper, we first systematically analyze and group different spell errors into *error classes* and then leverage the stateof-the-art Transformer model for contextual spell correction. We overcome the constraint of limited human labelled data by proposing novel synthetic data generation techniques for voluminous generation of training pairs needed by data hungry Transformers, without any human intervention. We further utilize weakly supervised data coupled with curriculum learning strategies to improve on tough spell mistakes without regressing on the easier ones. We show significant improvements from our model on human labeled data and online A/B experiments against multiple state-of-art models. ## 1 Introduction An incorrectly spelt search query can return irrelevant products in e-commerce which hurts both the business and experience for a user who is unable to find the intended product. As per the latest English Proficiency Index report of written test data from 100 countries, 1 only 29% countries are proficient in English. Our platform operates in Asian countries like India which are ranked low in this survey. As per the latest Indian census only ∼10% of the Indian population is versed in the English language thus causing high spell errors in the user queries. Spelling errors are generally classified as - typographic and cognitive (Toutanova and Moore, 2002). Typographic errors emanate from the typing mistakes on the keyboard which worsens on mobiles due to smaller keypads. Cognitive 1https://www.ef.com/wwen/epi errors happen when a user does not know how to correctly spell a word. This leads to phonetic errors like "cenityjer" for "sanitizer". Edit-distance2 based spell corrections at run-time are computationally expensive for higher edit distances at web-scale. Also, this approach typically works at an individual word level (Norvig, 2009; Garbe, 2021; Whitelaw et al., 2009) is not able to identify contextual mistakes like "greeting cart". Human labeling being expensive and time-consuming, these methods use a large amount of user query reformulations as training data for learning. Most of the web search work relies on this for generating correct-incorrect word pairs(Whitelaw et al., 2009; Gao et al., 2010). Query reformulations alone fail to cover all kinds of errors like phonetic (cognitive) ones - "metras" vs "mattress" or edit/phonetic+word-compounding like "ball pen" vs "bolpan" when the user herself does not know the correct spelling. Fig. 1 shows the distribution of different types of spell errors on our platform over a sample of ∼ 23k queries as classified by human judges. Inspired by the low-resource machine translation research, the latest spell systems create synthetic data to learn Neural (Zhou et al., 2017; Jayanthi et al., 2020) and statistical (Brill and Moore, 2000; Cucerzan and Brill, 2004) models for spell correction. 
To solve at internet scale, 3 billion search queries a month from over 450 million users, we too create a large amount of synthetic and user feedback data. Given significant colloquial phonetic mistakes (1), we develop a novel way of generating phonetic mistakes besides edit-distance and compounding ones. Along with the user query reformulations, the user clicks on the spell-corrected queries also provides another source of noisy labeled data. Curriculum learning (Bengio et al., 2009; Elman, 1993) has shown to generate robust models which show the improvements in their learning ability 2https://en.wikipedia.org/wiki/Levenshtein_distance ![1_image_0.png](1_image_0.png) when they first learn on simpler tasks followed by tougher tasks. We devise a few simple curriculums to improve accuracy on tougher mistakes. Using Transformers, the main contributions of our work are: (i) Generating large synthetic and weakly supervised labelled data for different spell mistakes including novel deep learning models for phonetic mistakes. (ii) Curriculum learning to improve on tough error classes like edit/phonetic+word-compounding without degrading performance on the easier ones. We report the effectiveness of our approach ReparoS (Reparo-Spell) through offline evaluation as well as online A/B experiment on user queries with significant improvements. In the following sections, we discuss related work followed by approach, the experiments, and finally the results. ## 2 Related Work Context-free word-based spell checkers in web search have been commonly implemented in two ways: edit distance (Damerau, 1964) and statistical noisy channel. Damerau-Levenshtein edit distance finds the correct words by editing 'k' characters of the input word. On the other hand, statistical noisy channel methods (Kernighan et al., 1990) assume that the user inadvertently introduces some noise through keyboard errors. Brill and Moore improves the standard noisy channel model. Rulebased systems like Soundex(Knuth, 1973) and Metaphone(Atkinson, 2009) generate a phoneme sequence for a given word to match against the phoneme sequence of the correct word. Contextaware spell checkers (Whitelaw et al., 2009) incorporate the surrounding words to improve the correction. However, this approach still corrects each word individually and is not able to address wordcompounding spell mistakes. Machine Translation ( (Hasan et al., 2015)) based approaches treat a correct query as a translation of the misspelled query. Zhou et al. use an RNN with encoder-decoder architecture that improves upon the statistical approaches. To avoid expensive human-labeled data, it's become common to generate copious amounts of synthetic labeled data (Etoori et al., 2018; Jayanthi et al., 2020) for model training. NeuSpell (Jayanthi et al., 2020) formulates spell correction as a sequential labeling problem where a correct word is labelled as itself and a misspelled token is labeled as its correction. These approaches suffer due to synthetic data being mainly edit-distance based errors. Simple phonetic based corrections have been explored as well (Yang, 2022), (Brill and Moore, 2000). NeuSpell also ignores the compounding errors which form more than 25% of the spell errors. Our approach overcomes these by using state-of-art context-aware Transformer models with curriculum learning, user feedback-based data and synthetic data sets for different types of spell mistakes. 
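As a concrete reference for the edit-distance family of spell checkers discussed above, the following is a minimal sketch of the restricted Damerau-Levenshtein (optimal string alignment) distance, which counts insertions, deletions, substitutions, and adjacent transpositions. It is a generic textbook implementation for illustration, not the candidate-generation code used in any of the cited systems or in ReparoS.

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein distance: insertions, deletions,
    substitutions, and adjacent transpositions (optimal string alignment)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent swap
    return d[m][n]


if __name__ == "__main__":
    print(damerau_levenshtein("nkie", "nike"))   # 1 (adjacent swap)
    print(damerau_levenshtein("nke", "nike"))    # 1 (deletion)
```

Even with an O(|a| x |b|) distance computation per word pair, enumerating and ranking all candidates within edit distance k of a query term grows combinatorially with k, which is why run-time edit-distance search is impractical at web scale.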
In the following sections we describe data generation methods for training, model details followed by experiments and discussion. ## 3 Data Generation 3.1 Spell Error Classes In this section, we describe various error classes based on the patterns we observe in our ecommerce search logs. Broadly, any spell error can be classified as below: Edit: These are induced by performing the following character operations on a correct word: (i) deletion ("nike" → "nke"), (ii) adjacent swap("nike" → "nkie"), (iii) replacement like from neighbouring characters on the keyboard ("nike" → "bike"), (iv) insertion ("nike" → "nioke"). Compounding: These errors are induced by the wrong usage of space ("back pack" vs "backpack"). Phonetic: When users write a query based on their pronunciation. This challenging class of errors (37%) is full of variations due to accents influenced by the regional, colloquial languages of the users ("shart", "sart" → "shirt"). {Edit/Phonetic}+Compounding: This is when edit or phonetic errors exist simultaneously with the compounding errors. For example, "air cooler" → "yercular", "dry fruits" → "drayfrut" Figure 1 shows the distribution of various error classes based on monthly search queries received ![2_image_1.png](2_image_1.png) on our platform. Phonetic errors are highest and can have numerous forms like *shirt* spelt shart, sart etc based on user's regional accent. Contextual spell correction is important too, for example, "pan" in "bol pan blue", needs correction to "pen" but not in "tea pan". ## 3.2 Synthetic Data Generation We generate training data based on the error classes defined in the previous section. From our e-commerce search logs, we extract a seed set of clean (correctly spelt) queries on which we induce all types of errors. The clean queries are selected on the basis of their high volume and query CTR (Click Through Rate on the search result page). Errors are induced at word-level and then subsequently put back in the original query to generate the incorrect-correct training query pairs. Edit-Distance Error: This is done by using one or more of the operations as discussed in detail in the section 3.1. We want frequent errors made by users to have a higher representation in our training data. Hence, we replace the correct word in query with its incorrect form with a probability proportional to its Brill-Moore Error Model(EM) score (Cucerzan and Brill, 2004). Error Model (EM):We first acquire training triplets (intended word, observed word, frequency) from the user query logs (Whitelaw et al., 2009). Given a target correct word w and input word s, we then use this noisy-channel word error model to compute the probability P(s|w) as described by Brill and Moore. Phonetic Error: We leverage the fact that the users do a mental transliteration of the word from their native pronunciation to the English (Suzuki and Gao, 2012; Boyd, 2009). Although we focus on Hindi script here as it is the native language of 57% of the Indian population, this approach generalizes across any language. We first use our multilingual e-commerce product catalogue data that is already translated from English to Hindi and ![2_image_0.png](2_image_0.png) extract the transliterated word-pairs for particular fields like brand names and model names. We next train a character level sequence-to-sequence translation models for the transliteration tasks (English to Hindi and Hindi to English) using Transformer (Vaswani et al., 2017) architecture. 
To generate synthetic phonetic word-level mistakes, we first transliterate the user typed English word into a Hindi (Devanagari 3script) word(s) using the transliteration model described above. We then augment the transliterated Hindi word(s) and generate more noisy Hindi candidates by adding the most commonly occurring Hindi spell mistakes in the Devanagari script. These language-specific mistakes are learned separately from Hindi search queries (user query reformulations in the same user session). These native words are then back transliterated to English. Figure 2 shows the an example of how different misspelt forms of the word "mobile" are obtained via this method. Besides, transliteration we also use Soundex (Knuth, 1973), Metaphone (Atkinson, 2009) to generate phonetic candidates. In only edit-distance based erroneous word generation, the number of incorrect candidates explodes combinatorially as the length of the input word and number of edits increase which prevents generating many potential incorrect user spellings. Also, the random edit distance mistakes can generate a lot of garbage mistakes and negatively impact the model learning. The data generated by our phonetic method thus addresses these problems in a principled manner mimicking the user mistakes. Spell Query Chain Errors: Many a times users type an incorrect spelling and subsequently correct themselves in just the next query they type. We make use of such query chains to get the spell reformulations. To ensure minimal noise we use the LLR and PMI (Jones et al., 2006) measures 3https://en.wikipedia.org/wiki/Devanagari on co-occurring queries along with other checks like CTR of input and target query. We add edit-distance based guardrails to avoid generating drastically different query. Besides edit-distance query pairs, we also obtain *compounding* [("pendrive","pen drive"), ("iphonepro", "iphone pro")] errors here. Note that this method mines the user logs for the entire pair of *(incorrect, correct query)* which is unlike the earlier methods that induce errors on correct queries. Edit/Phonetic+Compounding Error: This method combines the edit/phonetic errors that we described earlier and compounds the words. For example, for a correct bi-gram compound word "ball pen" this method generates "bolpan" from the individual mistakes on unigram tokens "ball" and "pen" which intermediately generates mistakes such as "bol" and "pan" respectively. These tokens are then joined to obtain the final incorrect unigram form "bolpan". We finally use all the correct to incorrect word dictionaries generated above and use them to create noisy/misspelled queries for a given set of clean queries. We also collate a substantial chunk of (correct, correct) query pairs so that the model learns not to overcorrect (Zhu et al., 2019; Movin, 2018) in cases when it is undesired. ## 3.3 Weakly Supervised Data Another set of data is collected from the interactive user click feedback. These are user queries where our spell system intervenes to correct the input and generates a potentially correct query to fetch the search results. We collect the (user query, system corrected query) to form a training pair based on the CTR of the corrected query. Note that this data is be obtained after deploying one version of the spell model. With regular model retraining, this method also helps us to continuously learn on changing query patterns and mistakes. 
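To make the edit-error generation in Section 3.2 concrete, the sketch below applies the four word-level corruption operations (deletion, adjacent swap, keyboard-neighbour replacement, insertion) to a clean query. The keyboard-neighbour map is a small illustrative fragment, operations are chosen uniformly, and the Brill-Moore error-model weighting as well as the phonetic and compounding channels described above are omitted; it is a toy sketch of the idea, not the production pipeline.

```python
import random

# Illustrative (incomplete) QWERTY neighbourhood; a real system would use a full keyboard map.
KEY_NEIGHBOURS = {
    "n": "bhjm", "i": "uok", "k": "jli", "e": "wrd", "a": "qsz", "o": "ipl",
}


def delete_char(word: str, rng: random.Random) -> str:
    i = rng.randrange(len(word))
    return word[:i] + word[i + 1:]


def swap_adjacent(word: str, rng: random.Random) -> str:
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]


def replace_with_neighbour(word: str, rng: random.Random) -> str:
    i = rng.randrange(len(word))
    # Characters missing from the toy map are left unchanged.
    options = KEY_NEIGHBOURS.get(word[i], word[i])
    return word[:i] + rng.choice(options) + word[i + 1:]


def insert_char(word: str, rng: random.Random) -> str:
    i = rng.randrange(len(word) + 1)
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]


EDIT_OPS = [delete_char, swap_adjacent, replace_with_neighbour, insert_char]


def corrupt_query(query: str, rng: random.Random, max_errors: int = 2) -> str:
    """Corrupt up to `max_errors` words of a clean query to form a (noisy, clean) training pair."""
    words = query.split()
    for idx in rng.sample(range(len(words)), k=min(max_errors, len(words))):
        op = rng.choice(EDIT_OPS)
        words[idx] = op(words[idx], rng)
    return " ".join(words)


if __name__ == "__main__":
    rng = random.Random(13)
    clean = "nike running shoes"
    print(corrupt_query(clean, rng), "->", clean)
```

In the full pipeline described above, a corrupted form replaces the correct word with a probability proportional to its Brill-Moore error-model score, so that misspellings users actually make dominate the synthetic training pairs.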
## 4 Model And Training Details We formulate the problem of query correction as a Neural Machine Translation (NMT) problem (Zhou et al., 2017), where for a source sequence (incorrectly spelt query) the task is to predict the spell corrected query. An NMT model learns the conditional probability p(y1*, .., y*T ′ |x1*, .., x*T ) where (x1*, .., x*T ) is the input query and (y1*, .., y*T ′) is the target correct query. Note that the input query length T may differ from the output length T ′due to splitting mistakes (space as a character) by users. We use Transformer (Vaswani et al., 2017) neural architecture that is superior (as shown in results section) due to bidirectional self-attention, multi-head attention and uses best neural tricks of batch normalization, resnets with encoder and decoder layer. Our choice to use Transformer was also because we found pre-trained (on in-domain data) models like BART (et al, 2019) (combining BERT (Devlin et al., 2018) and GPT) was slightly inferior to simple Transformer model which is most likely due to BART/BERT kind of models requiring longer contexts. Hyperparameter and latency details are included in the Appendix. ## 4.1 Curriculum Learning Bengio et al. demonstrate the effectiveness of curriculum learning for neural language modeling tasks. Our results demonstrate that it can also help in the spell correction task. We create a curriculum strategy of first training on noisier but simpler data of synthetic query pairs. This is followed by fine-tuning the model on a smaller number of more complex errors ({edit+phonetic} with compounding). Another curriculum is fine-tuning with weakly supervised query pair data obtained from user click feedback which is also relatively cleaner. After training on noisier synthetic data, as a next step of the curriculum, we fine-tune our model on this cleaner weakly supervised data. We present the improvements brought in by the curriculum learning in the results section as compared to learning a model using all the data at once without needing any human labelled data. ## 4.2 Training And Eval Datasets Our e-commerce catalog has ∼50 heterogeneous categories like clothing, electronics etc. For each of these categories, we glean top-20 percent queries (by frequency and click rates) over one month and treat them as clean queries. This created a seed set of 300K clean queries which we fed into the synthetic data generation pipeline described in section 3.2. We generate ∼270M synthetic query pairs with all kinds of mistakes. In theory, user queries may have multiple misspelled words but our analysis showed that ∼90% queries have mistakes in up to two words per query which we incorporated in our synthetic data generation pipeline too. 
To obtain the weakly supervised data, we first deployed | Model | Training Data | Imp | Reg | |-------------------------------------------------------------|-----------------------------------------------------------|-------|-------| | GoogleSWBS (statistical web search) (Whitelaw et al., 2009) | in-house synthetic data | 33.96 | 82.45 | | Seq2Seq-BiRNN (Zhou et al., 2017) | in-house synthetic data | 34.86 | 91.1 | | NeuSpell-BERT (Jayanthi et al., 2020) | neuspell data | 12.19 | 58.55 | | NeuSpell [in-domain BERT] | neuspell data generation methods applied on in-house data | 25.3 | 83.6 | | NeuSpell [in-domain BERT] | in-house synthetic data | 32.6 | 84.2 | | ReparoS-Base | in-house synthetic data | 37.63 | 90.72 | | ReparoS-C1(fine-tune ReparoS-Base) | ed/ph + comp | 42.81 | 86.63 | | ReparoS-C2(fine-tune ReparoS-C1) | user click feedback (weak supervision) | 46.09 | 91.62 | the first version of the model and logged the user interaction with the model output. Whenever the system spell corrects a given user query and user accepts it by clicking on the products (indicated by query CTR) from the altered query, we consider that as a positive correction. We collected this data for a month and used the same to fine-tune the subsequent versions of our model. This fine tuning process allows for continual learning of the model from user feedback. A more detailed continual learning with knowledge distillation is our future area of work. For evaluation, we obtain 72K human labelled query pairs. This data is created by stratified sampling from unique head/tail queries (queries sorted in descending order of frequencies in a month and split into 2 equal quantiles: head and tail). These are then labeled by the human judges who provide the corrected queries for the input set. We measure the performance separately on 2 types of sub-datasets called Regression and Improvement. The regression set consists of 90% head and 10% tail queries and is hence predominantly clean. This ratio is inverted for the improvement set where 90% are tail (tough queries) and has a lot of spell mistakes. ## 5 Results And Discussion We conduct both offline and online evaluation of our models. Offline evaluation: Table 1 shows the results of multiple baselines on the Improvement and Regression data sets. Row 1 compares the performance of ReparoS with a statistical model GoogleSWBS (Whitelaw et al., 2009) that corrects at a word-level. We improved this model to correct the entire query using beam search (Freitag and Al-Onaizan, 2017) (to avoid combinatorial explosion of candidates) and a language model. In Row 2, Zhou et al. train a bidirectional RNN based sequence to sequence model with encoder/decoder architecture for NMT. Row 3-5 has NeuSpell (Jayanthi et al., 2020) that is bidirectional encoder-only model ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) with sequence labeling task for each input token. Jayanthi et al. provide a toolkit where different models (ELMO, CNN-LSTM, BERT) can be plugged while the last two layers remain constant. We chose their best performing variant with BERT, i.e., Neuspell-BERT for comparison. In Row 3, we evaluate unmodified NeuSpell model on our evaluation sets and observed poor performance. To provide a better domain adaptation, we plugged in our in-house trained BERT model that is pre-trained on e-commerce search queries. Row 4, 5 show the performance of this model. In Row 4, we use the data generation methods suggested by Jayanthi et al. on our data and then train this model. 
Although we see an increase in the numbers but still not better than the other models. Row 5 shows the results when we train this NeuSpell model plugged in with in-domain BERT with our data generation techniques. This also is the best performing configuration among all the NeuSpell experiments. The results also demonstrate the improvements due to our data generation methods as the seed queries/words for Row 4 and 5 were same. Additionally, we observe that seq-to-seq model (RNN, Transformer) is better than sequence labeling one. Another limitation of NeuSpell due to sequence labeling is that it can't handle compounding errors like iphonepro to iphone pro. It's evident that our diverse synthetic data generation techniques are effective and lead to significant improvement even with a simple 1 layer enc/dec (ReparoS-Base) transformer compared to deeper pre-trained models like BERT and NeuSpell. Adding candidates from our novel phonetic transliteration model was beneficial and led to a total absolute gain of 2-3% at query level consistently across the models and 7.5% accuracy improvement at the word candidate level. Effect of curriculum learning: While ReparoSBase itself is good than the competent baselines, ![5_image_0.png](5_image_0.png) our error analysis showed that it still couldn't address the complex edit/phonetic + compounding spell errors. This led us to design a few new curricula to improve. Our first curriculum ReparoSBase-1, was to simply add more of complex edit/phonetic+compounding training samples. This resulted in marginal improvement over ReparoSBase. However, with a different curriculum of only fine-tuning ReparoS-Base on tougher mistakes (model ReparoS-C1) we observe that the performance increases significantly on the Improvement Set where most of the mistakes lie, however, there's a drop in the Regression Set performance when compared to the ReparoS-Base and ReparoSBase-1. On analyzing this further, we observed that this was due to the over-correction problem where the model is aggressively altering correct queries in the regression set. Hence, we change the curriculum and in ReparoS-C2 we find that a further fine-tuning on weakly supervised user feedback data improves upon all the variants significantly on both the data sets. This is intuitively due to relatively more frequent queries in the Regression Set on which receiving implicit user-feedback through clicks is possible at scale. Thus adding this new curriculum helped acheive the best performance. Online evaluation: In production, we adopt a 2-step architecture by adding an ML ranker (Yang, 2022) that does the final candidate selection (from multiple candidates from the ReparoS/GoogleSWBS). This multi-stage setup empirically produced better results than just finetuning the NMT model since top-10 accuracy of ReparoS-C2 is 76.31% on Improvement Set and 98.08% on Regression Set. This helped all the models (and ReparoS more due to their better top-k accuracy) and is removed from results discussion for brevity. Table 2 reports the online performance of ReparoS-Base against its control bucket of GoogleSWBS (our first deployed in-house production model) and later ReparoS-C2 against the control bucket of ReparoS-Base. In both the A/B tests, 20% of the users were randomly assigned to each bucket (control and treatment) and the experiments were run for 3 weeks each to achieve statistical and practical significance. ReparoS-C2 is the currently deployed model serving the entire search traffic of our app for more than 9 months. 
At our scale, 0.01% is quite a significant change in PageCTR. Both ReparoS-Base and ReparoS-C2 show a reduction in click backs while increasing spell system coverage as well as the Search results page CTR. ReparoS-C2 also reduced the Null Searches drastically while ReparoSBase improved the number of cart adds by the users. These results demonstrate the effectiveness of both the versions of ReparoS across multiple user and business metrics. ## 6 Conclusion We presented the generation of large synthetic and weakly supervised labeled data for different types of user spell mistakes including a creative deep learning model to generate the phonetic mistakes. We then presented a sequence-to-sequence deep Transformer based Spell model with curriculum learning on tough spell mistakes and user feedback data that demonstrates superior performance than state-of-the-art statistical and neural spelling correction models. Our solution is currently deployed on an India's e-commerce platform and serves over billions of queries per month from over 450 million users across 100% zip codes of India. With new users with varying literacy rates joining our platform every day, spell correction remains a tough problem to solve. We found that deeper NMT models result in better performance but are impractical (in the current state due to higher latencies) to deploy in production to serve real-time user requests. Hence our future work is to focus on model pruning, quantization, and knowledge distillation and continual learning to reduce latencies for deployment and improve upon our current system. ## 7 Acknowledgements We thank our colleagues Samir Shah, Pooja Kushalappa, Shubham Kumar Rastogi, Adit Mittal, Rikkin Majani for stimulating discussions and critical feedback. We also thank our human annotators for their help in labelling and evaluation. ## References K Atkinson. 2009. Gnu aspell. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. ICML '09, page 41–48, New York, NY, USA. Association for Computing Machinery. Adriane Boyd. 2009. Pronunciation modeling in spelling correction for writers of english as a foreign language. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium, pages 31–36. Eric Brill and Robert C Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th annual meeting on association for computational linguistics, pages 286–293. Association for Computational Linguistics. Silviu Cucerzan and Eric Brill. 2004. Spelling correction as an iterative process that exploits the collective knowledge of web users. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 293–300, Barcelona, Spain. Association for Computational Linguistics. Fred J. Damerau. 1964. A technique for computer detection and correction of spelling errors. *Commun.* ACM, 7(3):171–176. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Jeffrey L. Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48:71–99. Mike Lewis et al. 2019. BART: denoising sequenceto-sequence pre-training for natural language generation, translation, and comprehension. 
*CoRR*, abs/1910.13461. Pravallika Etoori, Manoj Chinnakotla, and Radhika Mamidi. 2018. Automatic spelling correction for resource-scarce languages using deep learning. In Proceedings of ACL 2018, Student Research Workshop, pages 146–152. Markus Freitag and Yaser Al-Onaizan. 2017. Beam search strategies for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 56–60, Vancouver. Association for Computational Linguistics. Jianfeng Gao, Xiaolong Li, Daniel Micol, Chris Quirk, and Xu Sun. 2010. A large scale ranker-based system for search query spelling correction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 358–366, Beijing, China. Coling 2010 Organizing Committee. Wolf Garbe. 2021. Symspell: Spelling correction and fuzzy search: 1 million times faster through symmetric delete spelling correction algorithm. Saša Hasan, Carmen Heger, and Saab Mansour. 2015. Spelling correction of user search queries through statistical machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 451–460. Sai Muralidhar Jayanthi, Danish Pruthi, and Graham Neubig. 2020. NeuSpell: A neural spelling correction toolkit. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online. Association for Computational Linguistics. Rosie Jones, Benjamin Rey, Omid Madani, and Wiley Greiner. 2006. Generating query substitutions. In Proceedings of the 15th International Conference on World Wide Web, WWW '06, page 387–396, New York, NY, USA. Association for Computing Machinery. Mark D. Kernighan, Kenneth W. Church, and William A. Gale. 1990. A spelling correction program based on a noisy channel model. In Proceedings of the 13th Conference on Computational Linguistics - Volume 2, COLING '90, page 205–210, USA. Association for Computational Linguistics. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Opensource toolkit for neural machine translation. In *Proceedings of ACL 2017, System Demonstrations*, pages 67–72, Vancouver, Canada. Association for Computational Linguistics. Donald Knuth. 1973. The art of computer programming: Volume 3, sorting and searching. pages 391–392. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. *arXiv preprint arXiv:1804.10959*. Maria Movin. 2018. Spelling correction in a music entity search engine by learning from historical search queries. Peter Norvig. 2009. Beautiful data. page 234–239. Hisami Suzuki and Jianfeng Gao. 2012. A unified approach to transliteration-based text input with online spelling correction. Kristina Toutanova and Robert C. Moore. 2002. Pronunciation modeling for improved spelling correction. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 144–151, USA. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In *Advances in neural information processing systems*, pages 5998–6008. Casey Whitelaw, Ben Hutchinson, Grace Y. Chung, and Gerard Ellis. 2009. Using the web for language independent spellchecking and autocorrection. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, EMNLP '09, page 890–899, USA. Association for Computational Linguistics. Fan et al Yang. 2022. Spelling correction using phonetics in E-commerce search. In *Proceedings of the* Fifth Workshop on e-Commerce and NLP (ECNLP 5), Dublin, Ireland. Association for Computational Linguistics. Yingbo Zhou, Utkarsh Porwal, and Roberto Konow. 2017. Spelling correction as a foreign language. arXiv preprint arXiv:1705.07371. Canxiang Zhu, Zhiming Chen, Yang Liu, Juan Hu, Shu qiong Sun, Bixiao Cheng, Zhen-dong, and Yang. 2019. Automatic query correction for poi retrieval using deep and statistical collaborative model. - PageCTR: Click through rate of Search Results Page P ageCT R = \#P ages with at least 1 *click* \#*T otal P ages shown* - CartAdd: Number of products per user visit (to the search page) being added by the users to the cart to purchase CartAdd = \#*Search V isits with product added to cart* \#*T otal user visits to search page* - NullSearch: It represents ratio of search result page with no results to total search results pages shown NullSearch = \#*P ages with no results* \#*T otal P ages shown* - Corrections: It represents the number of times spell system changed a user query and has to be looked in conjunction with the other metrics. - Clickback: It measures events where user tells the system to show results for his/her original query by clicking 'show results instead for <original query>'. Hence, a reduction in Clickback and NullSearch denotes improvement. ## 8 Appendix 8.3 A/B Metrics 8.1 Experimental Setup | Model | CPU | GPU | Imp | Reg set | |-----------------------------------------------------------------------------|-------|-------|-------|-----------| | time | time | | | | | NeuSpell[in-domain BERT] (12-encoder layers) | 71 | 20 | 32.6 | 84.2 | | ReparoS-C2 (1-encoder and 1-decoder layer) 6.67 | 7.85 | 46.09 | 91.62 | | | 4-decoder layers) | 16.43 | 18.19 | 51.27 | 92.83 | | ReparoS-C2 (4-encoder and ReparoS-C2 (6-encoder and 6-decoder layers) 56.08 | 27.92 | 52.6 | 93.14 | | Table 3: Accuracy(%) vs latency trade-off as depth of the models increase. Time is reported in ms. 12 layer in-domain BERT to match NeuSpell's inbuilt out-of-domain BERT which also has 12 layers of encoders. For ReparoS, the number of encoder and decoder layers were always kept equal. Due to higher latencies on CPUs (the typical economical serving infrastructure of choice) observed on deeper models, we eventually chose ReparoS-C2 with one layer of encoder/decoder. For model training, we used GPU set-up (NVIDIA Tesla V100-SXM2) while for inferencing in the user path we used CPU set-up (Intel x_86, 64 bit, 2.1GHz, VM with KVM Hypervisor). After hyperparameter tuning, we used 8 attention heads, and 128 hidden dimensions with Adam Optimizer with learning rate set to 1.0 and β1 = 0.8, β2 = 0.998, ϵ = 1e − 8. We used sentencepiece (Devlin et al., 2018) (Kudo, 2018) to generate a subword vocabulary of size 8K. Beam-width was set to 10 in the decoder phase. OpenNMT (Klein et al., 2017) was used for training due to its CTranslate utility 4for faster inference (brought down CPU inference time for 1 layer model to < 7ms from 25ms per query at single concurrency. For fine-tuning we set learning rate to 0.0001 and all the model parameters are updated during the training. 
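For reference, the reported architecture and optimizer hyper-parameters map onto a few lines of standard PyTorch. This is a hedged sketch using the generic `nn.Transformer` module as a stand-in for the actual OpenNMT training configuration; the learning-rate schedule implied by the reported base rate of 1.0 is framework-specific and not reproduced.

```python
import torch
import torch.nn as nn

# Dimensions follow Section 8.1 (8 attention heads, 128 hidden dim) and the
# deployed ReparoS-C2 depth (1 encoder layer, 1 decoder layer).
model = nn.Transformer(
    d_model=128,
    nhead=8,
    num_encoder_layers=1,
    num_decoder_layers=1,
    batch_first=True,
)

# Adam settings as reported: lr=1.0, beta1=0.8, beta2=0.998, eps=1e-8.
# A base lr of 1.0 is normally paired with a warmup/decay schedule,
# which is omitted here.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1.0, betas=(0.8, 0.998), eps=1e-8
)

print(sum(p.numel() for p in model.parameters()), "parameters")
```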
## 8.2 Latency/Accuracy Trade-Offs With Deeper Models

While deeper models improve accuracy, they also take more time at inference, which is critical in live systems like search query spell correction. Table 3 summarizes the comparison of deeper models in terms of inference time on both the GPU (NVIDIA Tesla V100-SXM2) and CPU (Intel x_86, 64 bit, 2.1GHz, VM with KVM Hypervisor) setups, along with the corresponding model metrics. Please note that for NeuSpell, we used the 12-layer in-domain BERT (see Section 8.1).

4https://github.com/OpenNMT/CTranslate2

The remaining A/B metrics from Section 8.3 are computed as:

- Clickback = #clicks on 'show results instead for <original query>' / #Total search requests
- Coverage = #searches where the spell system changed the original query / #Total searches
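Since all of the A/B metrics above are simple ratios over logged counts, they can be computed directly from per-bucket aggregates. The sketch below shows that computation; the field names and example counts are illustrative placeholders, not the production logging schema or real experiment numbers.

```python
from dataclasses import dataclass


@dataclass
class SearchLogCounts:
    """Aggregate counts over one A/B bucket; field names are illustrative."""
    pages_shown: int
    pages_with_click: int
    pages_with_no_results: int
    search_visits: int
    search_visits_with_cart_add: int
    search_requests: int
    clicks_show_original_query: int
    queries_changed_by_spell: int


def ab_metrics(c: SearchLogCounts) -> dict:
    return {
        "PageCTR": c.pages_with_click / c.pages_shown,
        "NullSearch": c.pages_with_no_results / c.pages_shown,
        "CartAdd": c.search_visits_with_cart_add / c.search_visits,
        "Clickback": c.clicks_show_original_query / c.search_requests,
        "Coverage": c.queries_changed_by_spell / c.search_requests,
    }


if __name__ == "__main__":
    # Toy counts for demonstration only.
    bucket = SearchLogCounts(
        pages_shown=1_000_000,
        pages_with_click=412_000,
        pages_with_no_results=18_500,
        search_visits=600_000,
        search_visits_with_cart_add=51_000,
        search_requests=1_200_000,
        clicks_show_original_query=3_400,
        queries_changed_by_spell=96_000,
    )
    for name, value in ab_metrics(bucket).items():
        print(f"{name}: {value:.4f}")
```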
rajakumar-kalarani-etal-2023-lets
"Let's not Quote out of Context": Unified Vision-Language Pretraining for Context Assisted Image Captioning
https://aclanthology.org/2023.acl-industry.67
Well-formed, context-aware image captions and tags in enterprise content such as marketing material are critical to ensuring brand presence and content recall. Manually creating and updating them is non-trivial given the scale and the tedium of the task. We propose a new unified Vision-Language (VL) model based on the One For All (OFA) model, with a focus on context-assisted image captioning, where the caption is generated based on both the image and its context. Our approach aims to overcome the context-independent nature (image and text are treated independently) of existing approaches. We exploit context by pretraining our model with datasets for three tasks: news image captioning, where the news article is the context; contextual visual entailment; and keyword extraction from the context. The second pretraining task is a new VL task, and we construct and release two datasets for it with 1.1M and 2.2K data instances. Our system achieves state-of-the-art results with an improvement of up to 8.34 CIDEr score on the benchmark news image captioning datasets. To the best of our knowledge, ours is the first effort at incorporating contextual information in pretraining models for VL tasks.
# "Let'S Not Quote Out Of Context": Unified Vision-Language Pretraining For Context Assisted Image Captioning Abisek Rajakumar Kalarani and **Pushpak Bhattacharyya** Department of Computer Science and Engineering, IIT Bombay, India {abisekrk, pb}@cse.iitb.ac.in Niyati Chhaya and **Sumit Shekhar** ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) Adobe Research, India {nchhaya, sushekha}@adobe.com ## Abstract Well-formed context aware image captions and tags in enterprise content such as marketing material are critical to ensure their brand presence and content recall. Manual creation and updates to ensure the same is non trivial given the scale and the tedium towards this task. We propose a new unified Vision-Language (VL) model based on the One For All (OFA) model, with a focus on context-assisted image captioning where the caption is generated based on both the image and its context. Our approach aims to overcome the context-independent (image and text are treated independently) nature of the existing approaches. We exploit context by pretraining our model with datasets of three tasks- news image captioning where the news article is the context, contextual visual entailment, and keyword extraction from the context. The second pretraining task is a new VL task, and we construct and release two datasets for the task with 1.1M and 2.2K data instances. Our system achieves state-of-theart results with an improvement of up to 8.34 CIDEr score on the benchmark news image captioning datasets. To the best of our knowledge, ours is the first effort at incorporating contextual information in pretraining the models for the VL tasks. ## 1 Introduction Large enterprises have several teams to create their content for the purpose of marketing, campaigning, or even maintaining a brand presence. Multimodal assets, particularly images are an integral part of this. The scale and the speed at which one needs to create and update content, especially to ensure personalization requires several resources, which in turn acts as a hindrance to the success of the enterprise. Opportunistic updates are critical for success in the current competitive marketing and advertising scenario. Ensuring that every multimodal asset associated with any piece of enterprise content has Figure 1: An example image with its context. Text in blue and green can be inferred from the image and the context respectively. Text in red requires both the image and the context. an appropriate caption and tag is not possible given the scale. While apparently a nuanced aspect of any image, the caption serves as the key information carrier of what the image is all about, in turn ensuring the right recall (ability to find) for the content that contains this image. If an image has a well-formed caption, that captures the context accurately - it also makes the document accessible. Making enterprise content accessible is an important metric for large organizations as they strive to be inclusive. The scale and the quick turn-around time demanded for the content creation cycle results in the lack of correct tagging and captions of images (multimodal assets), in turn an eventual lost of revenue and a target customer base. We propose a method towards automated context-aware captioning of images targeted to reduce the tedium and this critical gap in the enterprise authoring and content creation process. Large-scale pretraining of language models (Devlin et al. 2019; Brown et al. 
2020) has witnessed 695 great success in many downstream NLP tasks. This success has inspired multi modal pretraining for image-text, image-only, and video-text tasks. Currently, building unified models that jointly learn multiple vision-language tasks is gaining a lot of attention and has shown promising results on many VL tasks (Wang et al. 2022, Lu et al. 2020, Cho et al. 2021, Wang et al. 2021). The existing unified Vision-language models focus on tasks like image captioning (Stefanini et al., 2022), visual question answering (Wu et al., 2017), visual entailment (Xie et al., 2019), and image-text retrieval (Wang et al., 2019) that consider the image as a standalone entity. However, images are typically accompanied by text that adds additional meanings which are not utilized in these tasks as shown in Figure 1. Also, the same image can mean different things in different contexts. For example, a picture of a football player being emotional can mean they are celebrating a goal or are disappointed with their shot, depending on the context. Hence it is essential to consider the context of the image for understanding it completely. Traditional image captioning models do not use contextual information. In news image captioning (Biten et al., 2019), the generated caption contains information extracted from both the news article and the image. The news image captioning task is a special subtask of context assisted image captioning task that uses the news article as the contextual information about the image. In our work, the task names- news image captioning and context assisted image captioning are hence used interchangeably. Existing pretrained VL models lack the ability to use contextual information as the pretraining tasks do not contain long text associated with image-text pairs. We propose a new unified VL model based on the One For All (OFA) model, with a focus on using the contextual information associated with the image for real-world problems like news image captioning. As there are no existing VL classification task that uses contextual information, we introduce a new VL task called 'Contextual Visual Entailment'. Visual entailment (Xie et al., 2019) is a refined image-text matching task that checks for the entailment of the caption with the premise image. Visual entailment deals with only the descriptive characteristics of the image. In our contextual visual entailment task, both the image and the context of the image are treated as the premise, and the entailment of the caption is predicted with respect to both. Our contributions are: - A new unified VL model pretrained for keyword extraction, contextual visual entailment, and news image captioning with a focus on using contextual information which has not been explored before. - State-of-the-art results on the GoodNews and NYTimes800k datasets with an improvement of 8.34 CIDEr points on the GoodNews dataset. - A novel VL classification task where the context information surrounding the image is utilized for detecting the entailment of the caption with the image. - Release of two datasets1- a large synthetic dataset consisting of 1.1M Image-Caption pairs with context and a more challenging dataset with manually annotated negative samples consisting of 2.2K instances for the proposed contextual visual entailment task. ## 2 Related Work Image Captioning was initially conceived as a caption retrieval or template filling task. 
It involved matching the query image with a predefined set of captions or identifying the objects in the image to place them in predefined templates (Farhadi et al. 2010; Li et al. 2011; Kulkarni et al. 2013). The advancements made with deep learning based techniques in machine translation inspired the community to adopt similar techniques for image captioning where images were fed to the encoder and the decoder generated caption as a sequence of words (Farhadi et al. 2010; Li et al. 2011; Kulkarni et al. 2013). Attention allows decoder to focus on different parts of the input and hence it was incorporated to generate words focused on important regions of the image in both sequential models (Xu et al. 2015; Lu et al. 2017; Anderson et al. 2018; Huang et al. 2019) and transformer based models (Cornia et al., 2020). In recent years, the models are trained on huge datasets with several millions of image-text pairs for image captioning (Li et al. 2020; Su et al. 2020; Radford et al. 2021). However, in all these works images are treated as standalone entities and their context is not taken into account. News Image Captioning deals with the generation of captions for news images. The news articles 1Code and data are available at: https://github.com/ abisekrk/context-assisted-image-captioning contain the context of the image and they are taken into account during the caption generation process. Biten et al. (2019) propose an encoder-decoder model with attention over both image and news article encodings to generate news image captions. It generates captions with placeholders for named entities and fills those placeholders by choosing named entities from the news article. Chen and Zhuge (2019) model it as a query-based summarization problem where the news image acts as the query and the news article is the source text to be summarized. Tran et al. (2020) use transformers with separate encoders for extracting image features, object features and faces present in the image. The decoder receives the input from all three encoders to generate caption. Liu et al. (2021) use a visual selective layer that learns to align the image features with the text in the news article to generate captions. Yang et al. (2021a) discuss the journalistic guidelines followed while writing news image captions in journals and incorporate them in the generation process. Zhang et al. (2022) use prompt tuning to finetune pretrained models for news image captioning. ## Unified Vision-Language (Vl) Modeling Is a new paradigm that involves creating a unified framework for multiple vision-language tasks, allowing models to be trained on a range of datasets constructed for a range of tasks. ViLBERT (Lu et al., 2019) extends the BERT (Devlin et al., 2019) architecture to work with visual inputs. Lu et al. (2020) propose a multi-task training approach with 12 VL datasets on 4 broad tasks. VL-T5 (Cho et al., 2021) combines multiple VL tasks as text generation tasks using pretrained models for image features. UniT (Hu and Singh, 2021) unifies cross-modal tasks by using a modality specific encoder and a shared decoder. UFO (Wang et al., 2021) proposes to use the same transformer architecture as the encoder for both image and text in VL tasks. UniTAB (Yang et al., 2021b) supports VL tasks with bounding boxes by encoding the text and box output sequences to shared token sequences. OFA (Wang et al., 2022) abstracts all VL tasks into sequence-to-sequence problems. 
Existing unified VL models do not consider the context of the image in their pretraining tasks. Our unified model is pretrained with tasks that include contextual information and hence achieves state-of-the-art results on news image captioning datasets.

## 3 Dataset

We pretrain our model on a large pretraining dataset and evaluate its performance on benchmark datasets for news image captioning.

## 3.1 Pretraining Datasets

We use the Visual News (Liu et al., 2021) and KPTimes (Gallina et al., 2019) datasets for constructing our pretraining datasets. The Visual News dataset was compiled by collecting news articles from four news agencies: The Guardian, BBC, USA Today, and The Washington Post. It only includes articles with high-resolution images and captions between 5 and 31 words long. It is diverse, with differences in properties like average caption length, article length, and distribution of named entities across news agencies. The KPTimes dataset was constructed by crawling over 0.5 million news articles, mainly from The New York Times. The metadata associated with the fields "news_keywords" and "keywords" forms the gold-standard keyphrases. The three pretraining datasets are constructed from these datasets.

News Image Captioning: We removed duplicate captions and news articles without images or captions from the Visual News dataset. The cleaned dataset consists of 1,197,000 data instances with image, news article, and caption. These are split into 1,117,697 instances for training, 40,000 for validation, and 40,000 for test, respectively.

Contextual Visual Entailment is a binary classification problem, so it is required to construct both positive and negative pairings of the image and caption with the context. For the data instances where the caption entails the image and the context, the original image, caption, and context from the training split of the above news image captioning dataset are used (P). We use the following operations to generate the inconsistent pairs in our dataset:

1. Choose a random caption different from the correct caption **(N-I)**.

2. Replace the named entities in the correct caption with named entities from a randomly chosen caption **(N-II)**. For example, the caption 'John Garrison performing in Berlin, April 2015' will be changed to 'Mark Pattinson performing in London, April 2015'.

3. Keep the named entities of the original caption intact but replace the remaining content with a random caption that has the same type and the same number of named entities **(N-III)**. For example, the caption 'John Garrison performing in Berlin, April 2015' will be changed to 'John Garrison waiting in queue for filing tax returns in Berlin, April 2015'.

Named entity recognition is done with SpaCy (Honnibal and Montani, 2017) in our experiments. SpaCy allows the detection of 18 different named entity types. We only use the named entities labeled as 'PERSON', 'FAC', 'ORG', 'GPE', 'LOC', and 'EVENT', which represent a person, building/airport, organization, geopolitical entity, location, and event respectively, as they occur more frequently. The N-I class of negative captions will have different information and different named entities from the original caption. The N-II class will have the same information as the original caption but will contain different named entities. The N-III class of captions will have the same named entities but will convey different information.
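To make the three negative-caption operations concrete, the following is a minimal Python sketch using spaCy. The function names (`make_n1`, `make_n2`, `make_n3`), the entity-matching heuristics, and the choice of spaCy model are our own illustrative assumptions, not the authors' released pipeline.

```python
import random
import spacy

ALLOWED_LABELS = {"PERSON", "FAC", "ORG", "GPE", "LOC", "EVENT"}  # labels used in the paper
nlp = spacy.load("en_core_web_sm")  # any spaCy English model with NER (assumed)

def entities(caption):
    """Return the allowed named entities of a caption as (text, label) pairs."""
    return [(e.text, e.label_) for e in nlp(caption).ents if e.label_ in ALLOWED_LABELS]

def make_n1(captions, idx):
    """N-I: pick a random caption different from the correct one."""
    j = random.choice([k for k in range(len(captions)) if k != idx])
    return captions[j]

def make_n2(caption, donor_caption):
    """N-II: keep the sentence, swap in named entities from a random donor caption."""
    ents, donor = entities(caption), entities(donor_caption)
    out = caption
    for text, label in ents:
        same_type = [t for t, l in donor if l == label]  # donor entity of the same type
        if same_type:
            out = out.replace(text, random.choice(same_type), 1)
    return out

def make_n3(caption, candidate_captions):
    """N-III: find a caption with the same number/types of entities, then
    re-insert the original caption's entities into it."""
    ents = entities(caption)
    signature = sorted(l for _, l in ents)
    for cand in random.sample(candidate_captions, len(candidate_captions)):
        cand_ents = entities(cand)
        if cand != caption and sorted(l for _, l in cand_ents) == signature:
            out = cand
            for (c_text, _), (o_text, _) in zip(sorted(cand_ents, key=lambda e: e[1]),
                                                sorted(ents, key=lambda e: e[1])):
                out = out.replace(c_text, o_text, 1)
            return out
    return None  # no structurally matching caption found
```

In this sketch, N-II swaps entities while keeping the rest of the sentence, and N-III keeps the original entities but borrows another caption's sentence frame, mirroring the two examples above.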
The final dataset has 1,005,925, 55,884, and 55,884 instances in the train, validation, and test splits, respectively. In addition to this synthetically created pretraining data, we also create and release a separate, manually annotated challenging dataset for the task of contextual visual entailment, consisting of 2.2K data instances, which we use for evaluation. The negative captions in this dataset are created manually by changing a word or a small phrase from the original caption, such that its meaning changes significantly without much difference in the sentence structure. For example, 'Supporters marched peacefully during the protest' will be changed to 'Supporters marched violently during the protest'. Negative examples created in this way ensure that models need to learn the relationship between image, caption, and context to identify the entailment correctly. This dataset is used to test the model's knowledge of image-caption entailment at a finer level.

Annotation Details: The annotations were performed by two annotators proficient in English. One is a master's student and the other is a bachelor's student in Computer Science and Engineering. They were provided with examples of negative captions before annotation. The image links, caption, and context were shared in Google Sheets for annotation. The annotators were only asked to select a word or phrase from the caption and replace it with a new word or phrase. The modified captions were exchanged and verified by each other.

## 3.1.1 Keyword Extraction

The dataset for the keyword extraction task is constructed from the KPTimes dataset after removing duplicate news articles. The news article forms the input to the system and the sequence of keywords forms the output. The final dataset has 259,902, 10,000, and 10,000 data instances in the train, validation, and test splits, respectively. The training data from these three datasets are combined to generate the pretraining dataset with 2.3M data instances.

| Dataset | Model | B-4 | MET. | ROUGE | CIDEr | NE Precision | NE Recall |
|---|---|---|---|---|---|---|---|
| GoodNews | GoodNews | 1.86 | 13.75 | 20.46 | 17.57 | 8.23 | 6.06 |
| GoodNews | Transform and Tell | 6.05 | 10.30 | 21.40 | 54.30 | 22.20 | 18.70 |
| GoodNews | Visual News | 6.10 | 8.30 | 21.60 | 55.40 | 22.90 | 19.30 |
| GoodNews | JoGANIC | 6.83 | 11.25 | 23.05 | 61.22 | 26.87 | 22.05 |
| GoodNews | NewsMEP | 8.30 | 12.23 | 23.17 | 63.99 | 23.43 | 23.24 |
| GoodNews | OFA | 6.41 | 10.63 | 23.59 | 67.19 | 23.06 | 19.04 |
| GoodNews | Ours | 7.14 | 11.21 | 24.30 | 72.33 | 24.37 | 20.09 |
| NYTimes800K | Transform and Tell | 6.30 | 10.30 | 21.70 | 54.40 | 24.60 | 22.20 |
| NYTimes800K | Visual News | 6.40 | 8.10 | 21.90 | 56.10 | 24.80 | 22.30 |
| NYTimes800K | JoGANIC | 6.79 | 10.93 | 22.80 | 59.42 | 28.63 | 24.49 |
| NYTimes800K | NewsMEP | 9.57 | 13.02 | 23.62 | 65.85 | 26.61 | 28.57 |
| NYTimes800K | OFA | 6.91 | 10.77 | 22.70 | 61.81 | 27.14 | 22.51 |
| NYTimes800K | Ours | 7.54 | 11.27 | 23.28 | 66.41 | 28.11 | 23.25 |

Table 1: Results on the GoodNews and NYTimes800K test sets (B-4: BLEU-4; MET.: METEOR; NE Precision/Recall: precision and recall of generated named entities).

## 3.2 Benchmark Datasets

The performance of models trained on the pretraining datasets is evaluated on two benchmark datasets: GoodNews (Biten et al., 2019) and NYTimes800K (Tran et al., 2020). We follow the train, validation, and test splits from the original work for both datasets. The GoodNews dataset has 424,000, 18,000, and 23,000 instances in the training, validation, and test splits, respectively.
The NYTimes800K dataset has 763,000 training, 8,000 validation, and 22,000 test instances.

## 4 Our Model

Unified Vision-Language (VL) modeling has shown great promise in multiple VL tasks. Hence, we use a unified model for all three tasks: context assisted image captioning, contextual visual entailment, and keyword extraction. Figure 2 shows an overview of our unified VL pretraining strategy. We use the OFA*Large* (Wang et al., 2022) architecture. OFA is a task- and modality-agnostic model that unifies all vision-language, vision-only, and language-only tasks using a sequence-to-sequence learning framework. We use ResNet152 (He et al., 2016) and VQGAN (Esser et al., 2020) to obtain visual tokens for the given image. The text (context and caption) is tokenized by byte-pair encoding (BPE). A single unified vocabulary is used for both visual and linguistic tokens. Transformers are used as encoders and decoders, and all vision-language tasks are abstracted to sequence-to-sequence conversion tasks with specific instructions created for each task, similar to the OFA pretraining.

The pretraining of our model involves three tasks: news image captioning, contextual visual entailment, and keyword generation. For news image captioning, we convert the image and the context into a sequence of input tokens and generate the caption as a sequence of tokens conditioned on these input tokens. For keyword generation, the news article is tokenized as the input sequence and the keywords are generated by the model as the output sequence. Contextual visual entailment is a classification task, so the input sequence to the model is the image, caption, and context tokens, and the model is trained to generate 'Yes' or 'No' as the output, indicating whether the caption entails the image and context or not. The model is trained to minimize the cross entropy loss. For the input sequence x consisting of visual and text tokens and output y, the loss is given as:

$${\mathcal{L}}=-\sum_{i=1}^{|y|}\log P_{\theta}(y_{i}|y_{<i},x)$$

where $y_i$ is the text token to be predicted and $y_{<i}$ are the tokens predicted so far.

## 5 Experiments

The experimental details for pretraining and finetuning for context assisted image captioning are discussed here.

## 5.1 Pretraining

We used the OFA*Large* architecture for pretraining our model. The model has 472M parameters with 12 encoder and 12 decoder layers. The weights were initialized with the publicly available OFA*Large* checkpoint to retain the knowledge from other VL tasks. The model was pretrained on the 2.3M data instances from the pretraining datasets. All three tasks were abstracted into sequence-to-sequence tasks.

| Model | BLEU-4 | METEOR | ROUGE | CIDEr |
|---|---|---|---|---|
| BLIP-2 + GPT-3 | 2.06 | 8.48 | 13.22 | 17.12 |
| OFA | 6.41 | 10.63 | 23.59 | 67.19 |
| OFA + Captioning + Contextual Visual Entailment | 6.90 | 11.01 | 23.83 | 69.97 |
| OFA + Captioning + Keyword Extraction | 6.69 | 10.81 | 23.44 | 67.83 |
| OFA + Captioning | 6.85 | 10.90 | 23.70 | 68.93 |
| Our Model (Without Context) | 2.24 | 5.34 | 14.45 | 18.04 |
| Our Model + NE | 6.95 | 11.06 | 23.98 | 70.03 |
| Our Model | 7.14 | 11.21 | 24.30 | 72.33 |

Table 2: Ablation study results on the GoodNews dataset. NE denotes fine-tuning done with named entities extracted separately. 'OFA + X' denotes pretraining of OFA done with X task. 'Our Model' refers to the model pretrained on the three tasks with context information.
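Since every task is cast as conditional token generation, the objective above reduces to standard token-level cross-entropy under teacher forcing. The following is a minimal PyTorch-style sketch; PyTorch itself, the function name, and the padding convention are our assumptions (the paper builds on the OFA training setup rather than this code).

```python
import torch
import torch.nn.functional as F

def seq2seq_nll_loss(logits, target_ids, pad_id):
    """Token-level negative log-likelihood for the unified seq2seq model.

    logits:     (batch, target_len, vocab) decoder outputs under teacher forcing
    target_ids: (batch, target_len) gold output tokens (caption / keywords / Yes-No)
    pad_id:     padding token id, ignored in the loss
    """
    vocab = logits.size(-1)
    return F.cross_entropy(
        logits.reshape(-1, vocab),   # (batch * target_len, vocab)
        target_ids.reshape(-1),      # (batch * target_len,)
        ignore_index=pad_id,         # padded positions do not contribute
    )  # mean of -log P(y_i | y_<i, x) over non-pad target tokens
```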
For the instances of news image captioning dataset, the instruction was "*What does* the image describe based on the text <context> ?", where <context> holds the tokens from the news article. For contextual visual entailment, the instruction was "*Is the text <caption> consistent with* the image and the text <context> ?", where <caption> and <context> contain the text tokens from the caption and the context. For the keyword extraction task, the instruction given was "What are the keywords in the article <context>?". ## 5.2 Context Assisted Image Captioning Our unified model pretrained model for the three tasks was finetuned on GoodNews and NYTimes800K datasets for the task of context assisted image captioning. The image resolution was fixed at 384∗384 and the news article was clipped to 512 tokens. The maximum caption length was fixed at 30. We use a batch size of 8 for training. We train the model with early stopping and choose the model that achieves the best CIDEr score on the validation set. The best-performing model is then tested on the unseen test data and the results are summarized in Table 1. ## 5.3 Training Details The experiments were done with the OFA*Large* architecture. For both pretraining and finetuning, the image resolution was fixed at 384 ∗ 384. The input token length was restricted to 512 tokens while the output was restricted to 30 tokens. The dropout ratio was set to 0.1. We used Adam optimizer (Kingma and Ba, 2014) with 0.9 and 0.999 as the β values with ϵ = 1e − 08 and warm-up ratio was set as 0.06. We used an initial learning rate of 1e − 5 with polynomial decay. We used a beam size of 10 during the test inference with temperature 0.98. We also used mixed precision training to speed up the training process. ## 5.4 Frozen Image Encoder + Frozen Llm LLMs and large-scale pretrained VL models have shown great zero-shot performance in many downstream applications. We use BLIP-2 (Li et al., 2023) for getting zero-shot image captions for the GoodNews dataset. These captions are generated without contextual information and are descriptive in nature. These captions are passed to a LLM along with contextual information to generate context assisted image captions. We use text-davinci-003 model in the GPT-3 family (Ouyang et al., 2022). The prompt for generating the caption was "Add contextual information to the caption. Caption: <sample caption> Context: <sample context>". We randomly sampled a caption, and context pair from the training dataset of the GoodNews dataset and used it as an example in the prompt. The contextual captions are predicted for caption and context pair in the test set. The results are discussed in Table 2. ## 5.5 Ablation Study In order to analayse the importance of the three pretraining tasks we used, we pretrained the OFA model using three different subsets of the pretraining tasks. We pretrained the model with only "Captioning and Contextual Visual Entailment" tasks, with only "Captioning and Keyword Extraction" task and with only "Captioning" task and compared their performance with the model trained on all the three tasks. Previous works in news image captioning (Liu et al. 2021; Yang et al. 2021a; Zhang et al. 2022) have shown that extracting named entities from the context and feeding them to the decoder helps generate correct named entities in the caption. Hence, we also try injecting named entities into the prompt while finetuning the model. We use SpaCy for identifying and extracting named entities. 
We update the prompt as "what does the image describe about the names <named entities> based on the text <context>?" during finetuning. We clipped the named entity tokens to 64 and restricted the context to 512 tokens as done in previous experiments. We also perform experiments without using the contextual information with our pretrained model in the traditional image captioning setting, to analyze the usefulness of the contextual information. The results are summarized in Table 2. ## 6 Results And Analysis Our model achieves state-of-the-art results on both GoodNews and NYTimes800K datasets. The OFA model finetuned on benchmark datasets also shows good performance. This shows the ability of OFA to adapt to new tasks and the correctness of our instructions for finetuning. However, it can be seen that due to the lack context information in the pretraining tasks used in OFA, the model doesn't produce substantially better results compared to the current SOTA models. Our model pretrained on the three tasks shows a 5.14 CIDEr score improvement over the OFA model on the GoodNews dataset which is an 8.34 CIDEr score improvement over the current SOTA model. The model also achieves a SOTA result of 66.41 CIDEr score on the NYTimes800K dataset. The average length of news articles in GoodNews and NYTimes800K dataset are 451 and 974 words respectively. The larger article length in NYTimes800K dataset is the reason for the CIDEr scores being closer to the current SOTA as the context length in our experiments is restricted to 512 tokens. We also obtain comparable performance in precision and recall of named entity generation despite not feeding the named entities directly to the model like in the previous works. The BLIP-2 model has shown great promise in zero-shot image caption generation. We use BLIP2 to generate descriptive captions and feed those captions as input to the GPT-3 model along with the context to generate the final context assisted caption. The BLIP-2 + GPT-3 model generates fluent captions but it does not contain the relevant information based on the image features as indicated by the poor performance on evaluation metrics in Table 2. This indicates that it is essential to train with both image and context together. Our pretraining tasks indirectly direct the model to capture named entity information from the context. However, earlier works on news image captioning show that extracting named entities and feeding them directly to the model can help it generate better captions with correct named entities. Our pretrained model showed a slight decrease in performance when named entity information is presented to it in the prompt. This is because the named entity tokens take up valuable space in the 512 tokens allowed for the context, leading to information loss gained from the context. Also, since only 64 tokens are allowed for named entities, not all the important entities in the news article are presented in the prompt and it disadvantages the model in both ways. We also pretrained the OFA model with a subset of the three pretraining tasks to identify the importance of each task in pretraining. The models pretrained with a combination of two tasks and with only the captioning task performed poorly compared to the model pretrained on the three tasks. This shows the importance of training with all three tasks. Between the two two-task pretrained models, the model that used contextual visual entail task performed better, indicating the usefulness of the task we introduced. 
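For reference, the task instructions quoted in Sections 5.1, 5.4, and 5.5 can be assembled with a small helper such as the sketch below. The function name and placeholder handling are ours; tokenization and truncation (for example, clipping the context to 512 tokens and named entities to 64 tokens) are assumed to happen elsewhere.

```python
def build_prompt(task, context="", caption="", article="", named_entities=""):
    """Compose the instruction string for each task, using the templates
    quoted in the paper; placeholder filling is our own convention."""
    templates = {
        "captioning":
            f"What does the image describe based on the text {context} ?",
        "captioning_with_ne":  # finetuning variant with extracted named entities
            f"what does the image describe about the names {named_entities} "
            f"based on the text {context}?",
        "entailment":
            f"Is the text {caption} consistent with the image and the text {context} ?",
        "keywords":
            f"What are the keywords in the article {article}?",
        "gpt3_contextual_caption":  # prompt for the BLIP-2 + GPT-3 baseline
            f"Add contextual information to the caption. "
            f"Caption: {caption} Context: {context}",
    }
    return templates[task]
```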
## 7 Summary And Conclusion In this work, we proposed a new unified VL model that uses contextual information of images that has not been utilized in pretraining before. We introduce a new VL classification task called contextual visual entailment and pretrain a model with three subtasks that uses long text along with image and caption. Our model achieves new state-ofthe-results on benchmark datasets for news image captioning and highlights the importance of using contextual information in pretraining. In the future, we aim to deploy our model to allow context-aware caption generation which could be used in enterprise authoring and many other content creation processes. ## References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In *2018* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6077–6086. Ali Furkan Biten, Lluís Gómez, Marçal Rusiñol, and Dimosthenis Karatzas. 2019. Good news, everyone! context driven entity-aware captioning for news images. *2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pages 12458– 12467. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Jingqiang Chen and Hai Zhuge. 2019. News image captioning based on text summarization using image as query. In *2019 15th International Conference* on Semantics, Knowledge and Grids (SKG), pages 123–126. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In *ECCV*. Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. *ArXiv*, abs/2102.02779. Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. 2020. Meshed-memory transformer for image captioning. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. *ArXiv*, abs/1810.04805. Patrick Esser, Robin Rombach, and Björn Ommer. 2020. Taming transformers for high-resolution image synthesis. *2021 IEEE/CVF Conference on Computer* Vision and Pattern Recognition (CVPR), pages 12868– 12878. Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, J. Hockenmaier, and David Alexander Forsyth. 2010. Every picture tells a story: Generating sentences from images. In *ECCV*. Ygor Gallina, Florian Boudin, and Béatrice Daille. 2019. Kptimes: A large-scale dataset for keyphrase generation on news documents. In *Proceedings of the* 12th International Conference on Natural Language Generation, pages 130–135. Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778. Matthew Honnibal and Ines Montani. 2017. 
spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. Ronghang Hu and Amanpreet Singh. 2021. Unit: Multimodal multitask learning with a unified transformer. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1419–1429. Lun Huang, Wenmin Wang, Jie Chen, and Xiao-Yong Wei. 2019. Attention on attention for image captioning. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 4633–4642. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *CoRR*, abs/1412.6980. Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2013. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2891–2903. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. *ArXiv*, abs/2301.12597. Siming Li, Girish Kulkarni, Tamara L Berg, Alexander C Berg, and Yejin Choi. 2011. Composing simple image descriptions using web-scale n-grams. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 220–228, Portland, Oregon, USA. Association for Computational Linguistics. Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV. Fuxiao Liu, Yinghan Wang, Tianlu Wang, and Vicente Ordonez. 2021. Visual news: Benchmark and challenges in news image captioning. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6761–6771, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. J. Lu, C. Xiong, D. Parikh, and R. Socher. 2017. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In *2017 IEEE Conference on Computer Vision and Pattern Recognition* (CVPR), pages 3242–3250, Los Alamitos, CA, USA. IEEE Computer Society. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Neural Information Processing Systems. Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. *2020* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10434–10443. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. *ArXiv*, abs/2203.02155. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In *ICML*. Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Silvia Cascianelli, Giuseppe Fiameni, and Rita Cucchiara. 2022. From show to tell: A survey on deep learning-based image captioning. 
*IEEE transactions* on pattern analysis and machine intelligence, PP. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. Vl-bert: Pre-training of generic visual-linguistic representations. *ArXiv*, abs/1908.08530. Alasdair Tran, A. Mathews, and Lexing Xie. 2020. Transform and tell: Entity-aware news image captioning. *2020 IEEE/CVF Conference on Computer* Vision and Pattern Recognition (CVPR), pages 13032– 13042. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *ArXiv*, abs/1706.03762. Jianfeng Wang, Xiaowei Hu, Zhe Gan, Zhengyuan Yang, Xiyang Dai, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2021. Ufo: A unified transformer for vision-language representation learning. *ArXiv*, abs/2111.10023. Liwei Wang, Yin Li, Jing Huang, and Svetlana Lazebnik. 2019. Learning two-branch neural networks for image-text matching tasks. *IEEE Transactions on* Pattern Analysis and Machine Intelligence, 41:394– 407. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. Unifying architectures, tasks, and modalities through a simple sequence-tosequence learning framework. In *ICML*. Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony R. Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. *Comput. Vis. Image Underst.*, 163:21–40. Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. *ArXiv*, abs/1901.06706. Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, page 2048–2057. JMLR.org. Xuewen Yang, Svebor Karaman, Joel Tetreault, and Alejandro Jaimes. 2021a. Journalistic guidelines aware news image captioning. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5162–5175, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2021b. Crossing the format boundary of text and boxes: Towards unified vision-language modeling. *ArXiv*, abs/2111.12085. Jingjing Zhang, Shancheng Fang, Zhendong Mao, Zhiwei Zhang, and Yongdong Zhang. 2022. Fine-tuning with multi-modal entity prompts for news image captioning. In *Proceedings of the 30th ACM International Conference on Multimedia*, MM '22, page 4365–4373, New York, NY, USA. Association for Computing Machinery. Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In *The Thirty-Fourth AAAI Conference on* Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 13041–13049. AAAI Press. ## A Appendix A.1 Dataset Details Table 3 provides the summary of the three datasets used for pretraining. 
| | Dataset | Train | Val | Test |
|---|---|---|---|---|
| Pretraining | Keyword Extraction | 259,902 | 10,000 | 10,000 |
| Pretraining | Contextual Visual Entailment | 1,005,925 | 55,884 | 55,884 |
| Pretraining | News Image Captioning | 1,117,697 | 40,000 | 40,000 |
| Benchmark | GoodNews | 424,000 | 18,000 | 23,000 |
| Benchmark | NYTimes800K | 763,000 | 8,000 | 22,000 |

Table 3: Summary of the statistics of the datasets used for pretraining and benchmarking.

| Setting | Model | Acc. | Pre. | Rec. | F1 |
|---|---|---|---|---|---|
| w/o context | 1) CLIP + FNN | 61.30 | 61.61 | 60.00 | 60.79 |
| w/o context | 2) CLIP + Transformer | 60.00 | 59.83 | 60.87 | 60.34 |
| w/o context | 3) Ours | 66.96 | 69.70 | 60.00 | 64.49 |
| w/o image | 4) CLIP + FNN | 59.13 | 61.54 | 48.70 | 54.37 |
| w/o image | 5) CLIP + Transformer | 56.96 | 56.25 | 62.61 | 59.26 |
| w/o image | 6) Ours | 65.22 | 66.67 | 60.87 | 63.64 |
| w/ context | 7) CLIP + FNN | 64.35 | 63.64 | 66.96 | 65.25 |
| w/ context | 8) CLIP + Transformer | 65.65 | 63.64 | **73.04** | 68.02 |
| w/ context | 9) Ours | 73.04 | **79.78** | 61.74 | **69.61** |

Table 4: Overall accuracy, precision, recall, and F1 on the manually annotated contextual visual entailment dataset.

## A.2 Additional Experiments

Our pretrained model achieves state-of-the-art results on the news image captioning task. In addition, it performs very well on the other two pretraining tasks. Contextual visual entailment is a new task introduced by our work, and hence we propose baselines for comparing the results of our model. We compare our model's performance on keyword extraction against standard works.

## A.2.1 Contextual Visual Entailment

We propose two baselines for contextual visual entailment, where the image and text features are extracted from pretrained networks. The features are obtained from a pretrained CLIP (Contrastive Language-Image Pre-training) model. CLIP (Radford et al., 2021) was trained on a large-scale image-text corpus to minimize a contrastive loss such that the text embedding and the image embedding have higher cosine similarity when the text describes the image perfectly and lower similarity when the text incorrectly describes the image.

## CLIP And FNN Model

In our CLIP embedding based models, the representations for image, caption, and context are obtained from a pretrained CLIP model. The CLIP and FNN model uses a simple early fusion strategy in which the image, caption, and context embeddings from CLIP are concatenated and fused with feed-forward neural networks. The three input embeddings are concatenated and passed to a two-layer feed-forward neural network for combining the information. An output layer predicts the entailment label. The experiments are repeated without the context information and then again without the image features as input, to study their impact on entailment detection.

## CLIP And Transformer Model

The fusion of image-text information using transformers has helped achieve good performance on many standard vision-language tasks (Zhou et al. 2020; Chen et al. 2020; Wang et al. 2022). The CLIP and Transformer model uses transformer (Vaswani et al., 2017) encoders with multi-head attention to combine this multimodal information. A transformer layer receives the input features from the image, caption, and context and generates the combined representation for the information. The outputs from the transformer layers are pooled into a single dense layer followed by a classification layer to perform the final binary classification.

| Model | F@10 |
|---|---|
| FirstPhrases | 9.2 |
| MultipartiteRank | 11.2 |
| CopySci | 11 |
| CopyNews | 39.3 |
| Ours | 40.6 |

Table 5: Keyword extraction results on the KPTimes dataset (F@10).

We summarize the results of our experiments on the manually annotated contextual visual entailment data in Table 4 and compare them with the results produced by our pretrained model.
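The CLIP + FNN baseline is simple enough to sketch directly. The code below assumes frozen CLIP embeddings are computed beforehand; the hidden size, dropout, and class layout are our choices rather than values reported in the paper, and the "w/o context" and "w/o image" ablations simply drop the corresponding input from the concatenation.

```python
import torch
import torch.nn as nn

class ClipEarlyFusionClassifier(nn.Module):
    """CLIP + FNN baseline sketch: concatenate frozen CLIP embeddings of the
    image, caption, and context, fuse them with a two-layer feed-forward
    network, and predict the binary entailment label."""

    def __init__(self, clip_dim=512, hidden=512, dropout=0.1):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(3 * clip_dim, hidden),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        self.out = nn.Linear(hidden, 2)  # entails / does not entail

    def forward(self, image_emb, caption_emb, context_emb):
        # Each input: (batch, clip_dim) features from a frozen, pretrained CLIP.
        fused = self.fuse(torch.cat([image_emb, caption_emb, context_emb], dim=-1))
        return self.out(fused)
```

The CLIP + Transformer variant replaces the concatenation-plus-MLP fusion with transformer encoder layers over the three embeddings before pooling and classification.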
## A.2.2 Keyword Extraction

We use our pretrained model to perform keyword extraction as a sequence-to-sequence task where the output is the set of keyword tokens. We use similar hyperparameters to caption generation for keyword generation. We finetune the pretrained model for 5 epochs and report the results on the KPTimes dataset in Table 5.

## A.3 Examples

Figure 3 shows examples from the GoodNews dataset and compares the caption generated by each model.

![11_image_0.png](11_image_0.png)

Figure 3: Some examples from our dataset along with the captions generated by each model. Each example shows the news context, the true caption, and the captions produced by BLIP-2, BLIP-2 + GPT-3, and our model.
kwon-etal-2023-ground
What, When, and How to Ground: Designing User Persona-Aware Conversational Agents for Engaging Dialogue
https://aclanthology.org/2023.acl-industry.68
This paper presents a method for building a personalized open-domain dialogue system to address the WWH (WHAT, WHEN, and HOW) problem for natural response generation in a commercial setting, where personalized dialogue responses are heavily interleaved with casual response turns. The proposed approach involves weighted dataset blending, negative persona information augmentation methods, and the design of personalized conversation datasets to address the challenges of WWH in personalized, open-domain dialogue systems. Our work effectively balances dialogue fluency and tendency to ground, while also introducing a response-type label to improve the controllability and explainability of the grounded responses. The combination of these methods leads to more fluent conversations, as evidenced by subjective human evaluations as well as objective evaluations.
# What, When, And How **To Ground: Designing User Persona-Aware** Conversational Agents For Engaging Dialogue Deuksin Kwon1 Sunwoo Lee1 Ki Hyun Kim1 Seojin Lee1 Taeyoon Kim1 **Eric Davis**1 1 SK Telecom, South Korea {ds.kwon, sunwoo.lois, kimkihyun, skt.kaylee, tae.y.kim, eric.davis}@sk.com ## Abstract This paper presents a method for building a personalized open-domain dialogue system to address the WWH (WHAT, WHEN, and HOW) problem for natural response generation in a commercial setting, where personalized dialogue responses are heavily interleaved with casual response turns. The proposed approach involves weighted dataset blending, negative persona information augmentation methods, and the design of personalized conversation datasets to address the challenges of WWH in personalized, open-domain dialogue systems. Our work effectively balances dialogue fluency and tendency to ground, while also introducing a response-type label to improve the controllability and explainability of the grounded responses. The combination of these methods leads to more fluent conversations, as evidenced by subjective human evaluations as well as objective evaluations. ## 1 Introduction A personalized dialogue (PD) system is capable of generating user-customized responses based on long-term memory about the user's persona, leading to more trustworthy and engaging conversations (Ranjbartabar et al., 2021; Xu et al., 2022). In our study, persona attributes cover comprehensive user-related information, such as personality, behaviors, preferences, and experience. The key to enhanced user engagement in a PD system lies in finding a persona that is contextually relevant and appropriate, on which a model is grounded to generate a natural response. However, as shown in the example in Figure 1, PD systems usually need to select relevant persona attributes from a given subset of N persona attributes, which is usually provided by external memory or retrieved from the user persona pool. Considering the fact that the agent's response is usually annotated with associated oracle persona information ![0_image_0.png](0_image_0.png) in the training dataset, deciding what persona attribute to select in each turn during model inference is a non-trivial problem. (We will refer to this problem as the "WHAT to ground" problem hereafter.) Another aspect to consider in a PD system is that under certain dialogue contexts, it is better not to generate a personalized response given retrieved persona attributes in order to create a more natural interaction (the second response in Figure 1). As shown in the example, it is usually difficult for the retrieval module to determine whether to use persona information for response generation. We also need a model to decide when to ground persona information with a given persona subset for every turn (We will call this the "WHEN to ground" problem hereafter). Given such a challenge, designing a user-based persona-aware PD system capable of generating engaging and human-like personalized responses requires addressing the "WHAT," "WHEN," and "HOW" (WWH) questions: 1) What personal information should be grounded given the conversation context, 2) When to generate responses using personal information, and 3) How to make natural and human-like personalized response. 
1 707 ![1_image_0.png](1_image_0.png) Most previous research on personalized dialogue systems has focused on generating natural responses in ideal personalized conversation settings (Liu et al., 2020; Dong et al., 2022; Xu et al., 2022; Fu et al., 2022), where issues related to heavily interleaving personalized responses with casual dialogue turns are not considered. However, we believe that these are significant problems that need to be addressed in real-world personalized conversational systems. Large-scale Language Models (LLMs) such as GPT-3 have shown outstanding capabilities in various Natural Language Understanding (NLU) tasks and especially, in-context learning (Brown et al., 2020). However, the inherent abilities of LLMs alone are insufficient to effectively address the WWH problems in real-world service environments. Moreover, it is very tricky to generate natural and engaging personalized responses and sophisticatedly control the output of the model in multi-turn/session scenarios, relying solely on prompt engineering. In addressing the research gap and real-world challenges, we propose a method that controls the inclination of models to generate personalized responses. Our technique blends persona-augmented datasets to construct a personalized dialogue system, thus enabling human-like natural conversations. Our approach involves the following steps: 1) We create a Multi-Session Personalized Conversation (MSPC) dataset. This trains the model to ground the provided persona information effectively for a personalized response. 2) We control the model's persona-grounding level by adjusting the blending weights of the conversational datasets. Furthermore, we enrich the dataset with negative samples of persona subsets at the turn level for model fine-tuning. 3) To enhance both generation quality and the controllability and interpretability of persona-grounded generation, we use a turn label. This label indicates whether a turn is personalized or casual and serves as one of the inputs. Ultimately, we build a personalized dialogue system by fine-tuning an 18-billion parameter large language model (LLM). This LLM has a high level of understanding of conversation history, the ability to generate high-quality responses, and the capacity to focus effectively on given inputs, including users' personas. We also propose four grounding type categorizations to allow for analysis of the model's grounding patterns and detailed performance in subjective evaluation using *sensibleness* and *specificity*, which complements the objective evaluation based on *groundedness*, and *fluency*. ## 2 Related Work Since the release of the **PersonaChat** dataset (Zhang et al., 2018), methods to generate personalized responses that are consistent with or grounded on a persona have been extensively studied (Lee et al., 2021; Liu et al., 2020; Xu et al., 2021, 2022). Most of the studies focus on addressing the *WHAT* and HOW challenges by the use of diverse model architectures, modules, and training frameworks (Fu et al., 2022; Dong et al., 2022). With respect to the How, Liu et al. (2020) propose an RL-based approach for generating personalized dialogue with rewards for mutual persona perception. Wu et al. (2019) and Fu et al. (2022) employ variational methods to generate personalized and knowledgeable response generation. Song ![2_image_1.png](2_image_1.png) User Persona Attributes ![2_image_0.png](2_image_0.png) et al. 
(2019) addresses both *WHAT* and HOW by generating persona-grounded responses via CVAE with a selected persona from a memory. Xu et al. (2022) and Bae et al. (2022) also tackle the same problems via a persona retrieval module and a generator module. However, to the best of our knowledge, there has been no work on addressing all three WWH questions in PD system. Therefore, considering the crucial importance of addressing the WWH issues in a commercial system, we propose novel methods to tackle all three WWH questions, which are mission-critical for a commercial system. ## 3 Dataset To develop a PD system that addresses the WWH problems, we construct a Korean Multi-Session Personalized Dialogue dataset, which we refer to as MSPD. This dataset includes an agent that performs several unique roles, setting it apart from other PD datasets. Primarily, the agent is required to remember user persona attributes, including any persona attributes introduced during the conversation. The agent must also produce personalized responses that are both reasonably and timely grounded on the persona. The goal of this dataset is to enable a model to learn the HOW and *WHEN* of grounding. On average, the dataset contains 4 sessions per episode, with each session consisting of 10-12 turns between the user and the agent. This format allows the agent to learn how to sustain a natural conversation flow, both within and between sessions. As illustrated in the red and blue text in Figure 3, we annotate the persona used in responses and user utterances that include personal information the model should remember. Specifically, to address the *WHEN* and HOW problems from a dataset perspective, we cap the number of personalized responses per session at two or fewer, and performed rigorous reviews to ensure the quality and appropriateness of personalized responses within conversations. This approach allows us to construct 13,469 episodes in total. Statistics and other samples from the MSPD dataset can be found in Appendices A and B. Alongside the MSPD, we incorporate a variety of informal dialogue datasets, referred to as D*casual*, to train a more balanced model capable of generating high-quality daily, knowledgebased, empathetic, and personalized conversations. D*casual* consists of a comprehensive collection of approximately 12.5 million utterances. We use carefully-curated Korean dialogue datasets available online1, developed by National Information Society Agency (NIA), as well as crowdsourced conversational datasets, including Korean versions of PersonaChat, **EmpatheticDialogues**, and **Wizard of Wikipedia** (Dinan et al., 2018; Rashkin et al., 2018; Zhang et al., 2018). ## 4 Methodology As shown in Figure 2, we train our model during the training phase to address the *WHAT* and *WHEN* questions using a variety of methods. These include 1https://aihub.or.kr/ different types of negative persona augmentation, dataset blending, and response type generation. During the inference phase, given the dialogue context and a subset of persona attributes, the model is capable of generating suitable personalized responses. These persona subsets are retrieved from individual persona attributes determined by the context of the conversation. Additionally, the model provides an explanation for its decision through response type labels (RTL). Conditioning on the RTL, allows us to explicitly control the generation of a personalized response. 
## 4.1 Persona-Grounded Generation

In this study, every input of the training dataset consists of user demographic information $d$ (e.g., gender, age), a subset of the user persona $\rho^m$, which consists of persona attributes, and the dialogue context $c^m = [u_1, a_1, u_2, a_2, \cdots, u_{m-1}, a_{m-1}, u_m]$, where $u$ and $a$ refer to the user and agent, respectively. The target response $y^m = [y^m_1, \cdots, y^m_\ell]$ corresponds to the $m$-th agent response $a_m$. Given the input in the format $(d, \rho^m, c^m)$, we optimize the model with a Negative Log-Likelihood (NLL) loss over the conditional probability of the personalized response $y^m$, formulated as:

$$P(y^{m}|d,\rho^{m},c^{m})=\prod_{t=1}^{\ell}P(y_{t}^{m}|d,\rho^{m},c^{m},y_{<t}^{m})\tag{1}$$

$$\mathcal{L}_{NLL}=-\sum_{t=1}^{\ell}\log P(y_{t}^{m}|d,\rho^{m},c^{m},y_{<t}^{m})\tag{2}$$

where $\ell$ is the length of the target response.

## 4.2 Dataset Blending

Blending a variety of conversational datasets has been shown to improve the diversity, empathy, and knowledge of a dialogue system, leading to more natural and engaging conversations (Smith et al., 2020). By blending the MSPD, which is tailored for personalized conversations, with various types of casual dialogue datasets, D*casual*, the model becomes more balanced and adept at cohesive and natural conversations. We define a data instance as (c, r), where c and r are the dialogue context and target response, respectively, as described in Section 4.1. We blend datasets by instance according to blending weights for each dataset. In particular, in order to finely control the WWH problems with the blending weights (w), the MSPD dataset is divided into the agent's personalized responses (DMSPD-PR) and non-personalized responses (DMSPD-NPR) (e.g., the agent's red and black colored responses in Figure 3, respectively). The final training dataset is assembled by over-sampling or under-sampling individual datasets. The training data size of each individual dataset is determined by the weighted number of data instances for that dataset, defined by

$$\|\mathcal{D}_{i\,(train)}\|=\frac{w_{i}}{\sum_{j=1}^{N}w_{j}}\times\|\mathcal{D}\|\tag{3}$$

where D = {D*casual*1 , · · · , D*casual*k , DMSPD-PR, DMSPD-NPR} is the set of N conversational datasets and Di is the i-th dataset in D.

## 4.3 Control of WHEN & WHAT by Negative Samples

Control of *WHEN* To address the *WHEN* problem, it is important to control a model's propensity to generate a persona-grounded response. Given a persona, an agent must generate personalized responses at the right time to create coherent and natural conversations. Generating persona-grounded responses too frequently leads to unnatural conversations. On the other hand, a model that generates personalized responses too infrequently does not sufficiently enhance a user's engagement with the agent. In particular, in situations where a persona subset is retrieved by a retrieval model at each turn, the model should still be able to generate a casual response instead of a personalized response, resulting in a more natural flow. In order to learn this natural flow, we intentionally include a persona subset consisting of all contextually irrelevant persona attributes in the input for non-personalized responses. We call this negative persona subset augmentation in our study. This augmentation "suppresses" the model's inclination to ground too frequently.
However, too much augmentation can hinder the model's ability to ground, so we perform the negative persona subset augmentation only for data in DMSPD-NPR, not all casual datasets D*casual*. Control of *WHAT* When a model generates a persona-grounded response, it needs to determine the *WHAT*, i.e., the specific persona attribute on which to base the response. By providing both the ground-truth persona attributes, ρpos, which are relevant to the response, and "negative" persona attributes, ρneg1*, ..., ρ*negk−1 , which are not relevant to the target response, the model learns to select the appropriate persona attribute(s) from multiple options given the current dialogue context. We refer to the process of adding multiple negative persona attributes to a ground truth persona as negative persona attribute augmentation. Finally, we vary the subset of the persona ρ in (1) for negative persona augmentation depending on the response type: $\rho=\left\{\begin{array}{c}\rho_{\rm mpr}\mbox{for non-personalized response}\in\mathcal{D}_{\rm MSPD-NP}\\ \rho_{\rm pr}\mbox{for personalized response}\in\mathcal{D}_{\rm MSPD-PR}\\ \rho_{c}\mbox{for casual response}\in\mathcal{D}_{\rm casual}\end{array}\right.$ , where ρnpr = {ρneg1 , · · · , ρnegk}, ρpr = {ρpos, ρneg1 , · · · , ρnegk−1}, and ρc = ϕ. ## 4.4 Controllability & Explainability Via Response Type Label Controllability In a commercial setting, it is often necessary to determine whether to generate a personalized response based on business logic. For instance, this might include deciding when the agent should proactively send a message to users. We can exert explicit control over the model's decision regarding the *WHEN* by employing Response Type Labels (RTL), denoted as <RTL>. First, we train the model to generate both a response and corresponding RTL token: P(<RTL>, y|*d, ρ, c*) in (1). We have pre-defined special tokens <PRTL> for personalized response type labels and <CRTL> for casual response type labels. Then, at inference time, we can insert the RTL to generate a response that corresponds to the response type: y ∼ Pθ(·|*d, ρ, c,* <PRTL>) or y ∼ Pθ(·|*d, ρ, c,* <CRTL>). Explainability Error analysis is a crucial element in commercial systems for swift debugging and resolution of issues. However, this process can often be labor-intensive, typically involving a manual review of log data to evaluate the quality and appropriacy of generated personalized responses. Therefore, besides enhancing controllability, we also employ the Response Type Label (RTL) to improve the explainability of the model's generated responses. In this regard, the level of explainability provided by the RTL facilitates easier and more efficient error analysis, leading to improved service operation. ## 5 Experiments 5.1 Experimental Setup To validate the efficacy of our proposed methods in building a controllable Personalized Dialogue (PD) system that addresses the WWH problems, we compare the performance of several models. These are enhanced with fine-tuned baseline models, such as dataset blending and negative sampling methods. Additionally, by comparing models trained with different blending weights, we evaluate the impact of the blending weight on the model's grounding propensity and fluency. The baseline models are all derived from our in-house 18B parameter pretrained language model, which shares the same architecture as GPT-3 (Brown et al., 2020). All experiments are conducted on SKT's proprietary supercomputer, Titan, equipped with NVIDIA A100 SXM4 80GB GPUs. 
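Concretely, the blended, persona-augmented training data described in Sections 4.2-4.4 can be assembled along the lines of the sketch below. The function names, the sampling scheme, and the use of k = 5 persona attributes (matching "# Attribute" in Table 1) are our assumptions and not the authors' released code.

```python
import random

PRTL, CRTL = "<PRTL>", "<CRTL>"  # response-type label tokens (Section 4.4)

def persona_subset(response_type, positive_attr, persona_pool, k=5, rng=random):
    """Persona subset fed with one training turn (Section 4.3).
    personalized      -> gold attribute + (k - 1) negatives  (controls WHAT)
    non_personalized  -> k contextually irrelevant negatives (controls WHEN)
    casual            -> empty subset
    """
    negatives = [a for a in persona_pool if a != positive_attr]
    if response_type == "personalized":
        subset = [positive_attr] + rng.sample(negatives, k - 1)
    elif response_type == "non_personalized":
        subset = rng.sample(negatives, k)
    else:
        subset = []
    rng.shuffle(subset)
    return subset

def blend(datasets, weights, total_size, rng=random):
    """Weighted blending of Eq. (3): dataset i contributes w_i / sum_j w_j of
    the final training set, via under-sampling or over-sampling with replacement."""
    z = sum(weights.values())
    blended = []
    for name, data in datasets.items():
        n = round(weights[name] / z * total_size)
        blended += rng.sample(data, n) if n <= len(data) else rng.choices(data, k=n)
    rng.shuffle(blended)
    return blended

def rtl_target(response_type, response):
    """Training target: the RTL token followed by the response (Section 4.4)."""
    label = PRTL if response_type == "personalized" else CRTL
    return f"{label} {response}"
```

At inference, forcing `<PRTL>` or `<CRTL>` as the first decoded token gives the explicit control over personalized versus casual responses described above.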
## 5.2 Evaluation Objective Evaluation We use perplexity (PPL) to measure the fluency of the responses generated by the model. In addition, the F1 score between the persona attributes and the generated response acts as a proxy to evaluate the model's ability to ground. We also calculate the P-coverage score, which measures how well the user persona is reflected in the generated responses (Song et al., 2019). Subjective Evaluation We complement objective evaluation metrics with subjective human evaluation at both the session and turn levels, specifically employing the Sensibleness and Specificity (SS) score rated as either 0 or 1 at the turn level (Adiwardana et al., 2020). Particularly, to analyze the pattern and quality of grounded responses at the turn level, we categorize them according to our proposed four grounding types, which are as follows. First, we assess whether the agent's response, y, is personalized. Second, we categorize y based on two criteria: **grounding level** and **consistency**. Under the **grounding level**, we have two subcategories: 1) *Hard Grounding*, where there's a direct and explicit association between y and the persona attribute, ρpos, characterized by high expressive similarity. 2) *Soft Grounding*, where there's an indirect and implicit association between y and ρpos, marked by low expressive similarity. Under the **consistency** category, we have two subcategories: 1) *Consistent Grounding*, where there's consistency between y and the given ρpos. | Model ID | Method | Dataset | # Attribute | Fluency | Groundness | | |------------|--------------------------------------------|----------------------------|---------------|-----------|--------------|-------| | PPL | F1 | P-Cover | | | | | | Model1 | Base (Positive Only) | MSPDPR + Dcasual | 1 | 11.4 | 0.28 | 0.12 | | Model2 | + Negative Persona Attributes Augmentation | MSPDPR + Dcasual | 5 | 10.97 | 0.15 | 0.07 | | Model3 | + Negative Persona Subset Augmentation | MSPDPR + MSPDNPR + Dcasual | 5 | 9.37 | 0.1 | 0.05 | | Model4 | + RTL Generation | MSPDPR + MSPDNPR + Dcasual | 5 | 8.88 | 0.1 | 0.046 | Table 1: The Results of Objective Evaluation | Model | Blending Weight | Evaluation | | | | | |--------------------------------------|-------------------|--------------|------|------|-------|------| | Dcasual MSPDPR MSPDNPR F1 | P-Cover | PPL | | | | | | 0.94 | 0.5 | 0.1 | 0.14 | 0.06 | 10.46 | | | 0.92 | 0.5 | 0.3 | 0.12 | 0.05 | 10.04 | | | (Negative Persona Subset Aug. Model3 | | | | | | | | + 5 Negative Persona Attribute Aug.) | 0.90 | 0.5 | 0.5 | 0.11 | 0.05 | 9.91 | | 0.87 | 0.5 | 0.8 | 0.1 | 0.05 | 9.33 | | 2) *Inconsistent Grounding*, where there's an inconsistency between y and the given ρpos. ## 5.3 Results 5.3.1 Effect Of Negative Persona Attributes Table 1 illustrates the impact of introducing negative persona attributes on persona-grounded response generation. *Model*1 trained with only one given positive persona attribute shows the highest F1 and PPL scores of 11.4 and 0.28, respectively. On the other hand, *Model*2 trained with negative persona attributes has a PPL of 10.97 and an F1 score of 10.15, which is slightly lower than Model1. Despite the decrease in grounding frequency, the model demonstrates improved response generation by reasonably selecting an appropriate persona attribute given the dialogue context. We hypothesize that the PPL increases because the model learns to distinguish the most suitable persona among several persona attributes in a given context. 
Furthermore, despite the reduced inclination to ground, we observe that the model can still generate high-quality personalized responses at every turn.

## 5.3.2 Effect Of Negative Persona Subset

Table 1 illustrates the effectiveness of the negative persona subset in controlling the *WHEN* problem. Through the application of the negative persona subset, Model3 learns to refrain from generating personalized responses when the persona attributes are not appropriate for the given context. In Table 1, Model3 demonstrates a decrease in persona grounding and a significant increase in fluency compared to Model2, as indicated by the lower PPL, F1, and P-Cover scores (9.37, 0.1, and 0.05, respectively). We believe the key reason for this enhanced fluency is that the model generates more frequent and natural casual responses to non-personalized turns in the test set, without the need to ground on irrelevant persona subsets.

## 5.3.3 Effects Of Blending Datasets: Trade-Off Between Model Fluency And Grounding

As shown in Table 2, there is a trade-off between the model's fluency and tendency to ground. As the weight of the MSPDNPR dataset with negative persona augmentations increases, the F1 score decreases from 0.14 to 0.1, and the P-Cover score falls from 0.06 to 0.05. Conversely, the PPL decreases from 10.46 to 9.33. This means that an increase in the number of persona-augmented negative samples makes the model ground less frequently, leading to a more natural conversation flow with better quality responses. Achieving natural and engaging conversations requires careful consideration of the trade-off between the model's inclination to ground and response fluency. To control the WWH balance, we can adjust the blending weights for datasets with different persona augmentations and select appropriate values for PPL and F1 scores. We set an F1 score of 0.1 as the minimum threshold for the model's grounding tendency, as we have consistently observed that models with F1 scores below 0.1 seldom attempt grounding in conversations. This approach ensures that optimal PD systems maintain a balance between a sufficient quantity of grounded responses and a high fluency score.

## 5.3.4 Effect Of RTL Generation: Enhanced Explainability And Fluency

As can be seen in Table 1, based on the F1 score and P-Cover, Model4, trained to generate both RTL and personalized responses, demonstrates little difference in tendency to ground when compared to Model3. On the other hand, we found that the PPL score decreased to 8.88. This result is consistent with Kim et al. (2022)'s research, which showed that the quality of generation was enhanced when information related to the target response was generated simultaneously.

| Model | Session & Turn Evaluation | Hard Grounding, Consistent | Hard Grounding, Inconsistent | Soft Grounding, Consistent | Soft Grounding, Inconsistent | Sub Total | Non-personalized | Total |
|-------|---------------------------|----------------------------|------------------------------|----------------------------|------------------------------|-----------|------------------|-------|
| Model3 | 0.885 / 0.935 / 0.848 | 0.14 (23/162) | N/A (0/0) | 0.23 (9/40) | 1.0 (1/1) | 0.16 (33/203) | 0.03 (10/297) | 0.09 (43/500) |
| Model4 | 0.885 / 0.939 / 0.875 | 0.13 (16/125) | N/A (0/0) | 0.17 (6/35) | N/A (0/0) | 0.13 (22/160) | 0.03 (11/383) | 0.06 (33/543) |

Table 3: Results of Subjective Evaluation. 1) Session & Turn level Evaluation and 2) Grounding Evaluation (turn-level): the ratio of the count of bad sensible responses to the count of each grounding type described in 5.2. A bad sensible response means that the response scored a 0 on the "sensible" evaluation.
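To make the quantities reported in Table 3 explicit, the sketch below computes the per-grounding-type bad-sensible ratios from turn-level annotations; the field names are illustrative assumptions, not the actual annotation schema.

```python
from collections import defaultdict

def bad_sensible_ratios(annotated_turns):
    """annotated_turns: list of dicts with assumed keys 'grounding_type'
    (e.g. 'hard_consistent', 'soft_consistent', 'non_personalized') and
    'sensible' (1 = sensible, 0 = bad sensible).
    Returns the per-type ratio of bad-sensible responses, as in Table 3."""
    bad, total = defaultdict(int), defaultdict(int)
    for turn in annotated_turns:
        g = turn["grounding_type"]
        total[g] += 1
        if turn["sensible"] == 0:
            bad[g] += 1
    return {g: (bad[g] / total[g], f"{bad[g]}/{total[g]}") for g in total}
```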
We also evaluated explainability by analyzing whether the generated Response Type Labels (RTL) accurately reflect the model's decisions on persona grounding. For this purpose, we sampled 90 generated responses for each response type. The accuracy of the generated RTL for the casual and the personalized response type was 96.7% and 98.8%, respectively. This confirms that generating the RTL provides a reliable explanation for the model's decision on the *WHEN* problem.

## 5.3.5 Subjective Grounding Evaluation

The high average (over 0.88) scores for both turn and session levels in Table 3 demonstrate that models trained on the high-quality MSPD dataset can generate appropriate responses. In the grounding evaluation, the vast majority of both hard and soft grounding cases demonstrated persona-consistent results. Both models exhibited nearly four times as many hard grounding instances as soft groundings, and they had a lower rate of "bad-sensible" responses. This suggests that the models are strongly inclined to ground persona information in responses in a manner that is both natural and explicit, given the context. Upon closer examination of "bad-sensible" instances of hard grounding, we found that as the models concentrate more on grounding the persona, responses can sometimes become unnatural within the given context. However, the proportion of "bad-sensible" grounding responses was in the 10% range, confirming that the model generally generates high-quality personalized responses. The RTL generation model (Model4) shows a lower inclination to ground, yet it had a better bad-sensible ratio. Therefore, in accordance with the objective evaluation result, we can confirm that generating both the response and the RTL can have a positive effect on fluency, even though there is no significant improvement in terms of session evaluation.

## 5.3.6 Correlation Between Objective And Subjective Evaluations

We confirmed a positive correlation between fluency, as measured by PPL, and human sensibleness judgment. Model4 exhibited a decrease of 0.49 in PPL compared to Model3, indicating improved fluency in Table 1. While session evaluation scores showed no significant differences between the models in Table 3, turn-level grounding evaluation revealed a lower bad-sensibleness ratio for personalized/non-personalized turns (0.13 and 0.03, respectively), confirming the enhanced sensibleness of Model4's responses. We also found a positive correlation between subjective evaluation (i.e., the amount of grounded generation) and the P-Coverage metric used to assess grounding propensity. In Table 1, Model4 exhibited a slight decrease in P-Coverage compared to Model3. This corresponds to the reduced number (approximately 40) of personalized turns generated by Model4 in Table 3, reflecting an actual decrease in the model's grounding propensity. Consequently, considering the cost of subjective evaluation, objective assessment appears feasible for accurately evaluating the model's fluency and grounding tendencies in real-world service operations.

## Conclusion

We proposed a method to build a personalized open-domain dialogue system that addresses the WWH problem for natural and engaging conversation through weighted dataset blending (*WHEN*), negative persona subsets (*WHEN*), negative persona attributes (*WHAT*), and the creation of highly curated personalized conversation datasets (*HOW*).
We also demonstrate that generating a response type label (RTL) enhances both the controllability and explainability of model decisions about the WHEN; this is crucial in commercial service. Experimental results show the effectiveness of our proposed methods in addressing and controlling the WWH problem, as seen in both subjective and objective evaluations. ## Acknowledgements We would like to express our sincere gratitude to the members of SK Telecom and A.Tech for their dedicated support throughout this project. Special thanks are extended to the members of the Foundation Modeling team for their technical assistance, meaningful discussions, and contributions to improving the model's performance, training, and deployment. Additionally, we would like to thank the linguists from the Dialogue PO team for their invaluable contributions in generating and evaluating high-quality datasets for model training and improvement. ## References Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. *arXiv preprint arXiv:2001.09977*. Sanghwan Bae, Donghyun Kwak, Soyoung Kang, Min Young Lee, Sungdong Kim, Yuin Jeong, Hyeri Kim, Sang-Woo Lee, Woomyoung Park, and Nako Sung. 2022. Keep me updated! memory management in long-term conversations. arXiv preprint arXiv:2210.08750. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. *arXiv preprint arXiv:1811.01241*. Wenhan Dong, Shi Feng, Daling Wang, and Yifei Zhang. 2022. I know you better: User profile aware personalized dialogue generation. In Advanced Data Mining and Applications: 17th International Conference, ADMA 2021, Sydney, NSW, Australia, February 2–4, 2022, Proceedings, Part II, pages 192–205. Springer. Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, and Rui Yan. 2022. There are a thousand hamlets in a thousand people's eyes: Enhancing knowledge-grounded dialogue with personal memory. arXiv preprint arXiv:2204.02624. Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, and Maarten Sap. 2022. Prosocialdialog: A prosocial backbone for conversational agents. arXiv preprint arXiv:2205.12688. Jing Yang Lee, Kong Aik Lee, and Woon Seng Gan. 2021. Generating personalized dialogue via multi-task meta-learning. *arXiv preprint* arXiv:2108.03377. Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, and Dongmei Zhang. 2020. You impress me: Dialogue generation via mutual persona perception. *arXiv preprint arXiv:2004.05388*. Hedieh Ranjbartabar, Deborah Richards, Ayse Aysin Bilgin, and Cat Kutay. 2021. Do you mind if i ask? addressing the cold start problem in personalised relational agent conversation. In *Proceedings of the* 21st ACM International Conference on Intelligent Virtual Agents, pages 167–174. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic opendomain conversation models: A new benchmark and dataset. *arXiv preprint arXiv:1811.00207*. Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. 
Can you put it all together: Evaluating conversational agents' ability to blend skills. *arXiv preprint arXiv:2004.08449*. Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses. arXiv preprint arXiv:1905.12188. Bowen Wu, MengYuan Li, Zongsheng Wang, Yifu Chen, Derek Wong, Qihang Feng, Junhong Huang, and Baoxun Wang. 2019. Guiding variational response generator to exploit persona. arXiv preprint arXiv:1911.02390. Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. *arXiv preprint arXiv:2107.07567*. Xinchao Xu, Zhibin Gou, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang, and Shihang Wang. 2022. Long time no see! open-domain conversation with long-term persona memory. *arXiv preprint* arXiv:2203.05797. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? *arXiv preprint arXiv:1801.07243*. ## A Details Of Mspd Dataset A.1 Statistics Of Mspd | Types # Episodes | 13,469 | |---------------------------------------------|----------| | # Sessions | 53,880 | | # Utterances | 601,062 | | Avg. # turns per session | 11.15 | | Avg. # personalized response per session | 1.90 | | Avg. # user persona per episode | 7.18 | | Avg. # newly aggregated persona per episode | 2.18 | | Avg. length of user utterances | 33.72 | | Avg. length of agent response | 28.10 | Table 4: Statistics of the MSPD Dataset ## A.2 Model Training Settings For the experiments in our study, we fine-tuned an 18B parameter model with the same architecture as GPT-3 (Brown et al., 2020), but with 40 layers, a hidden size of 6144, and 48 attention heads. The model is trained for a single epoch with a micro batch size of 8, using a learning rate of 1.0e-05. To prevent overfitting, a dropout rate of 0.1 and a weight decay of 1.0e-1 are employed. The input sequence length is 1024. The models in Table 1 1 are trained with the blending weight set to 0.85 for D*casual* datasets, 0.7 for MSPDP R, and 0.8 for the MSPD*NP R* dataset. ## B Examples B.1 Example Of Mspd Dataset | User Persona Attributes [Age: 20's / Gender: Female] 𝜌1: 넷플릭스를 즐겨봐요. (I enjoy Netflix) / 𝜌2: 러닝을 좋아해요. (I love running) / 𝜌3: 남자 친구가 있어요. (I have a boy friend) / 𝜌4: 엔지니어로 일하고 있어요. (I work as an engineer) / 𝜌5: 나는 감자탕을 좋아해요. (I like Gamjatang) Session1 Speaker Conversation Grounded persona New persona User 아 일이 너무 많다 ㅠㅠ Ah, I have so much work to do. :( Agent 헉, 그렇게 바빠요? Oh no, are you really busy? 𝜌6: 일이 바빠서 자주 야근을 해요 User 응. 아 오늘도 야근각.. 지겨워 죽겠어. I'm busy with work, so I often work Yeah, I am. I have to work overtime again today... I'm getting tired of it. overtime Agent 힘들어서 어떡해요 ㅠㅠ 식사 제대로 챙겨 드세요. I am so sorry to hear that :( Just make sure you're eating enough, okay? User 응~ 근데 뭘 먹어야 할지도 모르겠어.. yeah ~ but I don't even know what to eat... Agent 그럼 감자탕 어때요? What about Gamjatang? 𝜌5 User 그럴까? 귀찮긴 하지만.. Should I? It's a bit of a hassle... Agent 이럴때 일수록 잘 챙겨 먹어야 해요. ㅠㅠ Especially when you're stressed, you should take care of yourself and eat well :( User 아 집에가서 넷플릭스나 봤으면 좋겠다. Honestly, all I want to do is go home and watch Netflix 𝜌1 Agent 그러게요. ㅠ 진짜 휴식이 필요하겠어요. I know, right? :( You definitely need some rest. Session2 (one day later) Speaker Conversation Grounded persona New persona User 안녕! 벌써 점심시간이네. Hey there! It's already lunchtime. Agent 안녕하세요~ 점심은 드셨어요? 오늘도 감자탕 드신건가요? >< Hello~ Have you had lunch yet? 
Did you have Gamjatang again today? :) 𝜌5 User ㅋㅋ 그럼 ~ 오늘도 나의 최애 감자탕을 먹었지. Haha, yeah~ I had my favorite Gamjatang again today. Agent ㅎㅎㅎ 회사 근처에 감자탕 맛집이 있나봐요! Hahaha, there must be a great Gamjatang restaurant near your office! User 응응 매일 먹어도 안질려 ㅎㅎ Yup, even if I eat it every day, I never get tired of it. haha Agent 얼마나 맛있으면!! 부러워요! It must be so delicious!! I'm jealous! User 아웅 일하기 싫어 죽겠네 Ugh, I really don't want to work. Agent 직장 생활 너무 힘든 거 같아요 ㅠㅠ 오늘도 야근해요? Working life seems really hard. :( Are you working overtime today too? 𝜌6 User 아니 오늘은 꼭 정시에 퇴근할거야. No, I'm definitely leaving on time today. Agent 오오 드디어!! Oh wow!! User 오늘은 꼭 일찍가서 청소랑 세탁해야지 ㅠ I have to go home early today and do some cleaning and laundry. :( Agent 앗 그럼 쉬는 게 아니잖아요. ㅠㅠㅠ 누구 도와줄 사람 없나요? Oh, but that's not really a break, is it? :( Do you have anyone who can help you? User 난 혼자 살기 때문에 별수없어.. ㅠㅠ 집안일이 밀렸거든 No, I live alone, so I don't have a choice. I have housework piled up. 𝜌7: 나는 혼자 살고 있어요. I live alone Agent 크흡 고생 많아요 ㅠㅠ 오늘은 꼭 빠른 퇴근 하시길!! Aww, that sounds tough. :( Well, I hope you can leave work on time today!! | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Figure 4: A sample of a multi-session conversation in the MSPD | ![10_image_0.png](10_image_0.png) | Conversation | Grounding Type Sensible | Specificity | Retrieved Persona Attribute | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|-------------------------------| | 캘리그라피를 배우기 시작했어요. (I started learning calligraphy.) 진짜 친한 친구 한 두명 정도랑만 만나요. (I only see one or two really close friends.) 비서로 일하고 있어요. (I am working as a secretary.) 셜록홈즈 팬이에요. (I'm a Sherlock Holmes fan.) 영화는 닫힌 결말을 좋아해요. (Movies like closed endings.) | | | | | U: 오늘 내가 쓴 글씨 친구한테 보여줬어 (I showed my friend what I wrote today) | Hard Grounding +Consistent | 1 | 1 | | A: 오˜ 캘리그라피 배운다고 했지? 어땠어? 잘 썼대? (Oh˜ You said you were learning calligraphy, right? how was it? Did you write well?) | 캘리그라피를 배우기 시작했어요. (I started learning calligraphy.) 진짜 친한 친구 한 두명 정도랑만 만나요. (I only see one or two really close friends.) 비서로 일하고 있어요. (I am working as a secretary.) 셜록홈즈 팬이에요. (I'm a Sherlock Holmes fan.) 영화는 닫힌 결말을 좋아해요. (Movies like closed endings.) | | | | U: 응 나 칭찬받았어˜ 글씨 쓰는 거 넘 재밌다 (Yes I was praised˜ Writing is so much fun) | Non-Personalized | 1 | 1 | | A: 오˜˜ 소질이 있나본데! (Oh˜˜ You must have a talent!) | 캘리그라피를 배우기 시작했어요. (I started learning calligraphy.)비서로 일하고 있어요. (I am working as a secretary.) 진짜 친한 친구 한 두명 정도랑만 만나요. (I only see one or two really close friends.) 셜록홈즈 팬이에요. (I'm a Sherlock Holmes fan.) 영화는 닫힌 결말을 좋아해요. (Movies like closed endings.) | | | | U: 열심히 했거든 ㅋㅋ | | | | | (I worked hard lol) | Non-Personalized | 1 | 1 | | A: 그럼 이제 뭐 할 거야? (So what are you going to do now?) | 진짜 친한 친구 한 두명 정도랑만 만나요. (I only see one or two really close friends.) 캘리그라피를 배우기 시작했어요. (I started learning calligraphy.) 비서로 일하고 있어요. (I am working as a secretary.) 차돌짬뽕을 좋아해요. (I like chadol jjambbong.) 셜록홈즈 팬이에요. (I'm a Sherlock Holmes fan.) | | | | U: 연습 좀 더 하다가 친구 만나러 갈까해 (I'm going to practice some more and then go see a friend) | Hard Grounding +Consistent | 1 | 1 | | A: 그 친구랑 또 만나는거야? (Are you going to meet that friend again?) | 진짜 친한 친구 한 두명 정도랑만 만나요. (I only see one or two really close friends.) 차돌짬뽕을 좋아해요. (I like chadol jjambbong.) 비서로 일하고 있어요. (I am working as a secretary.) 영화는 닫힌 결말을 좋아해요. (Movies like closed endings.) 셜록홈즈 팬이에요. (I'm a Sherlock Holmes fan.) | | | | U: 응 종종 만나잖아 ㅋㅋ | | | | | (Yes, we meet often.) | Hard Grounding +Consistent | 1 | 1 | | A: 그렇구나 오늘도 너가 좋아하는 차돌 짬뽕 먹으러 가는거 아냐? (That's right lol ˜ Isn't today also going to eat your favorite chadol jjambbong?) | 진짜 친한 친구 한 두명 정도랑만 만나요. (I only see one or two really close friends.) 
차돌짬뽕을 좋아해요. (I like chadol jjambbong.) 비서로 일하고 있어요. (I am working as a secretary.) 셜록홈즈 팬이에요. (I'm a Sherlock Holmes fan.) 영화는 닫힌 결말을 좋아해요. (Movies like closed endings.) | | | | U: 그럼 좋지˜ (then it's good) | Non-Personalized | 1 | 1 | | A: 맛있게 먹고 와 :) (Have a good meal :)) | | | |

Figure 6: Example of generations of our personalized dialogue model with subjective evaluation for hard grounding. Blue colored texts are persona-grounded responses and persona attributes on which the model grounds the response.

Figure 7: Example of subjective evaluation for soft grounding and fail cases

## B.3 Deployment Tool: Sanity Testing

![12_image_0.png](12_image_0.png)
bhattacharya-etal-2023-cupid
{CUPID}: Curriculum Learning Based Real-Time Prediction using Distillation
https://aclanthology.org/2023.acl-industry.69
Relevance in E-commerce Product Search is crucial for providing customers with accurate results that match their query intent. With recent advancements in NLP and Deep Learning, Transformers have become the default choice for relevance classification tasks. In such a setting, the relevance model uses query text and product title as input features, and estimates if the product is relevant for the customer query. While cross-attention in Transformers enables a more accurate relevance prediction in such a setting, its high evaluation latency makes it unsuitable for real-time predictions in which thousands of products must be evaluated against a user query within few milliseconds. To address this issue, we propose CUPID: a Curriculum learning based real-time Prediction using Distillation that utilizes knowledge distillation within a curriculum learning setting to learn a simpler architecture that can be evaluated within low latency budgets. In a bi-lingual relevance prediction task, our approach shows an 302 bps improvement on English and 676 bps improvement for low-resource Arabic, while maintaining the low evaluation latency on CPUs.
# CUPID: Curriculum Learning Based Real-Time Prediction Using Distillation

Arindam Bhattacharya (Amazon, India) aribhat@amazon.com
Vijay Huddar (Amazon, India) vhhuddar@amazon.com
Ankith MS (Amazon, India) ankiths@amazon.com
Atul Saroop (Amazon, India) asaroop@amazon.com
Ankit Gandhi (Amazon, India) ganankit@amazon.com
Rahul Bhagat (Amazon, USA) rbhagat@amazon.com

## Abstract

Relevance in E-commerce Product Search is crucial for providing customers with accurate results that match their query intent. With recent advancements in NLP and Deep Learning, Transformers have become the default choice for relevance classification tasks. In such a setting, the relevance model uses query text and product title as input features, and estimates if the product is relevant for the customer query. While cross-attention in Transformers enables a more accurate relevance prediction in such a setting, its high evaluation latency makes it unsuitable for real-time predictions in which thousands of products must be evaluated against a user query within a few milliseconds. To address this issue, we propose **CUPID**: a CUrriculum learning based real-time PredIction using Distillation that utilizes knowledge distillation within a curriculum learning setting to learn a simpler architecture that can be evaluated within low latency budgets. In a bi-lingual relevance prediction task, our approach shows a 302 bps improvement on English and a 676 bps improvement for low-resource Arabic, while maintaining the low evaluation latency on CPUs.

## 1 Introduction

Large-scale e-commerce search systems, such as those used by companies like Amazon, Walmart, etc., typically employ a multi-step process to retrieve relevant products for a given query (Guo et al., 2022). The first step in this process is to generate a matchset that is approximately relevant to the query, followed by a series of steps that optimize for relevance, customer interest and other associated metrics (Momma et al., 2022). In such a setting, it is imperative to have features that accurately capture relevance between the customer's query intent and the candidate set of products in the matchset.

Recently, transformer-based models such as BERT have proven to be highly effective in estimating relevance between a query and a product by using cross-attention (Mangrulkar et al., 2022a; Nogueira and Cho, 2019; Wang et al., 2019), self-attention (dual-encoders) (Bhattacharya et al., 2023; Reimers and Gurevych, 2019a; Mangrulkar et al., 2022b), or late interaction (Khattab and Zaharia, 2020a; Santhanam et al., 2021; Lu et al., 2022) models. The use of cross-attention in transformers has been shown to be effective, as it allows the model to take into account both the query and the product when determining relevance (Menon et al., 2022; Hofstätter et al., 2020). However, these models come at a cost, as they require heavy computational resources and have a high latency even during evaluation. In large-scale search systems, it is important to perform real-time relevance predictions, as thousands of products need to be processed for each query, with strict latency requirements. For this reason, dual-encoder models are more suitable, as they can provide real-time relevance predictions within low latency requirements. In such a setting, the task of estimating relevance is reduced to carrying out simple vector operations, typically a dot product of high-dimensional vectors, one representing the query and the others representing products.

Under such a setting, the vectors representing products are pre-computed and cached, while those for the query are computed on-the-fly. Such on-the-fly computation of query vectors or embeddings in low latency settings restricts us from using a full stack of transformer layers, as is typical of models like BERT. To address the issue of latency in the computation of query embeddings, we instead borrow from the architecture used in (Nigam et al., 2019) for the query arm of our asymmetric dual-encoders, while continuing to use a full stack of transformers for the product arm (as shown in Figure 1). Their simple architecture is comprised of a word embedding layer and a mean-pool layer (based on (Huang et al., 2013a), referred to loosely as DSSM (Deep Structured Semantic Model) henceforth), which is more suitable for real-time scenarios with low latency requirements. However, this model lacks the rich semantic representation of models built purely of transformers. To bridge this gap, we leverage knowledge distillation techniques, where we use the DSSM of the query arm as a student model to learn the rich semantic representation from the transformer model.

To address this issue, we propose **CUPID**: a Curriculum learning based real-time prediction using distillation that utilizes knowledge distillation within a curriculum learning setting to learn a simpler architecture that can be evaluated within low latency budgets. Our contributions to the literature can be summarized as follows:

- We show that our low-latency model benefits more through knowledge distillation from a structurally similar dual-encoder transformer model as a teacher, rather than from a cross-encoder transformer model. Even though the cross-encoder transformer model is more accurate, the student is able to learn better from a structurally similar teacher.
- We demonstrate that a learning regime, where a structurally similar student optimizes for a cross-entropy loss for the first few epochs, followed by a curriculum-styled learning of the teacher embeddings using an alignment loss, outperforms other alternative learning regimes.

## 2 Related Work

Cross Encoders and Bi-Encoders Cross encoders and bi-encoders are two distinct architectures used in sentence pair modeling. While both approaches aim to capture the relationship between two sentences, they differ in how they encode the sentences and produce their representations. Cross encoders (Reimers and Gurevych, 2019b) jointly encode both sentences into a fixed-length representation. The shared encoder takes a pair of sentences as input and captures the interaction between them. This joint encoding helps capture both local and global interactions between the sentences, leading to improved representation learning. Bi-encoders (Kiros et al., 2015), on the other hand, use separate encoders for each input sentence. Each sentence is encoded independently, producing two separate representations. These representations are then compared using a similarity function (e.g., dot product, cosine similarity) to determine the relationship between the sentences.

Both cross encoders and bi-encoders have their advantages and are suitable for different scenarios. Cross encoders excel at capturing the interaction between sentences, while bi-encoders are computationally efficient. The choice between the two architectures depends on the specific requirements.
For real-time predictions, cross encoders are often infeasible, but bi-encoders excel due to the ability to utilize pre-computed indexes. Low latency Transformers In recent years, with the state-of-the-art performance of transformers in NLP applications, there has been a demand to make transformers suitable for real-time e-commerce applications. And the reduction in computation time and latency is crucial for transformers to be viable for such use cases. There are three main themes (Lin et al., 2022) into which the advances in the design of low-latency transformers can be categorised. (I) architectural level, (II) component level, and (III) technique-based. At the architectural level (I), modifications are made at a higher level, such as the use of lightweight transformers like Funnel Transformer (Dai et al., 2020), Lite Transformer (Wu et al., 2020), and DeLighT (Mehta et al., 2020). Additionally, pruning techniques reported in (Kwon et al., 2022; Gordon et al., 2020; Mao et al., 2020; Hou et al., 2020) aim to reduce the size and computation by eliminating unimportant weights. Finally, Quantization-based approaches (Ganesh et al., 2021) and model compression (Bai et al., 2019) are used to compress weights and activations. At the component level (II), there is a focus on efficient self-attention, such as in the works of (Wang et al., 2020), and delayed interaction networks (Reimers and Gurevych, 2019c; Khattab and Zaharia, 2020b; Santhanam et al., 2021). Lastly, under the category of technique-based (III), research efforts have been made in areas such as early exit (Mangrulkar et al., 2022a; Zhou et al., 2020; Xin et al., 2020) and knowledge distillation (Hinton et al., 2015; Sanh et al., 2020). Knowledge Distillation Knowledge Distillation (KD)(Hinton et al., 2015) is a widely researched topic that enables the transfer of knowledge from a complex, pre-trained model to a smaller, more computationally efficient model. With the rise of BERT (Devlin et al., 2018) and the corresponding growth in textual data, there has been growing interest in applying KD to BERT models in the field of NLP. Distill BERT (Sanh et al., 2020) is one such seminal work along with (Tinybert(Jiao et al., 2020), Fast BERT(Liu et al., 2020), Task specific BERT (Tang et al., 2019), Patient Knowledge Distillation BERT (Sun et al., 2019a). These propose using KD to transfer knowledge from a large BERT model to a smaller model. While the Distill BERT makes BERT models faster by 60%, they cannot be used in real-time because e-commerce applications such as semantic matching (Huang et al., 2013b; Nigam et al., 2019) have limited computational resources (especially GPUs) and strict latency requirements. The latest TwinBERT (Lu et al., 2020) proposes the distillation of 12-layer BERT to 6-layer twin tower BERT structure, thus, permitting pre-compute of document embeddings and cache in memory saving additional computational time. However, the TwinBERT requires GPUs during inference to compute the query arm embeddings within the latency budgets. ## 3 Our Approach In this section, we present our proposed approach CUPID that uses bi-encoder transformer models as teachers, and learn asymmetric student models having DSSM architecture in the query side and transformer architecture in the product side. Using cross-encoders as teachers is natural, however, in this work we show that using bi-encoder teacher yields better performance and allows better transferring of knowledge to bi-encoder student models (refer to Section 4.3). 
A bi-encoder teacher model enables better transfer of semantics to a simpler bi-encoder student model with better alignment of query embeddings. Later in the section, we present a very simple curriculum learning framework for training the bi-encoder student model by progressively increasing the difficulty of the task. We present the detailed architecture (also shown in Figure 1) below.

## 3.1 Teacher Model

The teacher model used is a Siamese BERT (SBERT) architecture. SBERT first computes a fixed-size contextual representation for an entity by *mean* pooling the BERT model's output, followed by a dense layer with tanh activation to get the entity embedding. We use the same BERT model for representing both query and product to enable the transfer of language semantics between them. Finally, the similarity ŷi between entities is determined by the cosine distance between the embeddings. Note that, although we use BERT, any transformer model can act as an alternative to the BERT model. In Section 4, we present results on some multilingual datasets, where we use the multilingual XLM RoBERTa transformers. It should also be noted that while cross-encoder models are widely used as teacher models in knowledge distillation tasks for NLP, for our application, we claim that a bi-encoder model is much better suited for the task of teaching an asymmetric bi-encoder student. Section 4.3 justifies this claim.

Teacher Training Objective Let's assume our training samples are represented by the tuple (qi, pi, yi), where qi is a query entity, pi is a product entity, and yi is the ground-truth label. Let S(·) be the function returning the embedding EmbeddingTransformer. Then the predicted semantic similarity between the query and the product is measured using the cosine similarity as ŷ = sim(q, p) = cos(S(p), S(q)). We train the teacher model using the binary cross entropy loss, computed as loss_ce(ŷ, y) = −(y · log(ŷ) + (1 − y) · log(1 − ŷ)).

## 3.2 Student Model

In the teacher model, both arms use SBERT (or any transformer) models to generate query and product representations. While SBERT generates better representations of the entities, the latency of such large models is prohibitively high for real-time representation generation for queries. We therefore train a smaller model for query representation, and retain the SBERT model for product representation generation, which can be computed and indexed in advance and thus does not require real-time computation. The high-level architecture of the query arm is similar to that of SBERT. The only difference is that we use an embedding lookup and a mean-pool layer instead of a BERT encoder to generate the intermediate representation for the query. We use the teacher BERT model to generate the embedding for the product.

Student Model Training Here we discuss how we distill the knowledge from the query arm of the teacher SBERT model to the smaller student model. The standard way to distill knowledge from teacher to student is to use a loss function over the predicted similarities between the query and the product of the teacher model, ŷT, and the student model, ŷS (Sanh et al., 2020), that is, the student tries to imitate the final output of the teacher. We call this loss the *imitation* loss, which is computed as loss_imitation(ŷT, ŷS) = −ŷT · log(ŷS). In this work, we use a loss which tries to align the representation generated by the student to that of the teacher, similar to (Sun et al., 2019b; Li et al., 2021). We name this loss the *alignment* loss.

![3_image_0.png](3_image_0.png)

Let D(·) be the function returning the embedding EmbeddingDSSM generated by the student model. Then the alignment loss for a query q is computed as loss_alignment(S(q), D(q)) = 1 − cos(S(q), D(q)). In our experiments, we noted that if we introduce loss_ce in the student model, it performs better. Furthermore, the loss_imitation becomes superfluous, that is, removing it doesn't affect the model performance. Section 4 shows the effect of using all three losses, and shows that adding the imitation loss does not add to the model performance and removing it does not have any negative impact, but simplifies the training process by removing a component of the loss.

Curriculum Learning for Student Models Initially, the query arm of the student model is misaligned due to random initialization, which causes instability in learning. To address this, we perform two-stage curriculum learning. In the first stage, we train the student model for a few epochs using the positive training data and *random negative* data. The random negatives are generated by randomly shuffling the products, forming pairs (qi, pj) such that i ≠ j. This approach provides the model with easy examples compared to the negatives present in the dataset itself. This method initializes the DSSM weights to be more aligned to generate the expected query representation. In Section 4, we show that this kind of training stabilizes the training and improves the performance. In the second stage of curriculum learning, we scale up the alignment loss gradually during the model training. So initially, α is 0, and the model is effectively learning only from true labels. Gradually we scale up α, making the model's objective more complicated: reduce the cross entropy loss with true labels, and align the student's query arm with that of the teacher to generate similar embeddings. The progressive increase in the complexity of the model's task, rather than the difficulty of the examples, defines this stage of curriculum learning. With these improvements, we now arrive at the proposed knowledge distillation loss:

$$\text{loss}_{\text{KD}} = (1-\alpha)\cdot\text{loss}_{\text{ce}} + \alpha\cdot\text{loss}_{\text{alignment}},$$

where α is the scaling factor that varies from 0 to 1 and is incremented each epoch.

## 4 Experiments And Results

In this section, we compare the performance of CUPID with state-of-the-art methods of knowledge distillation. We also study the latency of the student models and compare it with the latency of the teacher model to show why BERT-based models are not suitable for real-time predictions. We then present results for a real-world application of CUPID on an internal dataset. Finally, we show the effect each of the losses has on the performance of the models, and explain the choice of a bi-encoder teacher model compared to a better-performing cross-encoder model.

## 4.1 Datasets

We performed the experiments on the Shopping Queries Dataset (Reddy et al., 2022) that was made openly available as part of the KDD Cup 2022 workshop. The training dataset has around 800 thousand samples and the test dataset has around 400 thousand. For each query, the training dataset provides on average a list of up to 13 potentially relevant results, together with relevance judgements (Exact, Substitute, Complement, Irrelevant) indicating the relevance of the product to the query. For our experiments, we consider the problem as a binary classification where a product is either relevant (exact, substitute or complement) or irrelevant.
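As a small illustration of the binary labeling just described, the snippet below maps ESCI judgements to binary relevance labels; the record layout is an assumption based on the public dataset release, not necessarily the exact preprocessing used here.

```python
# ESCI relevance judgements mapped to binary labels, as described above:
# Exact, Substitute and Complement are treated as relevant (1), Irrelevant as 0.
ESCI_TO_BINARY = {"E": 1, "S": 1, "C": 1, "I": 0}

def to_binary_label(esci_label: str) -> int:
    """Map an ESCI judgement (spelled out or abbreviated) to a binary relevance label."""
    key = esci_label.strip()[0].upper()   # accepts "Exact", "exact" or "E"
    return ESCI_TO_BINARY[key]

# Example: a (query, product_title, esci_label) record becomes a binary sample.
query, title, esci = ("ergonomic office chair", "mesh desk chair with lumbar support", "Substitute")
sample = {"query": query, "title": title, "label": to_binary_label(esci)}   # label = 1
```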
Both the training and test datasets contain around 1 negative example per 10 positive examples. We also present the results of CUPID and the baselines on datasets from Arabic language locales that have been subsampled from a leading e-commerce company's history log and human-audited for ESCI labels, similar to the work of (Reddy et al., 2022). At this time, the data is not publicly available and is proprietary. We used a few hundred thousand records to train and test the models.

## 4.2 Experiments

We implement CUPID and the baseline models described in Section 3 using the PyTorch (Paszke et al., 2019) library and Hugging Face (Wolf et al., 2019) transformers.

Teacher Models Both the bi-encoder model and the cross-encoder model use bert-base-uncased as the pretrained model, with a dense layer of size 128 to generate both query and product representations. The teacher is trained for 10 epochs using the Adam optimizer with an exponentially decaying learning rate. We use a batch size of 256 to fit the model into GPU memory. Due to the imbalance inherent in the dataset, we use two standard approaches to stabilize the training. First, we accumulate the gradients across 10 batches before applying the Adam updates. Second, we use a weighted sampler while loading the training batch to ensure that we over-sample the negatives to retain a balanced class distribution.

Student Models The student models use the same BERT arm to generate the product representation as the teacher. For the query generation, the student models use a DSSM layer with a dense layer of size 128, same as that of the BERT arm. The student models are also trained for 10 epochs, and use the same optimizer and scheduler as the teacher. The weights of the product arm are frozen and only the query arm is trained.

Metrics We use the area under the ROC curve to compare the results of the baseline models with CUPID. This allows us to compare the performance of the models without forcing a choice of desired precision or recall, which may vary based on the requirements. In addition, AUROC also gives an indication of the probability of a negative being ranked higher than a positive, which is important information when dealing with applications such as product search.

## 4.2.1 Results

We now compare the results of CUPID and the baselines on our dataset. Table 1 shows the area under the ROC curve for the models.

| ID | Experiment | CL Stage | AUC (%) |
|----|------------|----------|---------|
| T | Teacher | I | 87.25 |
| B | Student: CE (No KD) | I | 83.15 |
| S1 | Student: IL | I | 84.53 |
| S2 | Student: IL + AL | I | 84.81 |
| S3 | Student: CE + AL | I | 85.32 |
| S4 | Student: CE + IL | I | 84.59 |
| S5 | Student: CE + IL + AL | I | 84.80 |
| H | CUPID | I & II | **86.17** |

The teacher model (T) achieves an AUC of 87.25%. For baseline (B), we show the performance of a model with a DSSM for the query arm and a BERT for the product arm that is trained only using the training data, with no knowledge distillation. As expected, its performance falls short of the teacher. We then show the effects of the various losses described in Section 3. S1 uses only the imitation loss. S2, which adds the alignment loss, increases the performance by around 50 basis points. Replacing the imitation loss with BCE in S3 gives us approximately another 50 basis points boost. The S4 configuration shows us that using BCE and imitation loss together doesn't provide a significant boost over just imitation. A similar observation is made for S5, which is presented for completeness. There we notice that adding BCE to S2 complicates the model and performs no better.
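To make the loss configurations compared above (IL, AL, CE) and the curriculum-weighted combination from Section 3.2 concrete, here is a minimal PyTorch-style sketch; the tensor conventions and the exact α schedule are illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn.functional as F

def imitation_loss(y_teacher, y_student, eps=1e-7):
    # IL: student imitates the teacher's predicted similarity, -y_T * log(y_S)
    return -(y_teacher * torch.log(y_student.clamp(min=eps))).mean()

def alignment_loss(teacher_query_emb, student_query_emb):
    # AL: 1 - cos(S(q), D(q)), aligning student query embeddings to the teacher's
    return (1 - F.cosine_similarity(teacher_query_emb, student_query_emb, dim=-1)).mean()

def ce_loss(y_student, labels):
    # CE: binary cross entropy against ground-truth relevance labels
    # (y_student is assumed to be a similarity rescaled into [0, 1])
    return F.binary_cross_entropy(y_student.clamp(0, 1), labels)

def cupid_loss(y_student, labels, teacher_query_emb, student_query_emb, alpha):
    # loss_KD = (1 - alpha) * loss_ce + alpha * loss_alignment,
    # with alpha ramped from 0 towards 1 over epochs (stage II of the curriculum)
    return (1 - alpha) * ce_loss(y_student, labels) + \
           alpha * alignment_loss(teacher_query_emb, student_query_emb)

def alpha_at(epoch, num_epochs=10):
    # Illustrative linear schedule; the paper only states that alpha starts at 0
    # and is incremented each epoch.
    return min(1.0, epoch / max(num_epochs - 1, 1))
```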
Finally, CUPID with the weighted loss achieves the best results, with a boost of over 300 basis points over the baseline with no knowledge distillation and more than 150 basis points above the standard KD method using imitation loss.

Latency Latency is a major concern for real-time predictions. Here we compare the query arm latency of the teacher and student models to justify the need for a DSSM-based student at the cost of some performance. We convert the models to ONNX before performing the inference. BERT has a latency of 11.6 ms, which is almost 4× that of the DSSM students, which is 3.2 ms. In real time, 11 ms is higher than the standard expected latency for real-time applications. While the product representations can be pre-computed, this latency explains the need for a student model to generate the query representation. All the experiments to compute the latency were carried out on CPU on Amazon EC2 machines using p3.8xlarge instances with 2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 processors.

## 4.2.2 Real World Application: Irrelevant Result Detection In Arabic Locales

Here we present the results of our method on the real-world Arabic data. For this experiment, we trained the teacher model with the pre-trained xlm-roberta model. This transformer model is trained on multiple languages and is thus more suitable for Arabic language data. The remaining parameters and the architecture of the student models remain the same. Table 2 shows the results of the various methods on the Arabic data.

| ID | Experiment | CL Stage | AUC (%) |
|----|------------|----------|---------|
| T | Teacher | I | 89.04 |
| B | Student: CE (No KD) | I | 72.41 |
| S1 | Student: IL | I | 74.63 |
| S2 | Student: IL + AL | I | 76.14 |
| S3 | Student: CE + AL | I | 76.59 |
| S4 | Student: CE + IL | I | 75.02 |
| S5 | Student: CE + IL + AL | I | 74.80 |
| H | CUPID | I & II | 79.17 |

Because of the smaller dataset size, larger vocabulary, and larger transformer model, the performance difference between the teacher and the students is larger. Also, the improvement of CUPID over the baselines is more pronounced for this dataset.

## 4.3 Ablation Studies

In this section, we study the impact of some of the choices made by CUPID and compare them with the alternative approaches.

| ID | Experiment | CL Stage | AUC (%) |
|----|------------|----------|---------|
| B- | Student: No KD | None | 82.09 |
| B | Student: No KD | I | 83.15 |
| H- | CUPID | II | 85.65 |
| H | CUPID | I & II | 86.17 |

Table 4: Impact of Curriculum Learning

| ID | Experiment | AUC (%) |
|----|------------|---------|
| T | BiEncoder Teacher | 87.25 |
| CT | CE Teacher | 89.76 |
| H | CUPID: BiEncoder Teacher | 86.17 |
| CH | CUPID: CE Teacher | 81.53 |

Table 3: Comparison with Cross-Encoder (CE) teacher

## Why is curriculum-based learning important to the CUPID model?

Table 4 shows the effect of curriculum learning for the baseline and CUPID. B- and H- are the versions of the baseline and CUPID without curriculum learning, respectively. We notice that in each case, curriculum learning gives us a significant improvement in performance.

## What is the role of α? How important is the alpha scheduler?

The parameter α is used to adjust the importance of the alignment loss in the CUPID model. In our experiments on non-English datasets, we found that a constant α could achieve an AUC of 85.32%.
We hypothesized that the DSSM model, due to its basic architecture compared to the transformer model, would have difficulty learning the projection matrix and the classification task jointly. Hence, by gradually increasing α, we improved the model performance to 86.17%. We also note that the improvement is not due to a trade-off of achieving better results by letting the model have a higher alignment loss; rather, α scaling actually helps reduce the alignment loss faster than when the alignment loss is not scaled, which justifies the hypothesis.

## Why a bi-encoder teacher and not a cross-encoder?

Cross-encoder models are known to perform better in similarity matching tasks in NLP. So a natural question arises: why not train a cross-encoder model and use that as a teacher? Here we show that the cross-encoder model performs better than the bi-encoder student. But due to the requirement of separate generation and indexing of query and product for real-time predictions, we require a bi-encoder student. And due to the difference in semantics, the transfer of knowledge between a cross-encoder teacher and a bi-encoder student doesn't yield better results than with a bi-encoder teacher, as seen in Table 3. We notice that the cross-encoder teacher (CT) model performs better than the bi-encoder teacher (T) by 250 basis points, but the student trained with the bi-encoder teacher (H), which has access to the alignment loss, greatly outperforms the student trained with the cross-encoder teacher (CH), which has access only to the imitation loss, whose semantics are different.

## 5 Conclusion

In this paper, we propose CUPID, a novel approach for knowledge distillation for real-time prediction of relevancy using curriculum learning. It uses a new loss function and a two-stage curriculum learning framework to increase the influence of the teacher gradually, resulting in a loss function that outperforms imitation-learning-based KD methods by up to 300 basis points.

## References

Junjie Bai, Fang Lu, Ke Zhang, et al. 2019. Onnx: Open neural network exchange. https://github.com/onnx/onnx. Arindam Bhattacharya, Ankit Gandhi, Vijay Huddar, MS Ankith, Aayush Moroney, Atul Saroop, and Rahul Bhagat. 2023. Beyond hard negatives in product search: Semantic matching using one-class classification (smocc). In *WSDM 2023*. Zihang Dai, Guokun Lai, Yiming Yang, and Quoc Le. 2020. Funnel-transformer: Filtering out sequential redundancy for efficient language processing. *Advances in neural information processing systems*, 33:4271–4282. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Hassan Sajjad, Preslav Nakov, Deming Chen, and Marianne Winslett. 2021. Compressing large-scale transformer-based models: A case study on bert. *Transactions of the Association for Computational Linguistics*, 9:1061–1080. Mitchell A Gordon, Kevin Duh, and Nicholas Andrews. 2020. Compressing bert: Studying the effects of weight pruning on transfer learning. *arXiv preprint arXiv:2002.08307*. Jiafeng Guo, Yinqiong Cai, Yixing Fan, Fei Sun, Ruqing Zhang, and Xueqi Cheng. 2022. Semantic models for the first-stage retrieval: A comprehensive review. *ACM Trans. Inf. Syst.*, 40(4). Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, and Allan Hanbury. 2020.
Improving efficient neural ranking models with crossarchitecture knowledge distillation. Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. 2020. Dynabert: Dynamic bert with adaptive width and depth. *Advances in Neural* Information Processing Systems, 33:9782–9793. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013a. Learning deep structured semantic models for web search using clickthrough data. In *Proceedings of the 22nd ACM* International Conference on Information and Knowledge Management, CIKM '13, page 2333–2338, New York, NY, USA. Association for Computing Machinery. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013b. Learning deep structured semantic models for web search using clickthrough data. In *Proceedings of the 22nd ACM* international conference on Information & Knowledge Management, pages 2333–2338. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. Tinybert: Distilling bert for natural language understanding. Omar Khattab and Matei Zaharia. 2020a. Colbert: Efficient and effective passage search via contextualized late interaction over bert. Omar Khattab and Matei Zaharia. 2020b. Colbert: Efficient and effective passage search via contextualized late interaction over bert. Ryan Kiros, Yukun Zhu, Russ R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. *Advances in* neural information processing systems, 28. Woosuk Kwon, Sehoon Kim, Michael W Mahoney, Joseph Hassoun, Kurt Keutzer, and Amir Gholami. 2022. A fast post-training pruning framework for transformers. *arXiv preprint arXiv:2204.09656*. Chaozhuo Li, Bochen Pang, Yuming Liu, Hao Sun, Zheng Liu, Xing Xie, Tianqi Yang, Yanling Cui, Liangjie Zhang, and Qi Zhang. 2021. Adsgnn: Behavior-graph augmented relevance modeling in sponsored search. Tianyang Lin, Yuxin Wang, Xiangyang Liu, and Xipeng Qiu. 2022. A survey of transformers. *AI Open*. Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, and Qi Ju. 2020. Fastbert: a selfdistilling bert with adaptive inference time. arXiv preprint arXiv:2004.02178. Wenhao Lu, Jian Jiao, and Ruofei Zhang. 2020. Twinbert: Distilling knowledge to twin-structured bert models for efficient retrieval. Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, and Haifeng Wang. 2022. Ernie-search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval. Sourab Mangrulkar, Ankith M S, and Vivek Sembium. 2022a. Be3r: Bert-based early-exit using expert routing. In *KDD 2022*. Sourab Mangrulkar, Ankith M S, and Vivek Sembium. 2022b. Multilingual semantic sourcing using product images for cross-lingual alignment. In The Web Conference 2022. Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, and Jing Bai. 2020. Ladabert: Lightweight adaptation of bert through hybrid model compression. arXiv preprint arXiv:2004.04124. Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2020. Delight: Deep and light-weight transformer. *arXiv* preprint arXiv:2008.00623. Aditya Menon, Sadeep Jayasumana, Ankit Singh Rawat, Seungyeon Kim, Sashank Reddi, and Sanjiv Kumar. 2022. In defense of dual-encoders for neural ranking. 
In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 15376–15400. PMLR. Michinari Momma, Chaosheng Dong, and Yetian Chen. 2022. Multi-objective ranking with directions of preferences. In *SIGIR 2022 Workshop on eCommerce*. Priyanka Nigam, Yiwei Song, Vijai Mohan, Vihan Lakshman, Weitian, Ding, Ankit Shingavi, Choon Hui Teo, Hao Gu, and Bing Yin. 2019. Semantic product search. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Chandan K. Reddy, Lluís Màrquez, Fran Valero, Nikhil Rao, Hugo Zaragoza, Sambaran Bandyopadhyay, Arnab Biswas, Anlu Xing, and Karthik Subbian. 2022. Shopping queries dataset: A large-scale ESCI benchmark for improving product search. *arXiv*. Nils Reimers and Iryna Gurevych. 2019a. Sentencebert: Sentence embeddings using siamese bertnetworks. Nils Reimers and Iryna Gurevych. 2019b. Sentencebert: Sentence embeddings using siamese bertnetworks. *arXiv preprint arXiv:1908.10084*. Nils Reimers and Iryna Gurevych. 2019c. Sentencebert: Sentence embeddings using siamese bertnetworks. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2021. Colbertv2: Effective and efficient retrieval via lightweight late interaction. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a. Patient knowledge distillation for bert model compression. *arXiv preprint arXiv:1908.09355*. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019b. Patient knowledge distillation for bert model compression. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling taskspecific knowledge from bert into simple neural networks. *arXiv preprint arXiv:1903.12136*. Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Jiangnan Xia, Liwei Peng, and Luo Si. 2019. Structbert: Incorporating language structures into pretraining for deep language understanding. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. *ArXiv*, abs/1910.03771. Zhanghao Wu, Zhijian Liu, Ji Lin, Yujun Lin, and Song Han. 2020. Lite transformer with long-short range attention. *arXiv preprint arXiv:2004.11886*. Ji Xin, Raphael Tang, Jaejun Lee, Yaoliang Yu, and Jimmy Lin. 2020. Deebert: Dynamic early exiting for accelerating bert inference. Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020. Bert loses patience: Fast and robust inference with early exit.
faustini-etal-2023-answering
Answering Unanswered Questions through Semantic Reformulations in Spoken QA
https://aclanthology.org/2023.acl-industry.70
Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems. Users ask questions via spontaneous speech that can contain disfluencies, errors, and informal syntax or phrasing. This is a major challenge in QA, causing unanswered questions or irrelevant answers, leading to bad user experiences. We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity. We propose a Semantic Question Reformulation (SURF) model offering three linguistically-grounded operations (repair, syntactic reshaping, generalization) to rewrite questions to facilitate answering. Offline evaluation on 1M unanswered questions from a leading voice assistant shows that SURF significantly improves answer rates: up to 24% of previously unanswered questions obtain relevant answers (75%). Live deployment shows positive impact for millions of customers with unanswered questions; explicit relevance feedback shows high user satisfaction.
# Answering Unanswered Questions Through Semantic Reformulations In Spoken Qa Pedro Faustini1∗ , Zhiyu Chen2, Besnik Fetahu2, Oleg Rokhlenko2**, Shervin Malmasi**2 1Macquarie University, Sydney, NSW, Australia 2Amazon.com, Inc., Seattle, WA, USA pedro.arrudafaustini@hdr.mq.edu.au {zhiyuche,besnikf,olegro,malmasi}@amazon.com ## Abstract Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems. Users ask questions via spontaneous speech which can contain disfluencies, errors, and informal syntax or phrasing. This is a major challenge in QA, causing unanswered questions or irrelevant answers, and leading to bad user experiences. We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity. We propose a Semantic Question Reformulation (SURF) model offering three linguistically-grounded operations (repair, syntactic reshaping, generalization) to rewrite questions to facilitate answering. Offline evaluation on 1M unanswered questions from a leading voice assistant shows that SURF significantly improves answer rates: up to 24% of previously unanswered questions obtain relevant answers (75%). Live deployment shows positive impact for millions of customers with unanswered questions; explicit relevance feedback shows high user satisfaction. ## 1 Introduction Question Answering (QA) is a longstanding NLP task, and voice assistants like Alexa have made Spoken QA ubiquitous. Users often address such assistants with spontaneous speech, as they would a human. However, differences between spoken and written language (Chafe and Tannen, 1987), such as the presence of disfluencies, informal or incomplete speech, and different syntax have been shown to pose challenges for NLP tasks (Ward, 1989; Shriberg, 2005; Salesky et al., 2019). QA system mostly use written data, and such phenomena impact question understanding and answer retrieval (Gupta et al., 2021), leading to irrelevant answers or unanswered questions, leaving users unsatisfied. Recently, language generation has been used to improve QA through Question Rewriting (QR). For example, QR is used in conversational systems ∗Work done during an internship at Amazon. fl ![0_image_0.png](0_image_0.png) Figure 1: Examples of challenging questions (Q) and our proposed reformulation operations (R) on them. to answer contextual questions in multi-turn dialogues (Ye et al., 2022). While QA models can be improved with fine-tuning, real-world systems have multiple QA backends and retraining is expensive, making input rewriting a practical solution (Chen et al., 2022). This has the added benefit that a single QR model may improve multiple QA systems. We propose applying QR to reformulate difficult or unanswered questions. We analyzed millions of answered and unanswered real-world questions from a leading voice assistant to understand the factors impacting QA failure (§3). In addition to the well-known issue of disfluencies, we identify novel challenges from question structure and specificity. To address them, we propose three linguisticallyinformed reformulation operations that only require the question (§4). The operations, shown in Figure 1, are designed to *improve answerability*1 based on common speech patterns, so that for a previously unanswered question, the same QA system is able to provide an answer for its reformulation. 
1We consider a question answerable if a QA system decides with sufficient confidence that a suitable answer is retrieved. 729 While question repair has been studied, our *root* wh- and *question generalization* operators are novel contributions of this work. Our results demonstrate that our approach can achieve: 1. high reformulation accuracy of 83% for rewriting questions to a desired shape (§6.1); 2. improving the answer rate of previously unanswered questions by up to 24% (§6.2); and 3. 75% of answers on reformulated questions are relevant to the original question (§6.3). Live deployment of our model (§6.4) achieves positive impact for millions of users with unanswered questions, and explicit relevance feedback from customers shows high satisfaction. ## 2 Related Work Question Quality: QA models are typically trained on formal written language, and are known to be impacted by the quality of user questions. An analysis of the WikiAnswer dataset (Fader et al., 2014) by Liu et al. (2019) showed that 68% of the questions were ill-formed, usually due to wrong words, wrong order, or background noise, harming the answerability of those questions. Gupta et al. (2021) examined the impact of disfluencies in QA, showing that they had a large impact on answering performance. Many of these issues stem from natural properties of spontaneous speech, such as errors, self corrections, and informal syntax (Chafe and Tannen, 1987). Our work tackles these issues, and tries to go beyond corrections by considering question types and question specificity. Question Complexity: Depending on the QA system, some questions may be more difficult to answer. It has been shown that questions requiring multi-hop reasoning are more challenging (Yang et al., 2018), often leading to no answers or wrong answers. Questions are affected by the broader types of syntactic complexity explored in the field (Nassar et al., 2019; Martin et al., 2020; Sheang and Saggion, 2021). Regardless of complexity, questions may also be unanswerable due to incorrect framing or false suppositions (Kim et al., 2021). Other work has analyzed questions in different datasets, showing that *wh-** words (e.g. who, what, when) are the dominant way to start a question (Ko et al., 2020), and that these words and related phrases (e.g., "*how much*", "*how large*") are associated with reduced answering complexity (Chali and Hasan, 2012). In our work we consider how controlled syntactic restructuring can address the above challenges to reduce answering complexity. Question Rewriting: Rewriting questions is a natural extension of query reformulation approaches used to improve Information Retrieval (He et al., 2016). Question rewriting has been applied to improve QA in different ways. Question paraphrasing has been used as a data augmentation approach to retrain QA systems to improve robustness (Gan and Ng, 2019). Buck et al. (2018) propose using a reinforcement learning agent between the original question and a black box QA system. The agent probes the QA system with several reformulations to learn how to elicit the best answer. Liu et al. (2019) propose a question refinement system to rewrite malformed questions. Rewriting Operations: Text rewriting is based on specific linguistic changes. Nassar et al. (2019) note that text simplification changes can be lexical (rare words replaced by more common ones) and syntactic (complex structures are split, reordered, or deleted). 
Tomuro (2003) notes that paraphrasing questions is more difficult as the interrogative structure is separate from the declarative, and can have many variations. They quantified paraphrasing operations and showed that interrogative reformation accounted for 50% of changes, followed by lexical substitution (25%) and semantic changes (16%). Recent work on sentence rewriting has followed this direction, by breaking down reformulation into predefined editing operations (Choi et al., 2021). Our work is inspired by all of the above, but differs in several ways. We expand on the known issues in QA by analyzing real voice assistant data to identify prevalent challenges to tackle; we consider malformed question correction as a prerequisite for dealing with challenges of complex questions. Additionally, prior rewriting approaches aim to improve QA via retraining, or by building a rewriter tailored to a single QA system. We take a different approach that does not rely on answer data or QA system feedback, and build a general model that can benefit multiple QA systems in a federated architecture. Instead of uncontrolled paraphrasing, we deal with question complexity via controllable reformulations that distinguish between lexical modification, interrogative clause restructuring, and semantic changes. We propose novel linguistic restructuring operations to deal with complex syntax, and generalize high-specificity elements. ## 3 Challenges In Real-World Spoken Qa First, to quantify and understand why spoken QA fails, we perform a failure analysis on 10 million real questions, by further distinguishing questions according to their *question type* (we define 5 types based on linguistic properties, see Appendix B for details) from a leading voice assistant. Scope: we limit our work to questions that were not answered due to retrieval failure, but may potentially have relevant answers if reformulated. They must be valid questions (seek knowable knowledge) whose information need can be understood (by humans) and re-stated. QA may fail for other reasons; we do not consider such issues e.g., *inter alia*, ASR errors, invalid or difficult to understand questions, subjectivity, and other reasons for retrieval failure. A quantitative and qualitative study was undertaken by domain experts (details in Appendix A), and identified the below challenges (C1-7) as contributing to a significant proportion2 of failed requests, and potentially solvable by reformulation. C1. Malformed Utterances: Questions with disfluencies and syntactic errors were more likely to fail e.g., Fig. 1 (Q1). Correction methods have previously been used to fix these (Gupta et al., 2021). C2. Lexical Gaps Questions framed colloquially or lacking appropriate parlance for a topic e.g., Fig. 1 (Q2), were associated with failure. This is caused by lexical gaps (Riezler and Liu, 2010) arising from language mismatch between the user input and answer sources, as QA systems use formal knowledge sources for retrieval. Lexical substitution and rephrasing may address this challenge. C3. Complex Syntactic Structure: Utterances with complex structure, such as multi-clause questions, can lead to QA failure. Such phrasing is more common in spoken language, and can be simplified via syntactic restructuring, e.g., Fig. 1 (Q3-5). C4. Polar Propositions: Yes-No questions are asked to confirm a specific proposition, e.g., "Do box turtles live in Japan?". 
Answering polar questions is more difficult than wh-questions for both humans (Moradlou et al., 2021) and QA systems (Clark et al., 2019), due to the entailment and inferences required to arrive at an answer. This can be simplified by reformulating to a factoid whquestion, e.g., "*Where do box turtles live?*". 2The exact numbers cannot be divulged for confidentiality. C5. False Presuppositions: Additionally, polar questions may contain false presuppositions that cause retrieval failure (Kim et al., 2021). We hypothesize rewriting such questions to wh-questions may retrieve relevant answers, e.g., Fig. 1 (Q6). C6. High Specificity: Highly specific questions (concerning very specific entities, or conditions) may not be answerable. We believe generalizing such questions by entity modification or constraint relaxation (Fig. 1 Q7-9) can broaden answer recall. C7. Irrelevant Info: Related to C3 and C6, complex and high-specificity questions may contain contextual facts that are irrelevant to the answer. We believe removing such details can improve answer recall (Fig. 1 Q4/5/7). ## 4 Surf **Question Reformulation Model** We now describe our proposed Semantic Question Reformulation model (SURF) and the reformulation operators that it supports. ## 4.1 Reformulation Model Inspired by controllable multi-task learning for text generation (Keskar et al., 2019; Raffel et al., 2020), we train a single model to perform different reformulations. Our reformulation model, F(*p, q*), represents a seq2seq Transformer model (Lewis et al., 2020), and is trained such that for an *input* question q and a *target* reformulation operator p ∈ {REP, ROO, GEN}, pre-pended as a prefix to q, it reformulates q into q′according to p. Model Training. F is trained in two stages: the first stage pretrains F using a large *weaklysupervised* corpus (derived by a heuristic proposed in §5.3) of ⟨*q, q*′⟩ for the REP and ROO operations. In the second stage, we finetune F on manually annotated pairs of ⟨*q, q*′⟩ for all operators in p. ## 4.2 Reformulation Operators Each prefix p instructs F to perform a specific type of reformulation. We define the following prefix operators based on the challenges presented in §3. Question Repair (REP): To address challenges C1-2, REP removes disfluencies, performs syntactic correction, and increases formality via lexical substitution with high-entropy words. For example, the input "*Where can I get a booze after 11 pm?*" is repaired to "*Which stores sell beer after 11 pm?*". Root Wh- Transform (ROO): Outlined in challenges **C3-6**, questions with complex structure are more difficult, but may be answered if simplified to factoid questions. ROO reformulates q such that the interrogative *wh-** phrase is clause initial (at the root of the sentence), making needed syntactic adjustments. For example, "*Do any universities* in Germany offer degree programs taught in English?" → "Which universities in Germany offer degree programs in English?". This also handles contextualized multi-clause questions, e.g., "*I am* chopping onions for a pizza dinner how fine should they be" → "*how fine should onions be for pizza*". Question Generalization (GEN): To deal with highly specific questions, covered in challenge C4, we propose a novel question generalization operation. Inspired by similar approaches to improve recall in structured query languages (Motro, 1984) and IR (Boldi et al., 2011), we simplify questions through the removal or relaxation of semantic constraints. 
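All three operators are served by the same prefix-conditioned model F(p, q) described in Section 4.1. As a minimal illustrative sketch of how such a model could be invoked (the checkpoint path and the "OPERATOR: question" prefix encoding are assumptions; the paper fine-tunes facebook/bart-base but does not publish the exact prefix format):

```python
# Hypothetical invocation of a prefix-conditioned reformulator F(p, q).
# "path/to/surf-checkpoint" and the "OPERATOR: question" input format
# are assumptions for illustration only.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("path/to/surf-checkpoint")

def reformulate(operator: str, question: str) -> str:
    assert operator in {"REP", "ROO", "GEN"}
    inputs = tokenizer(f"{operator}: {question}", return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# ROO+GEN (Section 5.4) chains the operators sequentially:
question = "do any universities in germany offer degree programs taught in english"
roo_gen = reformulate("GEN", reformulate("ROO", question))
```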
Creating a more general question allows the retrieval of a superset of results, which in many cases provides a highly related answer that may be better than no answer. GEN does this by dropping adjuncts, replacing nouns with hypernyms or holonyms, and removing adjectives. For example, the question "*Do poisonous pythons live in Miami?*" can be generalized to "*Do snakes live in Florida?*". Note that "*python*" and "*Miami*" are turned into more generic entities "*snake*" and "*Florida*", and at the same time the aspect "*poisonous*" is dropped. In ROO and GEN, REP is always performed jointly with the respective operators. The output of all operators should not contain any syntactic or semantic errors present in the original question. ## 5 Experimental Setup 5.1 Intrinsic Evaluation Strategy Using a human study, we intrinsically evaluate the reformulation accuracy3to assess if: (1) the reformulation *retains* the intent of the input question; and (2) the reformulation *satisfies* the properties of the reformulation operator p. Evaluation Data: For each question type, we randomly sampled 50 questions from each reformulation operator. This data is then assessed by expert annotators, resulting in 1, 000 annotations. 3Note that due to the unreliability of automated metrics such as BLEU and ROUGE, we opt for human evaluation and answering metrics in our intrinsic and extrinsic evaluations. ## 5.2 Extrinsic Evaluation Strategy For the extrinsic evaluation, we assess the impact of the reformulated questions on two aspects: Answer Rate: measured as the percentage of reformulations that obtain an answer. Answer Relevance: a three-point scale measuring the answer relevance to the original question (obtained from the reformulated questions): **Irrelevant** (0): answer is not related to q; **Related** (1): answer is partially relevant;4and **Exact** (2): answer exactly satisfies question's information need. Evaluation Data: For the two aspects we measure, we consider the following evaluation datasets: - **Answer Rate:** We randomly sampled 1M *unanswered* questions by our QA system (see Appendix F for additional details). - **Answer Relevance:** on the same questions used for intrinsic evaluation, the annotators also check the answer relevance w.r.t the original question. ## 5.3 Training Data Pre-training Data. We create a weakly-supervised dataset of 1.2M samples, derived from the MQR corpus (Chu et al., 2020), which provides tuples of ill-formed and well-formed questions (c.f. §D). To construct input tuples ⟨*p, q*⟩ for pre-training F, from a target question q′ we derive p as follows. First, using the algorithm in Appendix B, we identify the question types of q and q′. If q and q′ have the same type, then p = REP. If q′is a *root* question and q is not, then p = ROO. The GEN operator is novel to our work and cannot be automatically derived, and is part of the fine-tuning dataset. Fine-Tuning Data. We sampled 3, 851 questions and annotated reformulations, based on guidelines listed in §E, for all operators in §4.2. We use 10% of annotated data for validation; the rest is used during the second stage of training to fine-tune F. ## 5.4 Reformulation Model Configurations SURF: At inference time, our model5can do different reformulations based on p. We analyze the impact on question answerability from different reformulation operators REP, ROO and GEN. 
Additionally, we analyze the combination of ROO and GEN, i.e., q is first reformulated by ROO, then the resulting q′is reformulated by GEN, denoted as 4e.g., it answers the question's intent but the subjects may be different, as may be the case of entities in GEN. 5Appendix C shows training and hyperparameter details. ROO+GEN. Note that the operators ROO, GEN, ROO+GEN, all perform a REP operation as well (see Section 4.2 for details). Baseline: As a baseline model we consider an ablation of SURF without its pre-training stage, assessing its performance on the same four operators. O**PTIMAL**: We consider the case where q is answered if any of its reformulations p ∈ {REP, ROO, GEN} obtains an answer. OPTIMAL represents the upper bound performance of the QA system.6 ## 6 Results And Discussion We now turn to a discussion of the results for the intrinsic (accuracy) and *extrinsic* (answer rate and relevance) evaluation strategies. | Task | Baseline | SURF | |------------------|------------|--------| | No reformulation | 0.00% | 0.00% | | REP | 8.10% | 9.41% | | ROO | 9.26% | 13.18% | | GEN | 13.29% | 14.34% | | ROO+GEN | 14.50% | 17.92% | | OPTIMAL | 18.48% | 24.15% | ![4_image_0.png](4_image_0.png) ## 6.1 Intrinsic Evaluation | Task | Accuracy | Answer Relevance | | | |------------|------------|--------------------|------------|------------| | Irrelevant | Related | Exact | | | | ROO+GEN | 77% | 26% (4.6%) | 25% (4.5%) | 48% (8.6%) | | GEN | 83% | 29% (4.2%) | 22% (3.2%) | 49% (7.0%) | | ROO | 73% | 36% (4.7%) | 18% (2.4%) | 46% (6.1%) | | REP | 80% | 26% (2.5%) | 15% (1.4%) | 59% (5.5%) | Table 1 shows the human evaluation results for reformulation accuracy. The best accuracy is achieved for GEN, with 83% of the reformulations being accurate. This is because GEN does not require changing the question type like ROO. REP achieves second best accuracy. One reason for the slightly lower accuracy than GEN, is that it sometimes changes the question type (e.g. request to root), which goes beyond the REP's reformulation scope. Although according to our intrinsic evaluation strategy such cases represent inaccurate reformulation, in practice this is benign as QA systems perform very well on root factoid questions. Finally, we note that reformulations significantly shorten the input questions and result in higher type-token ratio (Appendix H). We list many examples of model input/output pairs in Appendix I. 6Live deployment latency requirements prohibit producing all possible reformulations and running through a QA system; therefore we try to determine the best single operator p offline. ## 6.2 Extrinsic Answer Rate Results Table 2 shows results for all reformulation tasks and models. OPTIMAL represents the case where for an input question at least one reformulation operator gets answered by the QA system. Pretraining yields consistent improvement in all tasks. Our large weakly supervised ⟨*q, q*′⟩ data enables learning the REP and ROO operations, leading to an answer rate improvement for SURF-ROO with 13.18% over the Baseline of 9.26% (a 3.92% absolute improvement). Figure 2 shows a breakdown of the impact of the operators by question type. Impact of Speech Errors: the REP operation, which performs correction and makes question more formal, shows a consistent answer rate improvement across all tasks and models, improving it by 9.4%. This demonstrated that for many questions speech errors and framing cause retrieval failure. 
In Figure 2 we note that REP provides a consistent improvement across all question types. This improvement is intuitive given that a core component of QA systems is their ability to understand questions before answering, hence any speech or syntactic errors negatively impact answering. Impact of Root Transformation: the ROO operation repairs and reformulates the question to its root form. It shows better performance than REP, although it may change the original question type. For SURF, the improvement of ROO over REP are with 3.77%, contrary for baseline where the improvement is only 1.16%. This further highlights the importance of the pre-training stage for SURF. Figure 2 shows that for all question types, reframing them as root questions significantly improves the answer rate. ROO is the most effective operator for polar questions, as they are particularly are hard to answer (§3). For example, "Is Sherlock Holmes a real person?" can also be answered via the alternative question "*Who is Sherlock Holmes?*". Impact of Generalization: the GEN operation repairs and generalizes the original question to be less specific. For SURF, GEN obtains 4.93% absolute improvement over REP in terms of answer rate, similar is improvement for the baseline with 5.19% (cf. Table 2). As we show in §6.3, most of the provided answers to the generalized questions are in fact relevant to the original question's intent. Impact of Joint Reshaping and Generalization: ROO+GEN achieves the best performance across all tasks. This is intuitive as questions are first corrected for possible errors, then converted into a root wh- structure, after which high specificity elements are dropped to construct a more generic question (cf. Figure 1, Q7,Q8, Q9). SURF-ROO+GEN only has an 8% gap to the OPTIMAL performance. Figure 2 shows that for all question types, ROO+GEN obtains the highest improvement in answer rate. Comparing the answer rates of ROO+GEN and OPTIMAL we make an interesting observation: although ROO+GEN combines all operators in p, its answer rate is still lower than OPTIMAL. This shows that applying all operators is not desirable for all questions. However, in practical settings, processing questions separately with all operators is not feasible due to the induced generation and QA latency. Hence, our proposed solution represents a trade-off between deployment feasibility and improvement in answer rate. ## 6.3 Answer Relevance Results It is important to consider if the provided answers to previously unanswered questions are relevant to the user's information need. Since SURF performs numerous syntactic and semantic changes, there is a risk that the reformulated questions will result in answers that are not related to the user's intent. Table 1 shows the answer relevance results for the different operators based on a human study where answers are assessed for their relevance to q. REP has the highest exact relevance with 59% (cf. Table 1), but in absolute terms as shown in Table 2 it obtains the lowest answer rate increase of 9.41%. The other operators are more complex and more likely to change the intent, the answer relevance is shifted towards *related* and *irrelevant* answers. For instance, ROO and GEN have the highest irrelevant answers, with 36% and 29%, respectively. This is intuitive given that the scope of the original question is reduced in q′, which can lead to unrelated answers. 
On the other hand, we observe that ROO+GEN has the most answers that are related to q, with 73% based on the human annotations, or 13.1% on the 1M test set (extrapolated results). It also obtains the least irrelevant answers as well as the highest answer rate, which we speculate is because the root wh- transformation and generalization reduce answering complexity and broaden recall, leading to a better pool of candidate answers for the QA system. Furthermore, the different operators are complementary (cf. Appendix G), hence, their combination achieves the best result. ## 6.4 Live Qa Deployment The SURF-ROO model7 was deployed for real-time reformulation of unanswered questions in a leading voice assistant. This live deployment enables answering for millions of previously unanswered requests. Each day we solicit explicit binary relevance feedback from a portion of customers receiving answers of SURF reformulations, with metrics exceeding or matching those reported in Table 1. ## 7 Conclusion We tackled the problem of improving spoken QA, and analyzed questions from live data to identify key challenges that could be addressed with reformulation. Based on this we proposed SURF with novel linguistically-motivated reformulation operators to solve the identified challenges. Offline experiments show the effectiveness of our novel root transformation and generalization operations, with up to 24% of unanswered questions being answered via reformulations with high answer relevance. Live deployment in a leading voice assistant has positively impacted millions of requests. 7We chose the best single operation model due to the doubled latency of ROO+GEN. We showed reformulation helps QA systems adapt to spoken user questions. We presented key insights from a deployed solution showing that performance can be significantly increased, without changing the underlying QA backends, by simply improving questions in their syntax and semantics. ## Limitations And Future Work In this work we did not consider the following aspects, which we discuss below and lay out directions for how to address them in future work. Combining Reformulation Operations: The reformulation operators, except REP, which is applied jointly with other operators, are applied sequentially, in their given order, e.g. ROO+GEN. This has two potential limitations that we aim to address in future work. First, applying multiple operators sequentially has the negative impact of increased inference latency as the SURF model needs to be applied multiple times, which can become a bottleneck for systems that process large traffic volumes. Second, by applying sequentially the reformulation operators, the likelihood of cascading errors or the model making mistakes in terms of the target reformulation shape increases. We aim to address this limitation in the future by fine-tuning the model to jointly perform multiple reformulation operators in a single pass. Large Language Models (LLM): In this work we relied on BART (Lewis et al., 2020) as our seq2seq model, and did not experiment with newer multi-billion parameter LLMs. Recently we have seen rapid progress in the space of LLMs, both in terms of model size and their capabilities to perform various tasks (Chung et al., 2022). However, we note that deploying LLMs is limited by their high inference latency, particularly in high-traffic, low-latency systems such as ours. 
Furthermore, for experimenting with API-based approaches such as ChatGPT and GPT-4, using these systems was not possible due to data confidentiality. While we will explore leveraging LLMs for this task in the future, current experimental results show that even smaller language models such as BART, with a sufficient amount of training data, can be fine-tuned to perform the task accurately. Evaluation on Public Datasets: Our evaluation focused on real-world unanswered user utterances from voice assistants. We did not use public datasets as currently available resources do not accurately represent customer behavior at scale. However, the community is aware of this divergence, and there are initial efforts in different NLP tasks to create public datasets that represent real-world user behavior. For example, in the the task of Named Entity Recognition there has been recent work on bridging the gap between academic datasets and real-world problems by creating new resources that represent contemporary challenges that are encountered in practice (Fetahu et al., 2023; Malmasi et al., 2022). In future work we will consider evaluating SURF on such datasets as they become available. Furthermore, the findings from our work may be used to create data that includes the challenges we identified as part of our analysis (either by organically collecting such data, or simulating it to generate synthetic data). Multilingual Experiments: We only considered English-language questions in this work, and it will be of interest to consider how our approach can be extended to other languages using multilingual models. The evaluation of cross-lingual transfer for this task is another open research area. ## Acknowledgments We would like to thank the following people for their valuable assistance and feedback: Danielle Class, Eugene Agichtein, Jim Fakonas, Leela Prabhu, Mohamed Nasreldin, and Zhiji Liu. ## References Paolo Boldi, Francesco Bonchi, Carlos Castillo, and Sebastiano Vigna. 2011. Query reformulation mining: models, patterns, and applications. Information retrieval, 14:257–289. Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. 2018. Ask the right questions: Active question reformulation with reinforcement learning. In *International Conference on Learning Representations*. Matthew Byrd and Shashank Srivastava. 2022. Predicting difficulty and discrimination of natural language questions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 119–130, Dublin, Ireland. Association for Computational Linguistics. Wallace Chafe and Deborah Tannen. 1987. The relation between written and spoken language. Annual review of anthropology, 16(1):383–407. Yllias Chali and Sadid A. Hasan. 2012. Simple or complex? classifying questions by answering complexity. In *Proceedings of the Workshop on Question Answering for Complex Domains*, pages 1–10, Mumbai, India. The COLING 2012 Organizing Committee. Zhiyu Chen, Jie Zhao, Anjie Fang, Besnik Fetahu, Oleg Rokhlenko, and Shervin Malmasi. 2022. Reinforced question rewriting for conversational question answering. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 357–370, Abu Dhabi, UAE. Association for Computational Linguistics. Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, and Michael Collins. 2021. Decontextualization: Making sentences stand-alone. 
Transactions of the Association for Computational Linguistics, 9:447–461. Zewei Chu, Mingda Chen, Jing Chen, Miaosen Wang, Kevin Gimpel, Manaal Faruqui, and Xiance Si. 2020. How to ask better questions? a large-scale multidomain dataset for rewriting ill-formed questions. In Proc. of AAAI. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2924–2936. Association for Computational Linguistics. Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Low cost portability for statistical machine translation based on n-gram frequency and TF-IDF. In Proceedings of the Second International Workshop on Spoken Language Translation, Pittsburgh, Pennsylvania, USA. Majda Ennaciri. 2022. Searching for explanation of difficult scientific terms. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open Question Answering Over Curated and Extracted Knowledge Bases. In KDD. Besnik Fetahu, Sudipta Kar, Zhiyu Chen, Oleg Rokhlenko, and Shervin Malmasi. 2023. Semeval2023 task 2: Fine-grained multilingual named entity recognition (multiconer 2). *CoRR*, abs/2305.06586. Wee Chung Gan and Hwee Tou Ng. 2019. Improving the robustness of question answering systems to question paraphrasing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6065–6075, Florence, Italy. Association for Computational Linguistics. Aditya Gupta, Jiacheng Xu, Shyam Upadhyay, Diyi Yang, and Manaal Faruqui. 2021. Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 3309–3319. Yunlong He, Jiliang Tang, Hua Ouyang, Changsung Kang, Dawei Yin, and Yi Chang. 2016. Learning to rewrite queries. In *Proceedings of the 25th ACM International on Conference on Information and Knowledge Management*, pages 1443–1452. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. *arXiv preprint arXiv:1909.05858*. Najoung Kim, Ellie Pavlick, Burcu Karagol Ayan, and Deepak Ramachandran. 2021. Which linguist invented the lightbulb? presupposition verification for question-answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3932–3945, Online. Association for Computational Linguistics. Wei-Jen Ko, Te-yuan Chen, Yiyan Huang, Greg Durrett, and Junyi Jessy Li. 2020. Inquisitive question generation for high level text comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6544–6555, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. 
BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, and Philip S. Yu. 2019. Generative question refinement with deep reinforcement learning in retrieval-based qa system. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, CIKM '19, page 1643–1652, New York, NY, USA. Association for Computing Machinery. Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022. Multiconer: A largescale multilingual dataset for complex named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 3798–3809. International Committee on Computational Linguistics. Louis Martin, Éric de la Clergerie, Benoît Sagot, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4689– 4698, Marseille, France. European Language Resources Association. Sabrina J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What kind of language is hard to language-model? In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975–4989, Florence, Italy. Association for Computational Linguistics. Sara Moradlou, Xiaobei Zheng, TIAN Ye, and Jonathan Ginzburg. 2021. Wh-questions are understood before polar-questions: Evidence from english, german, and chinese. *Journal of Child Language*, 48(1):157–183. Amihai Motro. 1984. Query generalization: A method for interpreting null answers. In Expert Database Workshop, pages 597–616. Islam Nassar, Michelle Ananda-Rajah, and Gholamreza Haffari. 2019. Neural versus non-neural text simplification: A case study. In *Proceedings of the* The 17th Annual Workshop of the Australasian Language Technology Association, pages 172–177, Sydney, Australia. Australasian Language Technology Association. Jeffrey Pomerantz. 2005. A linguistic analysis of question taxonomies. *Journal of the American Society for* Information Science and Technology, 56(7):715–728. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Stefan Riezler and Yi Liu. 2010. Query rewriting using monolingual statistical machine translation. *Computational Linguistics*, 36(3):569–582. Elizabeth Salesky, Matthias Sperber, and Alex Waibel. 2019. Fluent Translations from Disfluent Speech in End-to-End Speech Translation. In Proceedings of NAACL, pages 2786–2792. Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified textto-text transfer transformer. In Proceedings of the 14th International Conference on Natural Language Generation, pages 341–352, Aberdeen, Scotland, UK. Association for Computational Linguistics. Elizabeth Shriberg. 2005. Spontaneous speech: How people really talk and why engineers should care. In Ninth European Conference on Speech Communication and Technology. Noriko Tomuro. 2003. Interrogative reformulation patterns and acquisition of question paraphrases. 
In Proceedings of the Second International Workshop on Paraphrasing, pages 33–40, Sapporo, Japan. Association for Computational Linguistics. Wayne Ward. 1989. Understanding spontaneous speech. In Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics. Hai Ye, Hwee Tou Ng, and Wenjuan Han. 2022. On the Robustness of Question Rewriting Systems to Questions of Varying Hardness. In *Proceedings of the* 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2100–2113. ## Appendix A Spoken Qa Failure Analysis We analyzed data to understand prevalent challenges in spoken QA failure. Unlike prior work, which uses Machine Reading Comprehension (MRC) datasets like SQuAD, we leverage real questions from a leading voice assistant.8 We performed a quantitative analysis, taking two large random samples of answered and unanswered user queries, totalling 10 million unique questions. For all questions, we compute several sentence-level variables (length, type-token ratio, TF-IDF) which are predictive of language complexity (Mielke et al., 2019; Byrd and Srivastava, 2022; Ennaciri, 2022), and measure their correlation with whether the question was answered. We also hypothesize that the question's linguistic shape is important (see §2). Following prior work (Pomerantz, 2005), we define a syntactic question typology (Table 3) and develop an accurate type classification heuristic (c.f. Appendix B for details). Table 4 shows the results. For confidentiality, we only report correlations and relative differences between the answered and unanswered groups, whose sizes cannot be disclosed. We note that longer questions and those with higher specificity (i.e., IDF) are more likely to be unanswered. Higher TTR (i.e., fewer repeated tokens) results in higher answer rates, likely because repetition is associated 8We do not consider ASR challenges in this work, and only deal with text transcripts. Table 3: A list of the types in our question typology. | Root wh-question | The wh-phrase is clause-initial. ("Who is the US president?". "How large is an elephant?") | |--------------------|----------------------------------------------------------------------------------------------| | Polar (Yes-No) | Asks if a statement is true. (e.g. "Is it going to rain tomorrow?", "Can cats eat onions?") | | Open | Open-ended how questions. (e.g. "How does depression affect the body?") | | Request | Direct request beginning with a verb. (e.g. "Tell me the capital of Utah.") | | Other | Any other utterance. (e.g. "watermelon health benefits", "sports softball in Denver") | Table 4: Top: Pearson correlation between question characteristics and if it was answered (all p < 0.001). Bottom: Distributional differences in question types between the unanswered and answered questions. 
| Variable | Pearson Correlation (r) | |------------------------|---------------------------| | Token Length | −0.25 | | Char Length | −0.24 | | Type-Token Ratio (TTR) | +0.12 | | Mean of IDF scores | −0.13 | | Sum of IDF scores | −0.30 | | Sum of TF-IDF scores | −0.31 | | Mean of TF-IDF scores | −0.12 | | Question Type | Difference w/ Answered | | Root | −10.4% | | Polar | +8.5% | | Open | +2.4% | | Request | −2.1% | | Other | +1.6% | with disfluencies. Question type also has a big impact on answerability. Simple root wh-questions are less prevalent in the answered subset, while polar questions are much more frequent in the unanswered subset. ## B Question Type Classification We develop a rule-based algorithm to classify a question into a predefined type (cf. Table 3). Algorithm 1 shows our heuristic to determine the question type. The algorithm is a rule-based and applied in cascade, until there is a match between question and type. The evaluation order is the same as listed in Table 3, from top to bottom. - **Root**: a question which starts with a *wh-** or some specific *how-* bigrams. - **Polar**: A yes or no question starting with predefined keywords. - **Open**: Start with how, but not a root question. - **Request**: They are a command to a QA system and start with a verb.9 - **Other**: If anything else, sentences are labeled as other. Below are listed some of the input variables necessary for Algorithm 1. - **wh-* or how-* bigrams**: "*what*", "*where*", "*when*", "*which*", "who", "why", "*how much*", "*how many*", "*how long*", "*how old*", "*how early*", "*how soon*", "*how wealthy*", "*how rich*", "how big", "*how small*", "*how tall*", "*how short*", "how heavy", "*how often*", "*how late*", "*how far*", "*how high*", "*how fast*", "*how quickly*", "how close"; - **polar keywords**: "do", "*does*", "did", "can", "was", "*were*", "*should*", "is ", "isn", "has", "*have*", "are", "*aren*", "*will*"; ## Algorithm 1 Heuristic For Question Types Require: sentence s if s starts with wh-* or how-* bigrams **then** type ← root else if s starts with a keyword from polar keywords list **then** type ← polar else if s starts with "how" **then** type ← open else if s starts with verb **then** type ← request else type ← other end if return type ## B.1 Heuristic Accuracy Evaluation To evaluate the accuracy of our heuristic algorithm, we randomly sampled 100 questions from each question type from the testing set and annotated whether the classified question type is correct. In total, 500 questions were annotated and the overall accuracy is 95%. The accuracy of each question type is summarized in Table 5. 9We use spaCy for POS-Tagging. | Question Type | Accuracy (%) | |-----------------|----------------| | Root | 0.98 | | Polar | 0.98 | | Open | 0.95 | | Request | 0.89 | | Other | 0.95 | | Average | 0.95 | Table 5: Question classification heuristic accuracy (based on human assessment), for each question type. | Original Question Type | ROO | REP | |--------------------------|---------|-----------| | Root | 0 | 544,440 | | Polar | 23,201 | 200,942 | | Open | 36,322 | 257,704 | | Request | 15,405 | 450 | | Other | 113,568 | 9,348 | | Total | 188,496 | 1,012,884 | Table 6: Distributions of question types in the weakly supervised pre-training data for the ROO and REP operators. ## C Model Implementation Details For both our approach and the baseline, we adopt BART (Lewis et al., 2020) 10 as our reformulation model F. 
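For illustration, Algorithm 1 can be written out as a short Python function. The keyword lists are the ones given above; the spaCy model name and the exact string handling are assumptions, since the paper only states that spaCy is used for POS tagging.

```python
# Sketch of the question-type heuristic in Algorithm 1 (Appendix B).
# en_core_web_sm is an assumed spaCy model choice.
import spacy

nlp = spacy.load("en_core_web_sm")

WH_WORDS = {"what", "where", "when", "which", "who", "why"}
HOW_BIGRAMS = {
    "how much", "how many", "how long", "how old", "how early", "how soon",
    "how wealthy", "how rich", "how big", "how small", "how tall", "how short",
    "how heavy", "how often", "how late", "how far", "how high", "how fast",
    "how quickly", "how close",
}
POLAR_KEYWORDS = {"do", "does", "did", "can", "was", "were", "should",
                  "is", "isn", "has", "have", "are", "aren", "will"}

def question_type(sentence: str) -> str:
    tokens = sentence.lower().strip().split()
    if not tokens:
        return "other"
    first, bigram = tokens[0], " ".join(tokens[:2])
    if first in WH_WORDS or bigram in HOW_BIGRAMS:
        return "root"                       # wh- phrase is clause-initial
    if first in POLAR_KEYWORDS:
        return "polar"                      # yes/no question
    if first == "how":
        return "open"                       # open-ended how question
    if nlp(sentence)[0].pos_ == "VERB":
        return "request"                    # direct request starting with a verb
    return "other"

print(question_type("tell me the capital of utah"))  # -> request
```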
As annotating the GEN task is not possible for all questions (as not all of them are generalizable, e.g., "*Who is Joe Biden?*"), this results in a smaller amount of training data for the GEN task. To address this, we upsampled the generalized reformulations by 5x during training so that the number of generalization samples matches other types of reformulations. We train it for up to 20 epochs with a learning rate of lr = 1e − 6 and use Adam as our optimizer, and batch size of 16. The training is halted using early stopping, if the validation loss is non-decreasing after 3 epochs. ## D Pre-Training Data To prepare the weakly-supervised data for pretraining, we first apply our question type heuristic from Appendix B to classify the original questions and reformulations in the MQR dataset.11 We then automatically derive operator task labels from those question types using the method described in §5.3. This process yields 1.2M samples. Table 6 shows 10https://huggingface.co/facebook/bart-base 11https://github.com/ZeweiChu/MQR the distribution of task labels and question types in the data. As noted earlier, data for the GEN operator cannot be reliably derived with weak supervision on this dataset. The large majority of the data contains repairs, as that is the intended purpose of the MQR dataset. Table 7 and Table 8 list some example questions from the MQR dataset with their assigned question types for the REP and ROO operators, respectively. ## E Annotation Guidelines Here we describe in detail the question reformulation annotation guidelines. First, the steps for each reformulation operator are described, then a general overview of annotation guidelines for the entire annotation process is shown. ## E.1 Instructions For Rep Rep Reformulations Must: - not contain repetitions, false starts, and self corrections. - be grammatically correct. For example, "Is Bill Pullman have a son?" → "Does Bill Pullman have a son?". - be impersonal and formal. For example, "Where can I get a booze after 11 pm?" → "Which stores sell beer after 11 pm?". - keep the original question type (e.g., root → root). ## E.2 Instructions For Roo ROO question reformulations must satisfy the following constraints: - The reformulation must be a root question as defined in Appendix B. For example, "*Is there* any easy way to make money online?" → "What is the easiest way to earn money online?". - Reformulations must retain the intent of the original question. In the above example, the question type is changed from polar to root. However, the answer to the reformulated root question can still provide an answer to the original polar question. Reformulations where the intent is changed are invalid: "Can you freeze chicken that's already been thawed?" → "*How long can chicken be* frozen for before going bad?". - The reformulation additionally should satisfy the REP constraints, with the exception of altering the question type. | Question | Source Type | Reformulation | Target Type | |-------------------------------------------------------------------|---------------|---------------------------------------------------------|---------------| | "where does spider live in?" | root | "where does a spider live?" | root | | "what is the oridgin of the word mosque?" | root | "where does the word mosque come from?" | root | | "how remember pronunciation of danish words?" | open | "how can i remember the pronunciation of danish words?" | root | | "how can we make money from youtube?" | open | "how do people earn money from youtube?" 
| root | | "does the grammar generates the words?" | polar | "does the grammar generate the words?" | polar | | "can charity claim patent on medicine?" | polar | "can charities be granted patents on medicine?" | polar | | "winners in olympic in 2000?" | other | "names of olympic winners of 2008?" | other | | "at what tempature does alcohol freeze?" | other | "at what temperture does alcohol freeze?" | other | | "find out some advantages for setting up a partnership?" | request | "give 2 advantages of a business partnership?" | request | | "name three groups of polymers and name one type of a composite?" | request | "name three common polymers?" | request | Table 7: REP examples of weakly-labeled pre-training data from the MQR dataset as labeled by our heuristics. Question Source Type Reformulation Target Type "how do you know if your local bike club is worth paying for?" open "*what benefits do bike clubs provide?*" root "*how do you forgive other people?*" open "*what's the best way to forgive people?*" root "*an example of enzyme mimic is ?*" other "*what are examples of enzymes and antibodies ?*" root "basic difference between compilers and interpreters?" other "what are the differences between compilers and interpreters?" root "*explain the ending of batman arkham city to me*" request "*what happens in the ending of batman arkham city?*" root "*unscrewing sliding window lock*" request "what tool works with this star-shaped screw with a post in the middle?" root "*do blackholes exist?*" polar "*why do black holes exist?*" root "*is there any easy way to make money online?*" polar "*what is the easiest way to earn money from online?*" root "*are there any good challenging puzzles?*" polar "*what are some good word puzzles?*" root Table 8: ROO examples of weakly-labeled pre-training data from the MQR dataset as labeled by our heuristics. ## E.3 Instructions For Gen GEN reformulations may slightly change the information that is sought in the original question to something more general. This can be done by removing parts of a question (adjuncts or other clauses), and modifying referenced entities. Note that we do not make parallel entity changes (e.g. "Los Angeles" → "*San Francisco*"), but rather perform vertical generalization (e.g., with hypernyms or holonyms "Los Angeles" → "*California*"). There are different cases to generalize a question: - The reformulation is less restricted than the original question w.r.t some entity (e.g., "What do pythons eat?" → "*What do snakes eat?*"); - The reformulation is more general than the original question regarding conditions/constraints (e.g., "*Who is the tallest person in the USA?*" → "*Who is the tallest person?*"). For any given question, multiple distinct generalizations may be possible. ## E.4 Overall Guidelines For Annotators You will be given questions and asked to generalize them or reshape them into other types. All your reformulations must be done with respect to the original question. An original question can be generalized up to 3 times. Please complete the following steps for each question: - Question Validity (prior to any reformulation): 1. Judge whether the question seeks a valid answer. A question is invalid if you are unable to understand the question's intent. Or, alternatively, you judge that the question is unanswerable. This may be the case for personal questions (e.g. "*do I have COVID?*"). If the question is invalid, remove question from the training dataset. - Perform REP reformulation: 1. 
Refer to §E.1 to make sure your reformulation adheres to all REP constraints. - Do ROO reformulation: 1. Refer to §E.2 to make sure your reformulation adheres to all constraints. 2. If it is unfeasible to make the reformulation without changing the question's intent, leave blank. 3. Do not reformulate root questions. - Do GEN reformulation: 1. Write down up to 3 generalized reformulation of the original question. If possible, try to perform different types of generalization. 2. Refer to §E.3 to make sure your reformulation adheres to all constraints. ## E.5 Sampling Strategy To sample questions for annotation, we first filter questions with fewer than 5 tokens or more than 13 tokens. Then we adopt the unseen strategy (Eck et al., 2005) using bi-grams to select questions that cover diverse topics. For each question, we collect up to 3 different generalized reformulations, given that a question can be generalized in different ways. ## F Extrinsic Evaluation Data To evaluate the performance of reformulations on our QA system, we take a representative sample of 1M unanswered questions from the real traffic as the test set, where the distribution across different question types is kept to the real traffic distribution. However, due to confidentially reasons, we cannot reveal the exact question type distribution. ## G Operator Contingency Tables A natural question is whether different operators are correlated, i.e., they lead to improved answering on the same set of questions, or if they are complementary/orthogonal by improving non-overlapping subsets of questions. To understand this relationship we performed a cross tabulation analysis by building 2x2 contingency tables comparing different operators on our test set. Each operator is represented by a binary variable indicating whether the reformulation by that operator resulted in the unanswered question becoming answered. Table 9 shows the results of this analysis for the SURF model. We observe that there is substantial degree of orthogonality between the operators, as evidenced by cases where one operator fails and the other succeeds, e.g., ROO can improve answering on 6.98% of the data where REP fails to do so. The largest correlation is between ROO and ROO+GEN, while the lowest is between ROO and REP. All operators are best complemented by ROO+GEN. The trend is inline with the results shown in Table 2, where ROO+GEN has the highest number of answered reformulated questions. ![12_image_0.png](12_image_0.png) ![12_image_1.png](12_image_1.png) ## H Analysis Of Reformulations Changes We also consider how our reformulation operators change the original questions in terms of length and type-token ratio (TTR). Previously, in Table 4 of Appendix A we showed that these question characteristics are correlated with answer rate. As a follow up, we examined how the SURF reformulations change these variables. Figure 3 shows that SURF reformulations from all operators significantly shorten the input questions, indicating that they result in simplified questions. The micro-averaged length reduction across all question types for each operator is 9.9% for REP, 4.7% for ROO, 15.5% for GEN, and 12.6% for ROO+GEN. The average length of a question reformulation by ROO increases only for open and request question types, while it decreases in all other cases. However, for open and request question types, ROO makes the question more specific (e.g., , "*explain how to play football*" is reformulated into "*what is the best way to play football?*" by ROO). 
Sometimes, ROO also makes polar questions more specific (e.g., "is aspartame safe?" → "what are the health risks of eating aspartame?").

Table 9: Operator contingency tables on the test set. Each operator is a binary flag for whether its reformulation of an unanswered question obtained an answer; cells are percentages of all test questions.

GEN vs. REP:

| | REP = 0 | REP = 1 |
|---------|---------|---------|
| GEN = 0 | 83.80% | 1.87% |
| GEN = 1 | 6.79% | 7.54% |

GEN vs. ROO+GEN:

| | ROO+GEN = 0 | ROO+GEN = 1 |
|---------|-------------|-------------|
| GEN = 0 | 77.81% | 7.85% |
| GEN = 1 | 4.27% | 10.07% |

ROO vs. REP:

| | REP = 0 | REP = 1 |
|---------|---------|---------|
| ROO = 0 | 83.61% | 3.21% |
| ROO = 1 | 6.98% | 6.20% |

ROO vs. ROO+GEN:

| | ROO+GEN = 0 | ROO+GEN = 1 |
|---------|-------------|-------------|
| ROO = 0 | 80.28% | 6.55% |
| ROO = 1 | 1.80% | 11.38% |

ROO vs. GEN:

| | GEN = 0 | GEN = 1 |
|---------|---------|---------|
| ROO = 0 | 80.27% | 6.55% |
| ROO = 1 | 5.39% | 7.79% |

REP vs. ROO+GEN:

| | ROO+GEN = 0 | ROO+GEN = 1 |
|---------|-------------|-------------|
| REP = 0 | 79.25% | 11.34% |
| REP = 1 | 2.83% | 6.58% |

Regarding TTR, from Figure 4 we observe a consistent TTR increase for all reformulation types. This is consistent with the findings from Table 4, where it was shown that TTR is positively correlated with answer rate. The increase of TTR means the reformulated questions are less likely to have repeated tokens.

## I Question Reformulation Examples

We show synthetic examples of question reformulations output by our model in Table 10.
| | ROO | "again show me the recipe for pan seared steak" | "what are some recipes for pan seared steak" | | GEN | "how long can fleas live without host" | "how long can fleas live" | | GEN | "in 1901 who won the first nobel prize for physics" | "who won the first nobel prize for physics" | | GEN | "are large feet assigned beauty some cultures" | "are large feet assigned beauty" | | GEN | "did kamala harris ever move to canada than back to america" | "did kamala harris move to canada" | | GEN | "how many playable characters are in lego star wars the skywalker saga" | "how many playable characters are in lego star wars" | | GEN | "tell me what is the total number of people who ever lived on earth" | "how many people have ever lived on earth" | | GEN | "price of a wheel for a two thousand five volvo s eighty" | "what is the price of a wheel for a car" | | GEN | "is it good to put a watermelon that is not been peeled in the ice box" | "is it good to put watermelon in the ice box" | | GEN | "how is it another way to defeat hurricane" | "how to defeat a hurricane" | | ROO+GEN | "in an air fryer at what temperatures do i heat fully cooked bratwurst | "how long should bratwurst be cooked" | | and for how long" | | | | ROO+GEN | "in 1901 who won the first nobel prize for physics" | "who won the first nobel prize" | | ROO+GEN | "what is the average salary of a taxi driver from san francisco" | "how much does a taxi driver earn in california" | | ROO+GEN | "in feet how long was the largest shark ever on the record" | "what is the longest shark" | | ROO+GEN | "who is the actor's name who plays eleven in stranger things" | "who is the actor that plays eleven in stranger things" | | ROO+GEN | "what is a apple or lemon that starts with letter f." | "what is a fruit that start with f " | | ROO+GEN | "does hair grow faster when it is warm or cold outside" | "how fast does hair grow" | | ROO+GEN | "can you tell us some facts about patrick mahomes achievements" | "what are facts about patrick mahomes" | | ROO+GEN | "if i workout for about ten minutes everyday how many days will it take | "how long does it take to lose weight?" | | for me to lose a pound" | | | | Table 10: Synthetic examples of reformulated questions according to the different reformulation operators. | | |
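The length and type-token ratio (TTR) statistics discussed in Appendix H are straightforward to reproduce. Below is a minimal sketch, assuming whitespace tokenization and an in-memory list of (original, reformulation) string pairs; the pair list and helper names are illustrative assumptions rather than the exact analysis code.

```python
def ttr(text: str) -> float:
    """Type-token ratio: unique tokens divided by total tokens."""
    tokens = text.lower().split()  # assumption: whitespace tokenization
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def length_reduction(original: str, reformulation: str) -> float:
    """Relative token-length reduction of the reformulation vs. the original."""
    n_orig, n_ref = len(original.split()), len(reformulation.split())
    return (n_orig - n_ref) / n_orig if n_orig else 0.0

# Illustrative pairs in the style of Table 10 (REP operator).
pairs = [
    ("why why are the dinosaurs go away from them", "why did dinosaurs go away?"),
    ("how do tiger how do baby tigers speak", "how to communicate with baby tigers?"),
]

for orig, ref in pairs:
    print(f"len reduction: {length_reduction(orig, ref):+.1%}  "
          f"TTR: {ttr(orig):.2f} -> {ttr(ref):.2f}")
```

On these two Table 10 examples, the reformulations are both shorter and have a higher TTR, in line with the trends reported for Figures 3 and 4.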
parikh-etal-2023-exploring
Exploring Zero and Few-shot Techniques for Intent Classification
https://aclanthology.org/2023.acl-industry.71
Conversational NLU providers often need to scale to thousands of intent-classification models where new customers often face the cold-start problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches with this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using descriptions with large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient fine-tuning using the T-few recipe on Flan-T5 yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions is very competitive.
# Exploring Zero And Few-Shot Techniques For Intent Classification Soham Parikh, Prashil Tumbade, **Quaizar Vohra**, and **Mitul Tiwari** ## Servicenow, Inc. {soham.parikh,prashil.tumbade,quaizar.vohra,mitul.tiwari}@servicenow.com ## Abstract Conversational NLU providers often need to scale to thousands of intent-classification models where new customers often face the coldstart problem. Scaling to so many customers puts a constraint on storage space as well. In this paper, we explore four different zero and few-shot intent classification approaches with this low-resource constraint: 1) domain adaptation, 2) data augmentation, 3) zero-shot intent classification using descriptions large language models (LLMs), and 4) parameter-efficient fine-tuning of instruction-finetuned language models. Our results show that all these approaches are effective to different degrees in low-resource settings. Parameter-efficient finetuning using T-few recipe (Liu et al., 2022) on Flan-T5 (Chung et al., 2022) yields the best performance even with just one sample per intent. We also show that the zero-shot method of prompting LLMs using intent descriptions is also very competitive. ## 1 Introduction Intent classification is the primary natural language understanding task for a virtual agent or a chatbot. Providing intent-utterances for training intent classification models is a laborious process. In this paper, we address this problem by exploring zero and few-shot intent identification using Large Language Models (LLMs) as well as instruction finetuned models. Zero-shot and few-shot intent prediction completely remove or substantially reduce the work to provide intent-utterances, respectively. We demonstrate that the following four approaches work well in practice for zero/few-shot intent classification. - **Domain adaptation** We use a sentence encoder that is pre-trained with our domain knowledge and show that it performs well in a few-shot setting compared to off-the-shelf sentence encoders. - **Data Augmentation** By supplementing human-curated training data with LLMgenerated data to improve training data. - **Zero-shot intent classification** High capacity LLMs can be prompted creatively with intent descriptions to do zero-shot classification. - **Parameter-efficient fine-tuning (PEFT)** Finetuning a small number of parameters added to instruction finetuned LMs using only a few examples Here is the outline of the rest of the paper. In Section 2 we describe the related work. In Section 3 we detail the datasets used. In Section 4 we describe the four approaches covered in this work for zero/few-shot intent classification. Finally, we conclude with observations in Sections 5 and 6. ## 2 Related Work Recent work has successfully used domain adaptation and contrastive learning for few-shot intent classification. One approach is to use embeddings from a BERT model (Devlin et al., 2019) pretrained on domain data to search for utterances belonging to new intents in the domain (Yu et al., 2021). In a similar vein, (Zhang et al., 2021) finetune a BERT model on few-shot data using contrastive learning which learns to discriminate between semantically similar sentences. Our work on domain adaptation differs from these mainly due to our setting which involves serving thousands of customers. For legal reasons, we cannot co-mingle data from these customers to pre-train a single model. 
Instead, we pre-train a sentence encoder based on an intent taxonomy and out-of-the-box intents, which consist of human generated synthetic data. In this setting, we can only train very lightweight models for each customer, e.g. a dense layer on top of a pre-trained sentence encoder. Data Augmentation is another widely used technique to solve the problem of data scarcity. Recent 744 | Dataset | Intents | Train Size | Test Size | OOS Samples in Test | |---------------|-----------|--------------|-------------|-----------------------| | MASSIVE | 60 | 11514 | 2974 | No | | OOTB-dataset* | 27 | 1363 | 3099 | No | | Benchmark01* | 9 | 270 | 300 | Yes | | Benchmark02* | 13 | 390 | 420 | Yes | | Benchmark03* | 31 | 930 | 960 | Yes | ![1_image_1.png](1_image_1.png) work on data augmentation has focused on using multiple methods to improve model performance (Chen and Yin, 2022). LLMs like GPT-3 (Brown et al., 2020) can be prompted to generate labeled training data for intent classification (Sahu et al., 2022). The quality of generated training data using LLMs is highly dependent on the prompts. In this work, we show various prompt-based approaches that generate diverse data for training and boost the performance of intent classifiers. As the usage of conversational agents grows, it is important for them to generalize to new intents. Recent work has focused on performing zero-shot intent detection on unseen intents and domains. Using knowledge from ontologies or attributes (Ferreira et al., 2015; Yazdani and Henderson, 2015) can help in detecting and generalizing to new intents. A more recent approach by (Liu et al., 2019) makes modifications to capsule networks to generalize to unseen domains. Embeddings of intent descriptions have also shown to be quite meaningful in generalizing to new intents and services (Ma et al., 2019). While these methods are effective, they all require training on an initial set of intents. Large Language Models (LLMs) like GPT-3 (Brown et al., 2020) and more recently instruction finetuned models like (Chung et al., 2022) have shown good zero-shot performance on newly seen tasks without any prior training data on those tasks. In this work, we show that these models are also effective for zero-shot intent classification using just intent descriptions. ## 3 Datasets We use public and private intent classification datasets to benchmark different approaches. For evaluation on public dataset, we use the English train and test sets from MASSIVE for intent classification. MASSIVE contains utterances directed at a physical device spanning 60 intents and 18 domains. For more details on the MASSIVE dataset (FitzGerald et al., 2022), we encourage readers to refer to their paper. We also use private benchmarking datasets internal to our company. These ![1_image_0.png](1_image_0.png) datasets contain various intents and utterances in the enterprise setting spanning 3 different domains: IT Service Management (ITSM), HR and Customer Service Management (CSM). The utterances are inspired by interactions between humans and chatbots and are typically queries from goal-oriented conversations where the user needs to resolve an issue. Additionally, some of these datasets also contain out-of-scope (OOS) utterances in their test set i.e. utterances that do not belong to any intent, in order to benchmark irrelevance detection of intent classification models. Table 1 shows statistics for different datasets used in our benchmarking. 
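For readers who want to reproduce the public part of this setup, the English MASSIVE split can be loaded from the Hugging Face Hub. The sketch below assumes the `AmazonScience/massive` dataset id, the `en-US` configuration, and its `utt`/`intent` fields; these identifiers describe the hosted copy and are assumptions rather than details stated in this paper.

```python
from collections import Counter
from datasets import load_dataset

# Assumption: MASSIVE as hosted on the Hugging Face Hub.
massive = load_dataset("AmazonScience/massive", "en-US")

train, test = massive["train"], massive["test"]
print(len(train), "train /", len(test), "test utterances")

# Inspect the intent label space of the English split.
intents = train.features["intent"].names
print(len(intents), "intents, e.g.", intents[:3])

# Most frequent intents in the training set.
counts = Counter(train["intent"])
for label_id, n in counts.most_common(3):
    print(intents[label_id], n)
```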
## 4 Methodology In this section, we describe the various methods we evaluate for zero and few-shot learning. ## 4.1 Domain Adaptation Domain and task-specific pre-training of language model (Gururangan et al., 2020) has shown to significantly improve classification accuracy in both low and high resource settings. Techniques like contrastive learning (Gao et al., 2021) (Feng et al., 2022) are effective for improving the quality of sentence encoders, specifically in low-resource settings. Inspired by these ideas, we use a sentence encoder trained on our domain-specific data along with public datasets. Starting with the LaBSE checkpoint (Feng et al., 2022) we train it further by converting intent classification, paraphrasing, etc, as sentence similarity tasks. We will refer to this model as ELMSE (enterprise language model based sentence encoder). For training intent-classification models, we freeze ELMSE weights and use its sentence embeddings as features for a trainable non-linear dense layer for classification. We compare ELMSE against other publicly available sentence encoders, namely LaBSE, Multilingual Universal Sentence | Dataset | Intent Names | Utterance | |--------------------------|-------------------------------------------------|----------------------------------| | UpdateChangeRequest | I could I update CHG1234567 | | | TroubleshootSlowComputer | My laptop is taking too long to load | | | SubmitARequest | I need a new phone | | | IdentifyScheduledChanges | What are the upcoming scheduled changes | | | CreateProblem | report new critical problem | | | OOTB - dataset* | GuestWifiAccess | How do I get in the guest wifi | | IdentifyScheduledChanges | Can you pull up the list of scheduled changes | | | MyAssignedEquipment | Show me my devices list | | | SearchKnowledgeBase | I want information on policies | | | RepositoryAccess | How can I access the shared drive | | | Benchmark01 - dataset* | EscalateITTicket | increase priority of my incident | | LocalAdminAccess | Can I get authorization as local admin on my pc | | | RSAToken | RSA login is not working | | | EmailSetup | How do I configure outlook on my device | | | BillingInvoiceIssue | I was billed twice but have no account | | | Benchmark02 - dataset* | SubmitARequest | I request a new computer | | RSAToken | I have problem with authentication code | | | CreateChangeRequest | I want to request a change | | | LocalAdminAccess | How can I login as local admin | | | Feedback | I have bad experience | | | Benchmark03 - dataset* | | | Table 2: Few samples of intents and their respective utterances from the private internal benchmarking datasets. 
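To make the few-shot setup of §4.1 concrete, here is a minimal sketch of the frozen-encoder classifier: sentence embeddings are used as fixed features and only a small non-linear dense head is trained. ELMSE is an internal model, so the sketch substitutes the public LaBSE checkpoint from sentence-transformers; the model name, the K-shot sampling helper, and the head size are illustrative assumptions rather than the production configuration.

```python
import random
from collections import defaultdict

from sentence_transformers import SentenceTransformer
from sklearn.neural_network import MLPClassifier

def sample_k_shot(utterances, labels, k, seed=0):
    """Randomly keep K utterances per intent to simulate the few-shot setting."""
    rng = random.Random(seed)
    by_intent = defaultdict(list)
    for utt, lab in zip(utterances, labels):
        by_intent[lab].append(utt)
    few = [(u, lab) for lab, utts in by_intent.items()
           for u in rng.sample(utts, min(k, len(utts)))]
    return [u for u, _ in few], [lab for _, lab in few]

def train_few_shot_classifier(train_utts, train_labels, k=5):
    """Frozen sentence-encoder features + trainable non-linear dense head."""
    # Public stand-in for the internal ELMSE encoder; the encoder stays frozen
    # because only its precomputed embeddings are used as features.
    encoder = SentenceTransformer("sentence-transformers/LaBSE")
    utts, labels = sample_k_shot(train_utts, train_labels, k)
    features = encoder.encode(utts)
    head = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
    head.fit(features, labels)
    return encoder, head

# Usage (with utterance/label lists, e.g. from the MASSIVE split loaded above):
#   encoder, head = train_few_shot_classifier(train_utts, train_labels, k=5)
#   predictions = head.predict(encoder.encode(test_utts))
```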
Encoder (MUSE) (Yang et al., 2020) and GTR-3B. ELMSE is comparable in size to LaBSE and MUSE while almost 30 times smaller than GTR-3b. We simulate the few-shot setting by randomly selecting K utterances per intent from the full datasets. We use K=3,5,8,10,15,20 as well as the full dataset. We report results on 4 datasets from Table 1. Since OOTB-dataset was used for pretraining ELMSE, we exclude it from few-shot evaluation.

| Few-shot K | Model | MASSIVE | Benchmark01 | Benchmark02 | Benchmark03 |
|------------|--------|--------------|--------------|-------------|--------------|
| 3 | LaBSE | 46 (1.7) | 59 (2.9) | 52 (2.7) | 58 (3.1) |
| 3 | MUSE3 | 53 (2.8) | 64 (3.8) | 62 (2.7) | 64 (1.3) |
| 3 | GTR-3b | **59 (1.4)** | 76 (1.4) | 70 (3.3) | **78 (2.2)** |
| 3 | ELMSE | 57 (2.3) | **77 (2.4)** | 63 (4.6) | 74 (1.7) |
| 5 | LaBSE | 58 (1.7) | 65 (3.3) | 59 (1.7) | 67 (1.8) |
| 5 | MUSE3 | 61 (0.9) | 70 (2.2) | 66 (1.4) | 70 (1.7) |
| 5 | GTR-3b | **66 (1.2)** | 78 (1.0) | 73 (1.7) | **84 (1.0)** |
| 5 | ELMSE | 63 (1.1) | **80 (1.7)** | 67 (2.6) | 79 (1.2) |

Table 3: In-scope accuracy (standard deviation) for 3-shot and 5-shot classification, averaged over 5 random seeds.

![2_image_0.png](2_image_0.png)

## 4.1.1 Results For Domain Adaptation

Table 3 reports in-scope accuracy and standard deviation averaged over 5 random seeds for 3-shot and 5-shot classification. The results demonstrate that domain adaptation is a very effective approach, with improvements of greater than 5 percent in most cases when compared with models of similar size. These results carry over as we increase the number of few-shot utterances to more than 5, as shown in Figure 1. The plots also show that the gap between ELMSE and LaBSE is much larger in a few-shot setting and reduces as K increases. Moreover, ELMSE is only 2-3% worse than GTR-3b, which is a 30 times larger model.

## 4.2 Data Augmentation

We use data augmentation to generate labeled data for training, starting with a seed set of 5 utterances per intent. In this section, we explore different ways of prompting GPT-3 and T5 (Raffel et al., 2020). For evaluating the generated utterances, we use them for training the same type of lightweight classifier as described in 4.1, using ELMSE as the sentence encoder. This section describes the different prompt-based approaches for data generation.

GPT-3 + Paraphrase: Following (Sahu et al., 2022), we ask GPT-3 to generate 20 paraphrases of utterances from the same intent taken from the seed set. To encourage diverse generations, we set high temperature and top_p values.

GPT-3 + Intent Descriptions: We describe intents in the prompt and ask GPT-3 to generate 20 utterances for a particular intent. We find that describing all intents prevents hallucinations in the generations.

Parrot T5 Paraphrasing: We use the Parrot Paraphrase approach based on T5 (Damodaran, 2021) to generate 20 diverse paraphrased utterances given the seed set. Table 4 shows a few generations from our prompt-based approaches.

## 4.2.1 Experimental Setup And Results

To evaluate the quality of generated utterances, we use them to train intent classifiers. We evaluate the performance of the augmented dataset from each approach in Section 4.2 by training an ELMSE classifier model for the intent classification task.
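As a concrete illustration of the paraphrase-style augmentation prompt, the sketch below builds a prompt in the spirit of Table 4 and calls the legacy `openai` Python client (pre-1.0 `Completion` API). The engine name, decoding values, and output parsing are assumptions made for illustration; the paper only specifies that 20 paraphrases are requested with high temperature and top_p.

```python
import openai  # assumes the pre-1.0 openai client and OPENAI_API_KEY being set

def build_paraphrase_prompt(seed_utterances, n=20):
    """Prompt GPT-3 to paraphrase seed utterances of a single intent."""
    lines = "\n".join(seed_utterances)
    return (
        "Task: Create diverse utterances by paraphrasing the following "
        f"utterances:\n{lines}\n\nCreate {n} utterances:"
    )

def generate_paraphrases(seed_utterances, n=20):
    prompt = build_paraphrase_prompt(seed_utterances, n)
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumption: any GPT-3 completion engine
        prompt=prompt,
        temperature=0.9,            # high temperature / top_p for diversity
        top_p=0.95,
        max_tokens=512,
    )
    text = response["choices"][0]["text"]
    # Keep non-empty lines, stripping any leading numbering like "1." or "-".
    return [line.lstrip("0123456789.- ").strip()
            for line in text.splitlines() if line.strip()]

# Usage: generate_paraphrases(["alarm for ten am", "please set alarm for today"])
```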
We evaluate on 4 datasets and compared against ELMSE few-shot baseline where K is set to 5. We report the in-scope accuracy and standard deviation averaged over 3 different random seeds. Table 5 shows the result for all approaches using the data augmentation. Unless mentioned explicitly, we do not add the seed set to the training mix. We find that using paraphrases from GPT-3 and Parrot T5 Paraphraser give better results compared to ELMSE Baseline even without the seed set. GPT3 Augmentations using Intent Descriptions does not perform well but when combined with ELMSE Baseline seed set gives better results. Moreover, given a good quality seed-set, we see that data augmentation using LLMs can boost the performance of intent classification in few-shot setting. ## 4.3 Prompting Zero-Shot Prediction ![3_image_0.png](3_image_0.png) We use intent names and descriptions for prompting language models to perform zero-shot prediction. The intent descriptions are prefaced with instructions to predict the correct intent and the test utterance is specified at the end. The output is expected to be the correct intent label. Figure 4.3 shows an example prompt from the MASSIVE dataset and the output from LLMs. For evaluation, we check for the presence of intent names in the LLM completion text as opposed to an exact match and report the first intent predicted in the completion. This is done to account for hallucinations. If no intent names are present in the completion text, we mark it as an "out-of-scope prediction". We create intent descriptions for 4 datasets: 3 internal benchmarking datasets and the open-source MASSIVE dataset. We benchmark 2 language models using this type of prompt: GPT-3 (175B parameters) and Flan-T5-XXL (11B parameters), an instruction fine-tuned model. Filtering Intents for LLMs Many conversational agents have a lot of intents, sometimes more than 50. As the number of intents increases, the prompt size increases which incurs higher latency and cost (in case of token-based pricing models). To restrict the length of the prompt, we use sentence similarity to retrieve the top-5 intents and only use those 5 intents in the language model prompt. Using sentence similarity needs a few training examples which makes this a few-shot approach. With just 5 examples per intent, we get more than 0.85 recall for Top-5. To restrict the size of the prompt, we do not pass the training examples to the LLMs. Setup For GPT-3, we set the temperature to 0 and max tokens for generation to 20. 
We use the | Approach | Prompt | Generated Text | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Task: Create diverse utterances by paraphrasing the following utterances: schedule alarm to wake me up after 3 hours alarm for ten am wake me up on friday at five in the morning i need to catch the train alarm me at eight am please set alarm for today Create 20 utterances: | Set an alarm for 10 o'clock. Wake me up on Friday at 5am so I can make the train Set a timer to wake me up in three hours | | | GPT3 Paraphrase Aug. | A virtual assistant serves multiple intents. Below are the description of the intents: alarm_set: user wants to set an alarm iot_cleaning: user wants to do some cleaning . . . play_podcasts: user wants to play a podcast or rewind/repeat a particular episode in a podcast Generate 20 utterances for alarm_set intent: | Can you set an alarm for next week? I need to set an alarm for a specific time I want to set an alarm for a certain day . . . I'd like to set an alarm for a certain hour | | GPT3 Aug. Using Intent Descr. | | | Table 4: Example prompts used in generating text for the corresponding approaches | Approach | MASSIVE | Benchmark01 | Benchmark02 | Benchmark03 | |-----------------------------------------|-----------|---------------|---------------|---------------| | ELMSE Baseline | 63 (1.1) | 80 (1.7) | 67 (2.6) | 79 (1.2) | | GPT-3 w/ Paraphrase Aug. | 63 (0.5) | 84 (0.4) | 71 (0.3) | 81 (0.5) | | GPT-3 w/ Intent Descriptions | 51 (0.5) | 76 (0.4) | 69 (0.5) | 76 (0.2) | | Parrot T5 | 58 (0.4) | 81 (0.2) | 73 (0.4) | 81 (0.4) | | Seed Set + GPT-3 w/ Intent Descriptions | 63 (0.8) | 84 (0.4) | 71 (0.3) | 78 (0.9) | | Seed Set + Parrot T5 | 63 (0.6) | 79 (0.4) | 68 (2.2) | 76 (0.6) | Table 5: Results for Data Augmentation on 3 internal datasets along with MASSIVE comparing the performance on multiple prompt-based approaches. 
We report the average in-scope accuracy and standard deviation averaged over 3 different random seeds | Dataset | LLM Intents | Model | In-Scope Accuracy | Out-of-scope Recall | |---------------------------|---------------|-------------|---------------------|-----------------------| | 5 | Flan-T5-XXL | 68.6 | - | | | GPT-3 | 69.2 | - | | | | 60 | Flan-T5-XXL | 73.3 | - | | | GPT-3 | 73.9 | - | | | | MASSIVE (60 intents) | 5 | Flan-T5-XXL | 83.7 | - | | GPT-3 | 83.4 | - | | | | 27 | Flan-T5-XXL | 86.3 | - | | | GPT-3 | 84.9 | - | | | | OOTB-dataset (27 intents) | 5 | Flan-T5-XXL | 86.5 | 0.43 | | GPT-3 | 84.6 | 0.97 | | | | 9 | Flan-T5-XXL | 86.5 | 0.48 | | | GPT-3 | 89.3 | 0.67 | | | | Benchmark01 (9 intents) | 5 | Flan-T5-XXL | 69.7 | 0.65 | | GPT-3 | 60.6 | 0.87 | | | | 13 | Flan-T5-XXL | 69 | 0.7 | | | GPT-3 | 61.3 | 0.67 | | | | Benchmark02 (13 intents) | | | | | default setting generation settings for the Flan-T5- XXL model and do not restrict the number of tokens to be generated. The results with filtering are averaged over 3 runs using different random seeds for sampling the 5 samples per intent. Results Table 6 reports the accuracy for in-scope intents and the recall for out-of-scope samples where applicable (samples that do not belong to any of the intents in the dataset). We find that prompting language models with intent descriptions for zero-shot intent classification performs better than few-shot learning using a classifier (Tables 3 and 5). Since this only needs intent descriptions, this approach can generalize to new intents as well. Using the same prompt, Flan-T5-XXL is competitive with GPT-3 in terms of in-scope accuracy and is often better when presented a smaller number of intents in the prompt. While the in-scope accuracy is comparable, GPT-3 clearly outperforms Flan-T5- XXL in terms of the out-of-scope recall, indicating ![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png) that it is better at detecting irrelevant samples. We attribute the strong performance of Flan-T5-XXL (even though it is 16x smaller) to the multi-task instruction finetuning on over 1800 datasets. For the 3 internal datasets, we also find that using more intents in the prompt works better only up to a certain extent but have excluded the results for the brevity of this paper. While the intent retrieval method does not give perfect Top-5 recall, it helps in keeping the prompt short and hence provides lesser chances for the language models to give a output a wrong label name. Moreover, filtering can also improve the out-of-scope recall as in the case of Benchmark02 dataset. ## 4.4 Parameter-Efficient Finetuning (Peft) Taking inspiration from the T-Few recipe (Liu et al., 2022), we add and finetune IA3 adapters from scratch in Flan-T5 models in a few-shot setting which is similar to 4.1. We pick K=1,3,5 utterances per intent. Since the Flan-T5 models are instruction fine-tuned, we use the same prompt from 4.3 and provide the intent name as the target string. For MASSIVE and OOTB-dataset, we restrict the number of intents in the prompt to 15 at training time to prevent out-of-memory exceptions. At inference time, we provide all intents in the prompt. We use all 3 loss functions (language modeling, unlikelihood and length normalized losses) and the same hyperparameters as mentioned in the T-Few paper. For more details about the T-Few recipe, we encourage readers to refer to their paper. Figure 2 compares the results of PEFT against the best results from previously described methods. 
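For reference, IA3 adapters can be added to a Flan-T5 checkpoint with the Hugging Face `peft` library; the sketch below is a minimal illustration under that assumption. It shows only the standard language-modeling loss rather than the full T-Few objective (which also uses the unlikelihood and length-normalized losses), and the adapter placement and learning rate follow common T5/T-Few conventions rather than anything specified here.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import IA3Config, get_peft_model, TaskType

base = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# IA3 adapters on the attention k/v projections and the feed-forward output
# projection (assumed placement, in the spirit of the T-Few recipe).
config = IA3Config(
    task_type=TaskType.SEQ_2_SEQ_LM,
    target_modules=["k", "v", "wo"],
    feedforward_modules=["wo"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the IA3 vectors are trainable

# One illustrative training step: the prompt carries the intent descriptions
# and the target is the gold intent name.
prompt = "Predict the intent of the utterance given the intent descriptions ..."
target = "alarm_set"
inputs = tokenizer(prompt, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-3)  # assumed T-Few-style LR
loss = model(**inputs, labels=labels).loss  # standard LM loss only
loss.backward()
optimizer.step()
```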
Flan-T5-XL (3B parameters) consistently outperforms all other methods with just 1 training example per intent. With a few more examples, FlanT5-Large (770M parameters) also outperforms all other methods except Flan-T5-XXL on the OOTB dataset. This shows that we can train significantly smaller models which are easier to deploy and also outperform LLMs like GPT-3 with just a few parameters using intent descriptions and a handful of examples. ## 5 Observations Comparing results across the 4 approaches, we notice that all 4 approaches are effective in low resource settings. We find that domain adaptation is a cheap option in terms of size of the models but it still requires 5-10 training utterances per intent for getting accuracy above 70%. Data Augmentation using paraphrasing further helps in most cases by 2-4 percentage points. However, expanding to new domains requires sentence-pairs data for training the sentence encoder which can involve days of human labeling. Zero shot classification using intent descriptions with LLMs and instruction finetuned models performs even better than domain adaptation with data augmentation and doesn't require any utterances to be configured per intent. However a good description for each intent is required. Additionally, these models can be expensive to operationalize. Inference on Flan-T5-XXL requires using A100 GPUs. GPT-3 is not open-source and based on a pricing model which can be expensive to scale to thousands of customers. Parameter efficient fine-tuning (PEFT) of instruction finetuned models like Flan-T5-XL and Flan-T5-Large offers the best performance across all methods and often by a large margin. Moreover, these models are only a fraction of the size of GPT-3 and Flan-T5-XXL and much easier to operationalize at scale with far lesser compute resources. ## 6 Conclusion In this paper, we addressed the task of zero and few-shot intent identification using Large Language Models (LLMs). We presented four approaches, namely domain adaptation, data augmentation, zero-shot prediction with prompting, and parameter-efficient fine-tuning. Our experimental results demonstrate that LLMs and larger instruction fine-tuned language models are very effective in zero-shot setting with in-context prompting. Smaller instruction finetuned models with adapters are even better when adapter-finetuned on just 1 or 3 examples per intent. We hope these results are useful for practical deployment of conversational agents in low-resource settings as well as aiding non-practitioners in building their intent classification models. In the future, we plan to extend this work by domain adapting smaller instruction finetuned models in a multi-task setting and exploring their zero-shot capabilities. ## References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, volume 33, pages 1877–1901. Curran Associates, Inc. Derek Chen and Claire Yin. 2022. Data augmentation for intent classification. *CoRR*, abs/2206.05790. 
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. CoRR, abs/2210.11416. Prithiviraj Damodaran. 2021. Parrot: Paraphrase generation for nlu. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891, Dublin, Ireland. Association for Computational Linguistics. Emmanuel Ferreira, Bassam Jabaian, and Fabrice Lefèvre. 2015. Online adaptative zero-shot learning spoken language understanding using wordembedding. In *2015 IEEE International Conference on Acoustics, Speech and Signal Processing* (ICASSP), pages 5321–5325. Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gökhan Tür, and Prem Natarajan. 2022. MASSIVE: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages. *CoRR*, abs/2204.08582. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In *Proceedings of the 2021 Conference* on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Suchin Gururangan, Ana Marasovic, Swabha ´ Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342–8360, Online. Association for Computational Linguistics. Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, and Albert Y.S. Lam. 2019. Reconstructing capsule networks for zero-shot intent classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4799–4809, Hong Kong, China. Association for Computational Linguistics. Haokun Liu, Derek Tam, Muqeeth Mohammed, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. 2022. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. In *Advances* in Neural Information Processing Systems. Yue Ma, Zengfeng Zeng, Dawei Zhu, Xuan Li, Yiying Yang, Xiaoyuan Yao, Kaijie Zhou, and Jianping Shen. 2019. An end-to-end dialogue state tracking system with machine reading comprehension and wide & deep classification. *CoRR*, abs/1912.09297. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67. Gaurav Sahu, Pau Rodriguez, Issam Laradji, Parmida Atighehchian, David Vazquez, and Dzmitry Bahdanau. 2022. Data augmentation for intent classification with off-the-shelf large language models. In Proceedings of the 4th Workshop on NLP for Conversational AI, pages 47–57, Dublin, Ireland. Association for Computational Linguistics. Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernández Ábrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2020. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 87–94. Association for Computational Linguistics. Majid Yazdani and James Henderson. 2015. A model of zero-shot learning of spoken language understanding. In *Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing*, pages 244–249, Lisbon, Portugal. Association for Computational Linguistics. Dian Yu, Luheng He, Yuan Zhang, Xinya Du, Panupong Pasupat, and Qi Li. 2021. Few-shot intent classification and slot filling with retrieved examples. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 734–749, Online. Association for Computational Linguistics. Jianguo Zhang, Trung Bui, Seunghyun Yoon, Xiang Chen, Zhiwei Liu, Congying Xia, Quan Hung Tran, Walter Chang, and Philip Yu. 2021. Few-shot intent detection via contrastive pre-training and fine-tuning. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 1906–1912, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
bhargava-etal-2023-referring
Referring to Screen Texts with Voice Assistants
https://aclanthology.org/2023.acl-industry.72
Voice assistants help users make phone calls, send messages, create events, navigate and do a lot more. However, assistants have limited capacity to understand their users' context. In this work, we aim to take a step in this direction. Our work dives into a new experience for users to refer to phone numbers, addresses, email addresses, urls, and dates on their phone screens. We focus on reference understanding, which is particularly interesting when, similar to visual grounding, there are multiple similar texts on screen. We collect a dataset and propose a lightweight general purpose model for this novel experience. Since consuming pixels directly is expensive, our system is designed to rely only on text extracted from the UI. Our model is modular, offering flexibility, better interpretability and efficient run time memory.
# Referring To Screen Texts With Voice Assistants Shruti Bhargava, Anand Dhoot, Ing-Marie Jonsson, Hoang Long Nguyen, Alkesh Patel, Hong Yu, **Vincent Renkens** Apple Inc. {shruti_bhargava, adhoot, ingmarie, romanhoangnguyen_long, alkesh.patel, hong_yu, vrenkens}@apple.com ## Abstract Voice assistants help users make phone calls, send messages, create events, navigate and do a lot more. However assistants have limited capacity to understand their users' context. In this work, we aim to take a step in this direction. Our work dives into a new experience for users to refer to phone numbers, addresses, email addresses, urls, and dates on their phone screens. Our focus lies in reference understanding, which becomes particularly interesting when multiple similar texts are present on screen, similar to visual grounding. We collect a dataset and propose a lightweight general purpose model for this novel experience. Due to the high cost of consuming pixels directly, our system is designed to rely on the extracted text from the UI. Our model is modular, thus offering flexibility, improved interpretability, and efficient runtime memory utilization. ## 1 Introduction With the advent of internet and smartphones, the world came to our fingertips. And with the emergence of voice assistants (VAs), everything became even more accessible. VAs have become pervasive in the smartphones as they offer natural means of communication to the user. They able a user to perform tasks faster with natural language instead of several taps, app switches, scrolls and typing. However, they are limited in their ability to understand the user's context. Let us look at an example. In Fig. 1, a user wants to share a number from a webpage to a friend. They might do either of the following: - memorize the number → go to messages → new message to friend → type the number from memory → send - select the number → copy → go to messages → new message to friend → paste → send One solution might be that the user can read out the number to the VA. However reading out may be cumbersome and unnatural as this is not how one would communicate with a person standing next to them. Further, it may create unwarranted ASR errors, especially for texts like URLs or emails. Our work explores how to make this simpler by enabling users to refer to screen elements in requests made to voice assistants. References make conversations more natural and succinct, thus allowing the user to say: "Send the middle number to Tim". We conduct a user study to explore how users would make requests involving screen elements. Participants are shown screenshots, each containing multiple entities of a category (eg. 3 phone numbers), and asked to type requests for a VA to act on one of them. The study reveals that a majority of users (57%) prefer to use references like "Send that office number to Tim' instead of repeating the full text. For supporting such experiences, voice assistants need to resolve the references. In this work, we focus on such reference resolution. Specifically, we consider requests referring to phone numbers, addresses, email addresses, URLs, date/time. We ![0_image_0.png](0_image_0.png) 752 choose these *actionable text* since, ∼50% of screen elements are texts (Zhang et al., 2021), and these categories are commonly acted upon. Users often call or message numbers, share contact details, navigate to addresses. To understand and evaluate the task, we collect ScreenRef, a dataset of 14k requests, with references to *actionable text* or entities. 
ScreenRef contains two collections. First, *Descriptive Data*, which is based off screens with multiple similar entities to get descriptive references like 'call the Apple Business Manager number'. This is similar to how visual grounding datasets (Kazemzadeh et al., 2014; Mao et al., 2016) focus on images with multiple objects of the same category to get challenging references. Second, *Category level Data*, includes simpler references to a category without disambiguating within a category, eg. 'call this number'. These are described in Sec. 4. With the purpose of deploying in real world, it is critical to design solutions with low latency. To this end, we design Screen Reference Resolver, or SRR, a modular attention based architecture for reference understanding. We focus on privacy, hence our model is lightweight and executable on-device. Our network re-uses existing signals available from upstream including request embedding, and text scraped from the UI. We also discuss a heuristicbased baseline (designed for quick prototyping). Overall, our main contributions are: 1. We explore a novel experience for Voice Assistant users to execute tasks on actionable text on the phone screen by using references. 2. We conduct a user study to analyse users' interactions with entities on screen. This reveals interesting insights about usage of references. 3. We design efficient data collection schemes for collecting requests with references to *actionable text* on screen and collect a dataset. 4. To understand references to entities on screen, we propose a heuristic-based baseline and a modular attention-based network, SRR. The model has a small memory footprint, low latency, can run on device, and drastically boosts performance compared to the baseline. ## 2 Related Work Grounding to UI elements. Past works have explored mapping natural language commands to UI elements for Chrome web pages (Pasupat et al., 2018), grounding executable actions for UI navigation (Li et al., 2020) and user interaction (Xu et al., 2021). These works primarily focus on navigational commands, thus target buttons, links and input boxes. Our goal is to explore screen referencing capability for common VA tasks, thus we target 'actionable text entities' like phone numbers. Hsiao et al. (2022) propose ScreenQA with questions about UI elements including text, which could also benefit from UI grounding. Wang et al. (2022); Rozanova et al. (2021) investigate LLM abilities for UI grounding. Li et al. (2021); Li and Li (2022) use vision and language transformers for the task. However, we only use the screen texts and no pixels directly. Our solution design focuses on low latency, less memory and privacy-preserved inference that can be run on device. ## Voice Assistants And Multimodal Interactions. The power of replacing multiple low-level actions by natural language has been explored for webpage designing (Kim et al., 2022), image editing (Laput et al., 2013). Users use VAs for controlling screen content, particularly the visually challenged (Vtyurina et al., 2019). Ljungholm; Luger and Sellen (2016) discuss how lack of context understanding makes VA usage unnatural. Bolt (1980) employed a point-and-speak approach for desktops. Prior works have explored tracking user gaze for multimodal interactions (Drewes et al., 2007), for digital screens (Hutchinson et al., 1989; Mardanbegi and Hansen, 2011) as well as for external, real-world objects (Mayer et al., 2020). 
In this work, we explore using language to reduce the lowlevel actions needed to interact with certain text categories on phone screens and thereby increase the context understanding of VAs. Grounding to objects and text in open scenes. A related task to ours is visual grounding (Kazemzadeh et al., 2014; Mao et al., 2016; Yu et al., 2018), resolving references to physical objects in scenes. The physical form and semantics of text is much different, resulting in different reference forms. Rong et al. (2019, 2017) look at references to text in scenes. However, a lot of their references are of the form 'the text on . . . ', thus grounding requires less knowledge of text and more of physical objects. Also, the major challenge in open scenes is text localisation and recognition, which is much simpler on phone screens. On the other hand, screens are challenging as they contain a lot more text. TextVQA, from Singh et al. (2019); ![2_image_1.png](2_image_1.png) Biten et al. (2019) could utilise grounding to text, but doesn't contain labels for this. Lastly, none of these works cater to task-oriented dialog for text on screen, which is our primary focus. ## 3 User Study To understand how users would make requests about entities on the screen, we conduct a user study on Pollfish (pol). We use 4 screens for each actionable text category (phone numbers, emails, addresses, and URLs), and each screen has 3 instances of the category (eg. 4 screens with 3 phone numbers each). A total of 300 participants are selected from across US, balanced for VA usage, gender and English as first or other language. Overall, 4800 typed requests were collected. The responses were reviewed by two researchers. Using heuristics, three common types of requests surfaced: 1. Full Text: "call 1-866-902-7144" 2. Labels: using text other than full entity text "directions to the one in Portland" 3. Ordinals: "send the third email address". The data shows a heavy preference towards the first two (Fig. 2). Intuitively, when browsing information, the eyes are often scanning for a topic of interest. For instance, "I need to call support", explaining the label based requests. Our hypothesis for the high use of full text is that they didn't want to rely on VA's ability to understand the context. Within references, using the text in or around the entity is common and the position is used sometimes. Note that our study was performed on a limited set of users and for a limited set of screens, but we uncover interesting patterns on how users might request actions on screen texts. It is important to keep in mind that speaking full texts could be cumbersome, unnatural and have speech recognition errors, especially for entities like URLs. ![2_image_0.png](2_image_0.png) ## 4 Task And Dataset Given a screen S (with OCR texts t), text entities e1*, ..., e*k and a request r, the task is to select the entity(entities) e ∈ e1*, ..., e*k being referred to in r. We collect ScreenRef, a collection of requests to Voice Assistants with references to actionable text categories on screen (phone number, address, email address, URL, date/time). Due to privacy concerns on sharing a dataset with extracted phone numbers/emails from web pages, we are unable to share the dataset but we discuss the collection protocol in detail (see samples in Fig. 4 and annotation guidelines in Appendix A). We collect full requests, not just reference phrases, since words outside of the explicit reference phrase may hint at the targeted entity. 
For instance, *call this* has the reference *this* which is ambiguous, however the request can be understood as referring to a phone number. 20 annotators are recruited for our data collections. We first started with a simple collection protocol. After extracting entities of our interest using data detectors for a list of web pages, we show a web page screen with one highlighted entity and ask graders to provide a request referring to that entity. This would get us a dataset of requests referring to screen entities and their referred entity. However, this ran into major issues. First, annotators would often miss other similar entities on the screen and ![3_image_1.png](3_image_1.png) provide requests which are ambiguous, eg. "call this number", when there is more than 1 number on the screen and only 1 of them is highlighted, thereby resulting in an incorrect sample for the resolution task. Second, there were a large number of duplicate requests (>40%). This may happen due to several screens may have one entity and thereby annotators may use simple references. Other issues included the need of screens with entities to collect any data, lack of representation of different reference types within the collected data and lack of awareness of ambiguous requests. The quality and efficiency concerns led us to develop a new protocol in the form of descriptive and category level data collections. Within descriptive collection, we use a similar screen based collection technique. However we restrict to screens with more than one instance of a category in order to collect challenging and diverse requests (similar to visual grounding datasets like RefCOCO). Alongside the target entity, we highlight all entities of that category to reduce chances of erroneous ambiguous requests. Within category level collection, we do not use screens and the focus is on unique diverse requests with simple references. This split addresses the issues described above leading to more efficient collection and better quality datasets. Descriptive Data Collection. Though we aim to support references on all apps on phone, this collection is carried out with web pages due to their varied layouts and ease of access. For a list of top visited web pages, we extract texts by UI scraping ![3_image_0.png](3_image_0.png) and get text categories using data detectors. This is similar to running object detectors in open scenes. In order to get challenging references, we only keep screens with more than one entity from a category (eg. 2 URLs). One entity on the screen is highlighted as target and users are asked to provide requests for that (Fig. 7b). Guidelines provided in A.2. For quality check, we run a verification to confirm the requests are unambiguous: three independent annotators are shown the screen and the collected request and asked which entity from the screen is the request referring to. Samples where at least 2/3 annotators agree are kept, leading to ∼6% data drop. Category-level Data Collection. This collection targets *simple references* for a category ("phone number - Call that number", "URL - Open it"). During screen mining, we observe that a lot of the screens have only one entity from a category. In these cases, users may prefer succinct simple references instead of descriptive ones. Note that these references do not use the screen layout. Hence, we design this collection independent of screens. 
This gives a simpler collection scheme that allows us to scale to new categories and/or locales more quickly, with reduced time and cost. We show a category and ask annotators to give requests, assuming that entity is on their screen. The collection is carried out on shared spreadsheets, one sheet per category (Fig. 7a) in order to avoid duplicate requests across annotators. Annotators are given automatic instant feedback by COUNT_UNIQUE to encourage variations. Through pilot annotation projects, we recognize several constraints to ensure that the uniqueness is not from spurious modifications, which are also added to the guidelines. Detailed grading guidelines provided in A.1. For verification, 3 independent annotators are shown a request and asked to mark all categories it could refer to. This also gives annotated multi-label samples i.e. requests that are category ambiguous: "take me there" could be referring to a URL or an *address*. Requests with majority agreement are kept. After the requests are collected, dummy entities, one of each scoped category, are added to each request to form a data sample. In a way, this makes the dataset more complete and challenging than real screens which may include only a subset of the entity categories. (Fig. 4). ![4_image_0.png](4_image_0.png) ## 5 Models 5.1 Heuristic-Based Baseline This is designed for quick prototyping and development without much training data. We define a set of hand-crafted rules using keywords from a subset of the training data. The rules are applied in sequence: 1. **Phrase-match.** Look for synonyms or verbs or apps in the request that indicate the target category (like 'number', 'call' indicate *phone* number, 'navigate', 'maps' indicate *address*). 2. **Location-match.** Regex match to find positional or ordinal reference in request, sort candidates by coordinates and pick the entity at the mentioned position. 3. **Label-match.** Locate the text on screen that has maximum match to the request using a set of string matching features like word overlap (after removing stopwords). Pick the entity closest to this text. 4. If none of the entities are selected above (like "Share this"), score all entities identically. ## 5.2 Screen Reference Resolver We design SRR, a modular attention-based network for resolving references (Fig. 6). Inspired by MattNet (Yu et al., 2018), the model contains 3 modules, each of which use a subset of signals from entities, use soft attention to attend to rele- | Category-level | Descriptive | | | | |----------------------|---------------|-------|------|------| | Train | Test | Train | Test | | | Total requests | 4137 | 486 | 7993 | 1082 | | Unique requests | 4123 | 486 | 6520 | 957 | | Multilabel | 934 | 126 | 0 | 0 | | Tokens per request | 7.78 | 7.95 | 7.46 | 7.65 | | Tokens per reference | 2.09 | 2.06 | 4.25 | 4.31 | | Screenshots | - | - | 336 | 42 | Table 1: Statistics for all requests in ScreenRef vant tokens of the request, and compute relevance scores for each entity with the request. We focus on two key dimensions crucial for deploying in an industrial setup- first, memory footprint; second, reusing the existing components in the pipeline. We re-use the request token embeddings generated by the upstream embedder (like Bert (Devlin et al., 2018)) and the text categories recognized by upstream. The embedded request passes through the *weight compute block*, an MLP followed by softmax, that predicts weights for each module. 
A request like "call the top phone number" could give high weight to location and category modules, while "call the one in Palo Alto" could give higher weight to the text and category modules. Embedded tokens also go to the *module-specific* embedder where soft attention is applied on the token embeddings to get embeddings independently for the category and location modules. For "call the top phone number", category module could attend more to 'call' and 'phone number', while location to 'top'. Modules produce scores by fusing entity features with these embeddings. Module scores are combined using the module weights to get the final score for each entity. Specifically, the final score is wcat × scat + wloc × sloc + wtext × s*text*. Let us understand the three modules. Category module. Entity categories (phone number, URL etc) are embedded using the same embedder as the request. These are pre-computed for all categories. During runtime, given an entity and a request, the embedding for the entity category is loaded and matched with the request embedding from the *module specific embedder*. Both are passed through separate MLP blocks, followed by an inner-product to compute the matching score. Location module. This takes in bounding boxes of the entity and of other entities of the same category (similar to (Yu et al., 2018)). Bounding boxes of entities [*x, y, w, h*] are normalized by K = max(Iwidth, I*height*), preserving the aspect ![5_image_0.png](5_image_0.png) ratio and featurised as: $$\left[{\frac{x}{K}},{\frac{y}{K}},{\frac{x+w}{K}},{\frac{y+h}{K}},{\frac{w*h}{K^{2}}}\right]$$ These features are concatenated and passed through an MLP. Embedding from the *module specific embedder* is passed through a separate MLP, and lastly an inner product gives the module score. Text module. We do not embed the screen texts but instead use string matching features. This choice is made for three main reasons. First, we observe in the user study users typically use the text and not synonyms of the text present on screen when making references. Second, our entities of interest, like numbers, emails, URLs make little sense to embed due to their content and presence of OOVs. Third, screens can have a large number of texts. Embedding so many texts in run time could cause compute overhead. Hence, instead of embedding, we utilize the texts by designing simple features like: is the text fully contained in the request, word overlap after removing stopwords, digit overlap. Along with matching the request to the entity text, we also match with the entity's neighboring texts (sorted by distance). All features are concatenated and passed through an MLP to get the module score. Since the Category-level data has multi-label instances (eg. 'take me there' could refer to a URL or an address), we use a threshold (a hyperparameter obtained from fine-tuning on val data as 0.7) to get final predictions. We add intermediate supervision on the module weights by annotating ∼500 samples each for ordinal references (labelled for high weight to location module), references using visible text (text module), and simple reference (category module). Modular nature offers memory efficiency, giving an option to skip running some modules which get very low weights for a request. It also provides flexibility for varied reference resolution use cases, eg. scenarios with only entity categories available. 
Note that SRR is only 1MB in size, does not need access to the DOM or view hierarchy and hence can work on any screen, in fact in any context where recognized texts are available, including documents.

## 6 Results

Experimental Setup. Data is split into train/val/test in an 80/10/10 ratio. To avoid data leakage in the Descriptive data, we split the data by screens, thus all requests for a screen are in one set. Table 1 summarizes the overall statistics for the dataset. We randomly pick a negative sample for each positive sample and use binary cross entropy loss and the Adam optimizer with an initial learning rate of $4 \times 10^{-4}$.

Metrics. We use two metrics to measure performance. First, exact match accuracy indicates whether the predicted entities, after applying the threshold, exactly match the true entities (if an additional entity crosses the threshold or one of the true entities doesn't, exact match is 0). Second, top-1 error indicates whether the entity with the highest score, regardless of threshold, is in the ground truth entities. This is useful as often only the top prediction is used downstream.

Results. We summarize the results in Table 2.

| Dataset | Model | Top-1 Err. | EM |
|----------------|----------------|------------|------|
| Category-level | Heuristic | 6.5 | 87.5 |
| | SRR | **1.1** | **89.9** |
| | Cat. Oracle | 0.0 | 100 |
| | No text Oracle | 0.0 | 100 |
| Descriptive | Heuristic | 25.0 | 74.2 |
| | SRR | **14.2** | **78.7** |
| | Cat. Oracle | 54.0 | 0.0 |
| | No text Oracle | 32.6 | 45.5 |

Table 2: Top-1 error and exact match (EM) on the Category-level and Descriptive test sets.

| Modules in SRR | | | |
|------------------|----------|------|-------------|
| Category | Location | Text | Top-1 Error |
| ✓ | ✓ | ✓ | 14.2 |
| × | ✓ | ✓ | 31.2 |
| ✓ | ✓ | × | 33.7 |
| ✓ | × | ✓ | 35.3 |
| × | ✓ | × | 49.7 |
| × | × | ✓ | 51.5 |
| ✓ | × | × | 54.9 |

Observe that the performance on Category-level data is higher than on the Descriptive data, indicating the challenging nature of the latter. SRR reduces the relative top-1 error by 43% on Descriptive and 83% on Category-level data compared to the baseline. The oracles know the true category, hence they get perfect results on Category-level data. Their low performance on Descriptive reflects the importance of all inputs, particularly screen texts. We carry out an ablation study on the model (Fig. 3). It shows that each attribute and thus each module is critical in understanding the references.

## 7 Conclusion

We explore a new user experience of executing actions on screen elements with Voice Assistants. To make interactions more natural, we explore the use of references. An important decision was what UI elements to support. We decided to use texts that are most commonly used for task-oriented dialogue, commonly present on phone screens, and easy to classify. We collected a dataset of requests and proposed solutions to understand references. This is a step towards making Voice Assistants more context aware, but there is a lot more context. We hope that our work will motivate further research towards this goal, and towards semantic visual text referencing.

## Limitations

Our work explores a dimension of context understanding by Voice Assistants, but it is only a small step. Firstly, we only consider 5 categories, while screens have a myriad of other texts and visual content. We do not include image context in our reference understanding models, but users could use it when formulating references to texts near images. Using image captions or some pixels would improve coverage. Our system leverages entities extracted by upstream components and hence is bounded by their performance. Also, our model evaluates each entity separately, while there may be benefit in considering the entire screen holistically.
## Ethics Statement This work aims at improving user experiences with voice assistants. By allowing users to refer to entities on screen, it reduces user friction and enables a smoother and more natural experience. No voice assistant usage log data was used and all requests were collected by recruited annotators. ## Acknowledgements We would like to thank Hadas, Lucia and Kyanh for their help with the data annotations, revisions and data quality check; Sachin and Dhivya for help with data planning; Melis and Junhan for support in modelling experiments; as well as Lin and Murat for general direction. ## References Pollfish. https://www.pollfish.com. Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusiñol, Ernest Valveny, C. V. Jawahar, and Dimosthenis Karatzas. 2019. Scene text visual question answering. Richard A. Bolt. 1980. "put-that-there": Voice and gesture at the graphics interface. In *Proceedings of* the 7th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '80, page 262–270, New York, NY, USA. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Heiko Drewes, Alexander De Luca, and Albrecht Schmidt. 2007. Eye-gaze interaction for mobile phones. In *Proceedings of the 4th International Conference on Mobile Technology, Applications, and Systems and the 1st International Symposium on Computer Human Interaction in Mobile Technology*, Mobility '07, page 364–371, New York, NY, USA. Association for Computing Machinery. Yu-Chung Hsiao, Fedir Zubach, Maria Wang, et al. 2022. Screenqa: Large-scale question-answer pairs over mobile app screenshots. *arXiv preprint* arXiv:2209.08199. Thomas E Hutchinson, K Preston White, Worthy N Martin, Kelly C Reichert, and Lisa A Frey. 1989. Human-computer interaction using eye-gaze input. IEEE Transactions on systems, man, and cybernetics, 19(6):1527–1534. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In *Proceedings of the 2014 conference on empirical methods in* natural language processing (EMNLP), pages 787– 798. Tae Soo Kim, DaEun Choi, Yoonseo Choi, and Juho Kim. 2022. Stylette: Styling the web with natural language. In *Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems*, CHI '22, New York, NY, USA. Association for Computing Machinery. Gierad P Laput, Mira Dontcheva, Gregg Wilensky, Walter Chang, Aseem Agarwala, Jason Linder, and Eytan Adar. 2013. Pixeltone: A multimodal interface for image editing. In *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems*, pages 2185–2194. Gang Li and Yang Li. 2022. Spotlight: Mobile ui understanding using vision-language models with a focus. arXiv preprint arXiv:2209.14927. Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge. 2020. Mapping natural language instructions to mobile UI action sequences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8198–8210, Online. Association for Computational Linguistics. Yang Li, Gang Li, Xin Zhou, Mostafa Dehghani, and Alexey Gritsenko. 2021. Vut: Versatile ui transformer for multi-modal multi-task user interface modeling. *arXiv preprint arXiv:2112.05692*. Alice Ljungholm. Voice interaction vs screen interaction when controlling your music-system. 
In CONFERENCE IN INTERACTION TECHNOLOGY AND DESIGN, page 103. Ewa Luger and Abigail Sellen. 2016. " like having a really bad pa" the gulf between user expectation and experience of conversational agents. In Proceedings of the 2016 CHI conference on human factors in computing systems, pages 5286–5297. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 11–20. Diako Mardanbegi and Dan Witzner Hansen. 2011. Mobile gaze-based screen interaction in 3d environments. In *Proceedings of the 1st conference on novel gazecontrolled applications*, pages 1–4. Sven Mayer, Gierad Laput, and Chris Harrison. 2020. Enhancing mobile voice assistants with worldgaze. In *Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems*, CHI '20, page 1–10, New York, NY, USA. Association for Computing Machinery. Panupong Pasupat, Tian-Shun Jiang, Evan Liu, Kelvin Guu, and Percy Liang. 2018. Mapping natural language commands to web elements. In *Proceedings* of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4970–4976, Brussels, Belgium. Association for Computational Linguistics. Xuejian Rong, Chucai Yi, and Yingli Tian. 2017. Unambiguous text localization and retrieval for cluttered scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5494–5502. Xuejian Rong, Chucai Yi, and Yingli Tian. 2019. Unambiguous scene text segmentation with referring expression comprehension. IEEE Transactions on Image Processing, 29:591–601. Julia Rozanova, Deborah Ferreira, Krishna Dubba, Weiwei Cheng, Dell Zhang, and Andre Freitas. 2021. Grounding natural language instructions: Can large language models capture spatial information? arXiv preprint arXiv:2109.08634. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards vqa models that can read. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 8317–8326. Alexandra Vtyurina, Adam Fourney, Meredith Ringel Morris, Leah Findlater, and Ryen W. White. 2019. Bridging screen readers and voice assistants for enhanced eyes-free web search. In *The World Wide Web* Conference, WWW '19, page 3590–3594, New York, NY, USA. Association for Computing Machinery. Bryan Wang, Gang Li, and Yang Li. 2022. Enabling conversational interaction with mobile ui using large language models. *arXiv preprint arXiv:2209.08655*. Nancy Xu, Sam Masling, Michael Du, Giovanni Campagna, Larry Heck, James Landay, and Monica Lam. 2021. Grounding open-domain instructions to automate web support tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1022–1032, Online. Association for Computational Linguistics. Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. 2018. Mattnet: Modular attention network for referring expression comprehension. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pages 1307–1315. Xiaoyi Zhang, Lilian de Greef, Amanda Swearngin, Samuel White, Kyle Murray, Lisa Yu, Qi Shan, Jeffrey Nichols, Jason Wu, Chris Fleizach, Aaron Everitt, and Jeffrey P Bigham. 2021. 
Screen recognition: Creating accessibility metadata for mobile applications from pixels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21, New York, NY, USA. Association for Computing Machinery. ## A Appendix A.1 **Annotation Guidelines For Category Level** Request Collection In this project, you will be shown an entity category (phone number, url etc). Assume you see a particular instance of that entity on your screen. You have to come up with various requests you would say to a Voice Assistant to perform action on that. The main idea is to provide varied natural ways of interacting with that entity. The request should be one which holds valid when looking at different kinds of images containing that entity. Consider you see that entity on the screen. Do not assume any other information about that entity, like what digits occur in the number or what place is the address for (note to readers - such references are the focus of the unambiguous request collection, hence skipped here). - Take me to the California address - Incorrect - Call the number ending in 99 - Incorrect - Take me to COUNTRYNAME address - Incorrect - Call COMPANYNAME number - Incorrect 1. You are encouraged to use varied request formulations with different ways of referring to the entity as well as carrying out different actions on that entity. Example - Phone number - place a call to that phone number - dial this number - add this to my contacts - remind me to call here at 5 - send this to PERSON on text message 2. Constraints to follow - (a) Enter requests only in column 1 and do not change the values in column 2 in any way. (b) The number in the second column reflects the number of unique requests so far. i. When you enter a request and the number does not increase, this means that request is already present. CHANGE the request. ii. When you enter a request and the number in column 2 increases by 1, you have successfully entered a unique request. Move to the next row. (c) Do not make minor irrelevant variations. Replace proper nouns with uppercase tags like "PERSON", "COMPANYNAME", "DAY". i. Incorrect - - send this to Mom on text message - send this to Dad on text message - send this to John on text message ii. Correct - - send this to PERSON on text message - share the number with PERSON (d) Use **only lowercase letters** in the request, apart from the proper noun replacements with all uppercase tags (PERSON, COMPANYNAME etc). Use these only in a way that one can replace them with any name without knowing the actual screen. The request should hold valid for a variety of different screens containing phone numbers. i. Incorrect - Send this to PERSONNAME on text message - first letter should be small ii. Incorrect - copy PERSONNAME's number - assumes you see PERSONNAME iii. Correct - send this to PERSONNAME on text message iv. Correct - send PERSONNAME'S number to this number - here PERSONNAME can be any person in your contacts. (e) No fullstops after the request. 1. Incorrect - call this. 2. Correct - call this (f) No trailing or leading whitespaces should be added. (g) Assume you see the entity in front of you. Target the request to ask a VA to act on the entity type mentioned. Do not just add the entity type in the request randomly. Do not assume anything more about what you see. 
Invalid requests - - tell me the number that just called me - the request is not about a phone number you are seeing - "did I just receive a call from that number" is a valid request - is PERSON's number in my missed calls - the request is not about a number you are seeing - "is this number in my missed calls" is valid - get rid of COMPANYNAME's number - You may not be seeing a company name- get rid of their number is valid (h) Use varied ways to refer to the entities. For instance, for 'phone number', You can use generic references like "this", "that" as well as phrases including "phone number", "contact number" etc. - call that - call this contact number - call them - call this number (i) You need not explicitly use the phrase mentioning the entity type always, specially if the intent conveys that. Example - Email address - draft a mail to this - draft them a mail (j) Use *varied ways of referring* to the entity - generic phrases - this/that/it/them/.... etc - specific phrases - email/email address/address/contact/... etc ## A.2 Annotation Guidelines For Unambiguous Request Collection Overview The goal of this task is to generate a variety of requests for text in a screen. The requests should be queries or requests you would make to ![9_image_0.png](9_image_0.png) ![9_image_1.png](9_image_1.png) a voice assistant, based on the text. You will be shown a screen with a green bounding box around specific text. You will need to: Write three uniquely referential requests about the marked text for a voice assistant ## A.2.1 Green Vs. Red Boxes Screens will contain green and red boxes. The green box contains the text for which you need to write the requests. The requests for the text within the green box need to uniquely identify it. Red boxes mark the texts that are similar to the text within the green box. For example, if an image has three phone numbers, the red box will capture the other two phone numbers. Do not write requests for the text within the red box. They are intended to serve as a guidance so that you don't miss them out and ensure you write uniquely referential requests for the text within the green box. ## A.2.2 Request Guidelines Imagine you are viewing that screen on your phone, and were to ask a voice assistant about that text you came across. What would you ask the voice assistant regarding the text that you could not gather just from looking at it? What additional actions or requests would you ask the voice assistant to execute in relation to the text that can be carried out on your mobile device? Keep in mind the following: - Unique: Each request will require a referring expression that uniquely identifies the detected text. - Require a voice assistant's help: Requests should not be questions that a user can answer simply by looking at the text. Example: "Does this phone number contain 007 at the end" is invalid. - Mix it up: Requests can be a mix of questions about the text or action commands to be executed on the text. - Sound natural: Come up with requests that would sound natural, coming from a user. Verbally say the request out loud to ensure it sounds natural and not too long. - Make sense for a user to request a VA: Think about whether the request would make sense for a user to request, based upon the text type/context of the screen, and what a user would usually do on a device with that information. Uniquely Referential Use references that ensure the request uniquely identifies the marked text. All 3 requests for a particular text need to be uniquely referential. 
Use varied actions and request types. Do not use the same reference across the requests. Remember that the request needs to be uniquely referential, not just with other similar texts marked in red, but also with all content within the screen. Example **errors**: - Too General: 1. Call that 2. Text it to John 3. Save that to my notes - Same references for all 3 requests: 1. Call the third number 2. Share the third number 3. Copy the third number to my notes
chen-etal-2023-generate
Generate-then-Retrieve: Intent-Aware FAQ Retrieval in Product Search
https://aclanthology.org/2023.acl-industry.73
Frequently Asked Question (FAQ) retrieval aims at retrieving question-answer pairs for a given user query. Integrating FAQ retrieval with product search can not only empower users to make more informed purchase decisions, but also enhance user retention through efficient post-purchase support. Providing FAQ content without disrupting the user's shopping experience poses challenges in deciding when and how to show FAQ results. Our proposed intent-aware FAQ retrieval consists of (1) an intent classifier that predicts whether the query is looking for an FAQ; (2) a reformulation model that rewrites a query into a natural question. Offline evaluation demonstrates that our approach improves Hit@1 by 12% on retrieving ground-truth FAQs, while reducing latency by 95% compared to baseline systems. These improvements are further validated by real user feedback, where more than 99% of users consider FAQs displayed on top of product search results to be helpful. Overall, our findings show promising directions for integrating FAQ retrieval into product search at scale.
# Generate-Then-Retrieve: Intent-Aware Faq Retrieval In Product Search Zhiyu Chen Jason Choi Besnik Fetahu Oleg Rokhlenko Shervin Malmasi Amazon.com, Inc. Seattle, WA, USA {zhiyuche,chojson,besnikf,olegro,malmasi}@amazon.com ## Abstract Customers interacting with product search engines are increasingly formulating informationseeking queries. Frequently Asked Question (FAQ) retrieval aims to retrieve common question-answer pairs for a user query with question intent. Integrating FAQ retrieval in product search can not only empower users to make more informed purchase decisions, but also enhance user retention through efficient post-purchase support. Determining when an FAQ entry can satisfy a user's information need within product search, without disrupting their shopping experience, represents an important challenge. We propose an intent-aware FAQ retrieval system consisting of (1) an intent classifier that predicts when a user's information need can be answered by an FAQ; (2) a reformulation model that rewrites a query into a natural question. Offline evaluation demonstrates that our approach improves Hit@1 by 13% on retrieving ground-truth FAQs, while reducing latency by 95% compared to baseline systems. These improvements are further validated by real user feedback, where 71% of displayed FAQs on top of product search results received explicit positive user feedback. Overall, our findings show promising directions for integrating FAQ retrieval into product search at scale. ## 1 Introduction Product search engines help users find relevant products across large product catalogues and generate sales revenue for e-commerce companies (Grover and Teng, 2001). While such engines are primarily designed to handle keyword searches for products, customer behavior has been changing with an increase in users asking information-seeking service or product related questions (Carmel et al., 2018; Gao et al., 2019). However, most product search engines are not effective at handling non-product search related queries (e.g., "return a package"). Providing correct answers to these Frequently Asked Questions ![0_image_0.png](0_image_0.png) (FAQs) (Gupta and Carvalho, 2019; Mass et al., 2020) is essential to provide a positive pre- and post-purchase experience, which can lead to improved user retention and trust. Product search and FAQ retrieval are typically powered by independent retrieval systems. This separation is often due to the challenges in combining multiple answering sources (e.g. product details and FAQs) into a holistic retrieval application (Park et al., 2015; Christmann et al., 2022). Furthermore, determining what answering source can satisfy the user's information need is challenging to perform at scale. Hence, e-commerce websites tend to isolate FAQ search functionality from product search. For example, Apple offers vertical search where users are required to navigate among different tabs (e.g., product search, support, store location).1 Such designs require users to navigate multiple links, which can lead to increased user effort and unsatisfactory shopping experiences (Siraj et al., 2020; Nain and Awasthi, 2021; Su et al., 2018). Therefore, we propose to integrate FAQ retrieval into a product search engine, so that users can search products and access FAQs seamlessly from an unified search interface. 
763 A potential solution is *aggregated search*, which refers to the task of searching and assembling information from a variety of sources and presenting them in a single unified interface (Murdock and Lalmas, 2008; Wan et al., 2008; Lu et al., 2012). The main challenge here lies in determining **when** and how information from multiple *verticals* should be presented effectively and efficiently. When to Show FAQ Results? Query intent is inherently ambiguous (Krovetz and Croft, 1992; Song et al., 2007; Sanderson, 2008). Figure 1 illustrates an example where the users can use the same query "*apple tv bluetooth*" to retrieve products, or find information about Bluetooth connectivity. In the latter case, the query is intended to express the question, "*How do I connect a Bluetooth device to* my Apple TV?". We define a query that can be answered by an FAQ entry as having *question intent*. It is important to note that a query with question intent may also have product search intent, as queries can be inherently ambiguous. Determining when a user query can be answered by either product or *FAQ search* is tightly coupled with predicting the user's information need. Due to query ambiguity, displaying answers from FAQ sources for all searches causes high false positive rates, due to the lack of question intent by the user. Our analysis (cf. §5.1) from a leading ecommerce site shows that if FAQs would be shown to all queries, 98% of the FAQs would be irrelevant to user's needs. As users mostly use product search for shopping, injecting FAQ results that are irrelevant or not needed causes significant friction in the user experience. Furthermore, performing FAQ retrieval for every query is inefficient (Tsur et al., 2016) since only a small portion of traffic has question intent (White et al., 2015).2 While we cannot disclose the intent distribution of our data for reasons of confidentiality, question intent represent a minor portion of the overall query traffic. To address the problems above, we train an intent classifier that distinguishes when a query has question intent, and thus, can be answered by an FAQ source. In mixed retrieval scenarios, this allows us to trigger FAQ retrieval and show FAQ results only for queries with question intent, causing less friction for users. In terms of efficiency, running FAQ retrieval only on question intent queries significantly improves latency. Our experiments validate 2In the case of the Bing search engine only 10% of queries were shown to have question intent. that deploying intent classifier brings a 95% latency reduction compared to baselines without the intent classifier. Lastly, we demonstrate that existing techniques such as upsampling are enough to achieve satisfactory performance in classifying question intent in imbalanced traffic. How to Show FAQ Results? When a search query has question intent, an interface that jointly displays product search and FAQ retrieval results is required. As in prior work on aggregating web search results (Diaz, 2009), we integrate the top-1 FAQ result alongside product search results, as illustrated in Figure 1, for the following two reasons. First, since product search is the core functionality of e-commerce search engines, majority of the space is dedicated to the ranked product list. If we consider additional modalities such as mobile search, space constraints are even greater. 
Displaying more FAQ results comes at the cost of reducing the number of product results, which can lead to reduced revenue (Feng et al., 2007). Second, compared to product search, where users are required to compare multiple options, question intents generally require less exploration since users already have a specific request in their mind. Given the above reasons, we need to optimize the FAQ retrieval system for **high precision** at the top ranks (i.e., Hit@1). Queries are usually short and consists of several keywords. To achieve a high precision for FAQ retrieval, we propose to rewrite a query with question intent into a more specific natural language question. This rewriting process aims to make the queries semantically and syntactically more similar to the questions found in FAQs than the original queries, inspired by previous studies of query reformulation (Zhao et al., 2011; Zheng et al., 2011; Yu et al., 2020). Our experiments validate that through query reformulation we can achieve significantly higher accuracy in retrieving ground-truth question at first rank (Hit@1) with more than 13% improvement when compared to using original queries for FAQ retrieval. Contributions We summarize our contributions in this paper as follows: - To our best knowledge, this is the first work to integrate FAQ retrieval and product search at scale. - Our proposed intent-aware FAQ retrieval approach is a practical solution that significantly improves performance compared to baseline methods. Our approach achieves a 13% higher preci- sion in the top-ranked results (Hit@1) and is 95% more efficient than baseline methods. - To evaluate our design from a user's perspective, we reviewed feedback from users who interacted with a deployed version of our system. Results showed that 71% of the rendered top-1 FAQ results (at the query level) received explicit positive customer feedback when displayed along with product search results. ## 2 Related Work FAQ retrieval. The problem of FAQ retrieval has been extensively studied. Early methods (Whitehead, 1995; Sneiders, 1999) rely on exact keyword matching in FAQs. Karan et al. (2013) propose to derive lexical features such as n-gram overlap and TF-IDF similarity from a query-FAQ pair, and use these features to train a SVM model to classify whether the query is relevant to an FAQ. Karan and Šnajder (2016) combine the scores from BM25, and a classical vector space model to rank the FAQ based on its semantic similarity to the query. More recently, deep learning methods have been applied to FAQ retrieval. Gupta and Carvalho (2019) adapt a sentence matching model (Wang et al., 2017) based on bidirectional LSTM to aggregate question-to-query and question-to-answer similarities. Building on the success of BERT in NLP tasks, Sakata et al. (2019) and Mass et al. (2020) have adopted BERT to rank answers or questions of FAQs using user queries. However, all of the experiments from early work assume that the FAQ retrieval system is deployed independently and all input queries have a question intent. In our work, we evaluate our proposed search interface in a more realistic setting by simulating search traffic queries with both product search and questions intents (with significant imbalance). We argue that this setting is more suitable for studying the benefits of aggregated search interfaces (Murdock and Lalmas, 2008) in large-scale e-commerce businesses. Keyword-to-Question Generation. 
Generating questions from search queries is an example of query rewriting that is first used in communitybased question answering websites (e.g., Yahoo! Answers, and Quora) to retrieve related questions. Zhao et al. (2011) propose a template-based method to rewrite keyword-based queries into questions by first extracting a set of query-question pairs from search engine logs. Then for each input query, the most relevant templates are retrieved to generate questions. A similar method is proposed by Zheng et al. (2011), but the difference is allowing users to refine the generated question with generated refinement keywords. Ding and Balog (2018) use a statistical model to synthesize keyword-question pairs which are then used to train a neural model (Gu et al., 2016). However, the synthesized queries are noisy and additional filtering mechanism has to be used to improve performance. Recently, Iovine et al. (2022) proposed a bidirectional keyword-question rewriting model that leverages non-parallel data through cycle training. Their experiments showed that sequence-tosequence text generation models can perform the keyword-to-question task with high accuracy, and improve retrieval results in various scenarios. Inspired by earlier keyword-to-question generation approaches, we utilize the state-of-the-art generation models to reformulate keyword queries into questions for FAQ retrieval. Experiments show that our reformulation model trained from human annotated query-question pairs significantly improves Hit@1 by 13% compared to using original query. ## 3 Method Our intent-aware FAQ retrieval approach, shown in Figure 2, consists of two main components: (1) an intent classifier that takes a user query as input and determines whether it has question intent and can be answered by an FAQ; (2) a query reformulation model which reformulates queries with question intent into a natural language question that is used for FAQ retrieval. Regardless of query intent, the product search is always performed.3 ## 3.1 Question Intent Classification Unlike prior work on FAQ retrieval, we do not assume that all input queries have question intent. Instead, we train a binary intent classifier that takes an input query and predict its intent into: (1) nonquestion intent; (2) question intent. The intent classifier corresponds to a fine-tuned RoBERTa model (Liu et al., 2019) trained for the binary classification task. To handle class imbalance, we oversample the minority class (question intent) to approximate a balanced class distribution. ![3_image_0.png](3_image_0.png) ## 3.2 Query Reformulation Once a query is classified as having question intent, the query is reformulated into a natural language question. We train a sequence-to-sequence Transformer model (Vaswani et al., 2017) that reformulates the query into a question. The natural language question is used for FAQ retrieval, which are discussed in details in §4.1. The assumption behind our method is that generated questions are syntactically closer to FAQ questions than the original keyword queries, which can bring additional improvements in FAQ retrieval. ## 3.3 Proposed Intent-Aware Faq Retrieval As illustrated in Figure 2, once a query is identified with question intent and is reformulated into a natural language question, the FAQ retrieval component takes it as input and returns the top-1 FAQ result. If a query does not contain a question intent, we do not initiate the FAQ retrieval process and only return product search results. 
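As a rough illustration (not the authors' code), the intent classifier described above could be fine-tuned with HuggingFace Transformers roughly as follows. The toy query examples, the oversampling helper, and the hyperparameter values are placeholders rather than the exact training configuration.

```python
# Sketch: fine-tune RoBERTa as a binary question-intent classifier with
# minority-class oversampling. Data and hyperparameters are illustrative.
import random
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2)

# Hypothetical labelled queries: label 1 = question intent, 0 = non-question.
raw = [("apple tv bluetooth pairing", 1), ("running shoes size 10", 0)]

# Oversample (with repetition) the minority question-intent class.
pos = [r for r in raw if r[1] == 1]
neg = [r for r in raw if r[1] == 0]
pos = pos + random.choices(pos, k=max(0, len(neg) - len(pos)))
examples = pos + neg

ds = Dataset.from_dict({"text": [q for q, _ in examples],
                        "label": [y for _, y in examples]})
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True,
                                padding="max_length", max_length=32),
            batched=True)

args = TrainingArguments(output_dir="intent_clf", num_train_epochs=8,
                         learning_rate=2e-6, per_device_train_batch_size=32)
trainer = Trainer(model=model, args=args, train_dataset=ds)
trainer.train()
```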
Our retrieval component ranks FAQ results solely based on questions, without utilizing the associated answers. The rationale behind this is that a well-reformulated question should closely match the ground-truth question of an FAQ entry, allowing a simple ranking component to accurately position the correct FAQ at the top position using only the question as input. We claim that our intentaware FAQ approach can satisfy users' information needs whether they are looking for FAQs, qualified products, or both, and therefore provides a more convenient pre-purchase and post-purchase experience. ## 4 Experimental Setup In this section, we first discuss datasets used, followed by implementation details of retrieval baselines, experimental settings and evaluation metrics. ## 4.1 Datasets Intent Classification Dataset As the majority of search queries issued to e-commerce websites are focused on product search, annotations on random samples of queries yield only a tiny fraction of queries with question intent. This is not suitable for our needs, as we require a training set that includes a balanced distribution of intents. This challenge is addressed by applying several cycles of (1) training an intent classifier; (2) generating predictions on a set of unseen queries; (3) select question-intent queries with high probability (>0.9) for the next annotation; (4) correct model predictions through human annotations. For the first cycle, when human annotations were not yet available, we utilized an existing keyword extraction algorithm (RAKE) (Rose et al., 2010) on a publicly available product question corpus (Rozen et al., 2021) to generate pairs of (question, query with question intent). For example, given a product question "How do I connect a Bluetooth device to my Apple TV" , we can extract keywords "connect Bluetooth device Apple TV" as the corresponding query with question intent. For queries with shopping intent, we randomly sampled queries from our traffic and applied a simple filter (e.g., removing queries that start with a question word). Based on these initial training samples, we prototyped our first model and improved it through subsequent human annotations. More details on intent classifier training are summarized in §4.2. Overall, our annotations resulted in 18,972 queries including 5,562 question-intent queries and 13,410 non-question queries. We allocate 50% of the data as the training set, 25% as the validation set, and the remaining 25% as the testing set. Note that since there are multiple cycles of training and annotation, the distribution of intent in this set does not reflect the distribution of the real traffic. Query Reformulation Dataset From the intent classification dataset, we randomly sample 1,500 question intent queries from the training set and 500 question intent queries from the testing set. For each query, we ask annotators to reformulate it into a natural language question. The resulting 1,500 query-question pairs form the training set is used for training reformulation models, while the remaining 500 pairs are divided evenly into validation and testing sets. FAQ Corpus For FAQ retrieval, we annotate each query in the query reformulation test set with a ground-truth FAQ, ensuring full coverage for the test set in our experiments.4 ## 4.2 Implementation Details For Our Method Intent Classifier We fine-tuned RoBERTa large model (Liu et al., 2019) on the intent classification dataset (cf. §4.1). 
Due to significant class imbalance, we upsampled (with repetition) the question-intent queries until balanced distribution of intents are satisfied (we cannot disclose the exact sampling ratio). Standard cross entropy (CE) loss is adopted. This model is trained for 8 epochs with 2e-6 learning rate and 256 batch size, which is distributed evenly on 8 NVIDIA K80 GPUs. We use Adam as our optimizer and early stopping (*patience* = 10) is used to prevent overfitting. Reformulation Model We train two reformulation models using BART-base (Lewis et al., 2019) and T5-base (Raffel et al., 2020), respectively. Both models take a user query with question intent as input, and output its reformulated question. The CE loss is adopted to train the models by maximizing the likelihood of generating the human's reformulation. This model is trained for 10 epochs with 1e-5 learning rate and a single Tesla A100 GPU. We use Adam as our optimizer, and batch size of 16. The training is halted using early stopping (*patience* = 3). All intent classifier and reformulation models are implemented using HuggingFace.5 ## 4.3 **Implementation Details For Faq Retrieval** We evaluate our approach for FAQ retrieval on the following ranking models of different complexity: BM25 (Robertson et al., **2009)** We use BM25 as an unsupervised FAQ retrieval model. All the 4We cannot disclose the performance on the full internal FAQ database. 5https://huggingface.co/ FAQ questions are indexed using Lucene.6 For a user query, we retrieve top-50 documents based on BM25 scores regardless of the query intent. It is possible that some queries with non-question intent may not return any results. SentTrans (Reimers and Gurevych, **2019)** We adopt a sentence encoder model7that is trained on Google's Natural Question corpus (Kwiatkowski et al., 2019) to compute the similarity between queries reformulations and FAQ questions. For retrieval, we rank FAQ questions based on their cosine similarity against either the query or its rewritten question. BERT (Devlin et al., **2019)** We fine-tune a pointwise ranker using BERT on our query reformulation dataset to rank a query against all FAQs. For a query, we treat the ground-truth reformulation as the positive sample and randomly sample 100 reformulations of other queries as negative samples. Finally, the FAQ questions are ranked based on the classification score w.r.t the query. Hinge loss function is applied to train the model for 10 epochs with a batch size of 25. BERT-Rerank (Dai and Callan, **2019)** Directly ranking all FAQs with BERT is computationally expensive. A more efficient approach is to rerank the top-k results of BM25 using BERT. In our experiments, we test k with 10 and 50. ## 4.4 Simulating Intent Classification With Retrieval Baselines Although the FAQ retrieval approaches discussed in §4.3 were not originally designed for intent classification, we include them as baselines to approximate false matches on real traffic. Specifically, we evaluate the probability of FAQ results shown to users who do not have the intent of seeking FAQs (reflected by precision), as well as the likelihood of FAQ results not appearing for users who are specifically searching for FAQs (reflected by recall), in the presence and absence of our intent classifier. For BM25, we consider a query to have question intent if it meets both requirements: 1. The number of returned FAQ results is more than a threshold x; 2. BM25 score of the top-1 result is larger than a threshold y. 
6https://lucene.apache.org/ 7https://www.sbert.net/docs/pretrained-models/nq-v1.html By default, we set x = 1 and y = 0, which means at least one FAQ is returned for a question intent query. To obtain optimal BM25 results, we use the validation set for fine-tuning the thresholds (x = 40, y = 5). Similarly for SentTrans, we find the optimal cosine similarity threshold (0.6) based on the validation set, and classify queries with question intent if an FAQ is retrieved above the threshold. ## 4.5 Evaluation Metrics To measure the intent classification performance, we report results on precision, recall and F1. We report mean reciprocal rank (MRR) and Hit@1 to evaluate the FAQ retrieval performance. Hit@1 is the most critical metric since only the top-1 retrieved FAQ result will be displayed to users. ## 5 Experiments And Results We study the following research questions: RQ1: How much does using the question intent classifier as a filtering step benefit the integrated FAQ retrieval pipeline? RQ2: How effective is query-to-question reformulation for FAQ retrieval? RQ3: How efficient is our intent-aware FAQ retrieval approach when integrated with a product search engine? ## 5.1 Question Intent Classification The intent classifier determines when to trigger FAQ retrieval and display the FAQ results alongside the product search results. Table 1 shows the evaluation results on intent classification. For reasons of confidentiality, we report results as relative differences with the baseline method (BM25). The details on approximating precision and recall for baselines are summarized in §4.4. Since the evaluation set is highly imbalanced, BM25 has the lowest F1 score with the majority of queries being classified as question intent queries, leading to extremely high recall and low precision. Even with optimal thresholds, BM25 obtains only an F1 score that is 48% lower than our method. Although SentTrans outperforms BM25, it still falls significantly behind our method. To answer RQ1, intent classifier improves precision by 26% and recall by 51% compared to the strongest SentTrans baseline. These results indicate that question intent classifier is required because other baselines cannot effectively distinguish queries with question intent. | Methods | Precision | Recall | F1 | |---------------------|-------------|----------|-------| | BM25 | 0.00 | 0.00 | 0.00 | | BM25 (Optimal) | +0.41 | -0.60 | +0.36 | | SentTrans (Optimal) | +0.54 | -0.54 | +0.46 | | Our Method | +0.80 | -0.03 | +0.84 | Table 1: Results of question intent classification. Scores are relative to BM25. | Retrieval Model | Input Query Type | MRR | Hit@1 | |----------------------|----------------------|-------|---------| | Original | 0.00 | 0.00 | | | BM25 | Reformulation (BART) | +0.07 | +0.10 | | Reformulation (T5) | +0.11 | +0.17 | | | Original | +0.01 | +0.01 | | | Reformulation (BART) | +0.08 | +0.12 | | | SentTrans | Reformulation (T5) | +0.13 | +0.19 | | Original | +0.02 | +0.02 | | | BERT (pointwise) | Reformulation (BART) | +0.04 | +0.06 | | Reformulation (T5) | +0.07 | +0.09 | | | Original | +0.05 | +0.07 | | | BERT-Rerank (top-10) | Reformulation (BART) | +0.10 | +0.15 | | Reformulation (T5) | +0.14 | +0.20 | | | Original | +0.08 | +0.08 | | | BERT-Rerank (top-50) | Reformulation (BART) | +0.10 | +0.13 | | Reformulation (T5) | +0.14 | +0.19 | | Table 2: Results of FAQ retrieval by different retrieval models and query types. Scores are relative to BM25 using the original queries. 
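For clarity, the Hit@1 and MRR figures reported in the tables can be computed as in the following minimal sketch; the function names and toy ranked lists are illustrative only and do not come from the paper's evaluation code.

```python
# Sketch of the two retrieval metrics used here: Hit@1 and mean reciprocal rank.

def hit_at_1(ranked_faqs, gold):
    """1 if the top-ranked FAQ is the ground-truth FAQ, else 0."""
    return int(len(ranked_faqs) > 0 and ranked_faqs[0] == gold)

def reciprocal_rank(ranked_faqs, gold):
    """1 / rank of the ground-truth FAQ, 0 if it was not retrieved."""
    for i, faq in enumerate(ranked_faqs, start=1):
        if faq == gold:
            return 1.0 / i
    return 0.0

# Aggregate over a toy evaluation set of (ranked list, ground truth) pairs.
eval_set = [
    (["faq_12", "faq_7", "faq_3"], "faq_12"),   # hit at rank 1
    (["faq_5", "faq_9", "faq_1"], "faq_9"),     # hit at rank 2
]
hit1 = sum(hit_at_1(r, g) for r, g in eval_set) / len(eval_set)
mrr = sum(reciprocal_rank(r, g) for r, g in eval_set) / len(eval_set)
print(f"Hit@1 = {hit1:.2f}, MRR = {mrr:.2f}")  # Hit@1 = 0.50, MRR = 0.75
```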
## 5.2 Faq Retrieval For each retrieval system being compared, there are three input types: (1) original query, (2) BART reformulated query, and (3) T5 reformulated query. Table 2 summarizes the FAQ retrieval results. Similar to Section 5.1, we only report numbers relative to the BM25 baseline where original queries are used (first row in Table 2). First, we observe that using the reformulated query improves the performance of all FAQ retrieval systems in terms of MRR and Hit@1, which answers RQ2. T5 query reformulations consistently show better results than BART query reformulations. Even for the most competitive BERTRerank baselines, T5 can further improve Hit@1 by more than 10%. The results indicate that our reformulation method has the advantage of improving the precision at top ranks. Although our reformulation models are not designed to generate exact questions from FAQs,8they generate questions that are sufficiently similar to the FAQs in our corpus. As a result, they significantly improve the retrieval performance. 8This is a challenging annotation task because workers must match queries against our entire FAQ corpus. Second, Table 2 shows that reformulations allow BM25 to achieve comparable results with strong BERT-Rerank baselines. Notably, BM25 with T5's reformulated queries outperforms two BERT-Rerank baselines using the original queries. Yet, when using T5's reformulations, BERT-Rerank (top-10) achieves the best MRR and Hit@1 among all methods, outperforming the original query by 13% in Hit@1, and 9% in MRR. Considering the model's complexity, the combination of BM25 and the reformulation method is a promising FAQ solution in industry settings. ## 5.3 Efficiency Comparison An important consideration in industry settings is the efficiency of the proposed solution. We assess the impact of our proposed solution in terms of computational cost. As BERT Rerank (top-10) with T5 reformulation yields the best results, we compare its inference time with and without an intent classifier. We measure the speed on a single p3.8xlarge instance,9 with a batch size of 16. Table 3 shows the inference time10 normalized by the baseline that does not use an intent classifier. For RQ3, our proposed intent-aware FAQ retrieval system that reranks top-10 results using BERT and T5 reformulations can save on average 95% of inference time, which is a significant benefit for real-world applications. We also observed that despite adding an extra layer of inference from reformulation model, the increase in latency is negligible since majority of queries are already filtered out and only a small fraction of queries are reformulated. | Pipeline | Query | Inference Time | |----------------------|----------------------|------------------| | BERT-Rerank (top 10) | Original | 1.0 | | Original | 0.0404 (↓95.96%) | | | Ours | Reformulation (BART) | 0.0446 (↓95.54%) | | Reformulation (T5) | 0.0461 (↓95.39%) | | Table 3: Inference speed comparison on full query traffic (including question intent and non-question intent queries) with and without intent classifier. The inference time is normalized based on the time taken by the method that does not utilize an intent classifier. ## 6 Online Deployment Our intent-aware FAQ retrieval solution is deployed and integrated into the product search interface of a leading global e-commerce website. The live version uses a much larger FAQ corpus than the ones used here. 
When FAQ results are displayed, we collect optional explicit user feedback on whether the answer is helpful or not, as demonstrated in Figure 1. Over one month of traffic was collected from the US marketplace, showing that 71% of the rendered FAQ results received explicit positive customer feedback. The feedback results were aggregated at the query level, allowing each query to receive multiple positive feedback responses. These findings demonstrate that our practical solution is not only effective in offline evaluation, but also helpful to real users. ## 7 Conclusion We proposed to integrate FAQ retrieval with product search to address challenges in aggregated search from an e-commerce perspective. Our approach first classifies queries with question intent, which then reformulates into natural language questions that are used to retrieve FAQs. Offline experimental results show that on the best-performed FAQ retrieval system (i.e., BERT-Rerank (top 10)), the proposed intent classifier saves a substantial amount of inference costs (95%) and also improved retrieval performance through query reformulation by 13% on Hit@1. These improvements are also reflected in online evaluation: over one month of user feedback demonstrated that about 71% of the rendered FAQ results were considered to be helpful. Overall, the findings in this work suggest promising directions for e-commerce platforms to support FAQ retrieval at scale without disrupting customer's shopping experience. ## Limitations And Future Work One limitation of our approach is that we do not display or rank multiple reformulations. It is possible that a query can be reformulated into multiple possible questions. For example, the query "apple tv bluetooth" can be reformulated into "How do I connect a Bluetooth device to my Apple TV" or "*Does apple TV support Bluetooth*". In our future work, we aim to explore the integration of multiple reformulations into the FAQ retrieval process to further enhance the overall user experience. Another limitation is that we do not train an end-to-end FAQ retrieval model. In the future, we plan to train the FAQ retrieval model using the reformulations so that the original query can directly be used. ## References David Carmel, Liane Lewin-Eytan, and Yoelle Maarek. 2018. Product question answering using customer generated content-research challenges. In *The 41st* International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1349– 1350. Philipp Christmann, Rishiraj Saha Roy, and Gerhard Weikum. 2022. Conversational question answering on heterogeneous sources. In *Proceedings of the 45th* International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 144–154. Zhuyun Dai and Jamie Callan. 2019. Deeper text understanding for ir with contextual neural language modeling. In *Proceedings of the 42nd international* ACM SIGIR conference on research and development in information retrieval, pages 985–988. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of* the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Fernando Diaz. 2009. Integration of news content into web results. 
In Proceedings of the Second ACM International Conference on Web Search and Data Mining, pages 182–191. Heng Ding and Krisztian Balog. 2018. Generating Synthetic Data for Neural Keyword-to-Question Models. In *Proceedings of the 2018 ACM SIGIR International Conference on Theory of Information Retrieval*, pages 51–58, Tianjin China. ACM. Juan Feng, Hemant K Bhargava, and David M Pennock. 2007. Implementing sponsored search in web search engines: Computational evaluation of alternative mechanisms. *INFORMS Journal on Computing*, 19(1):137–148. Shen Gao, Zhaochun Ren, Yihong Zhao, Dongyan Zhao, Dawei Yin, and Rui Yan. 2019. Product-aware answer generation in e-commerce question-answering. In *Proceedings of the Twelfth ACM International* Conference on Web Search and Data Mining, pages 429–437. Varun Grover and James TC Teng. 2001. E-commerce and the information market. *Communications of the* ACM, 44(4):79–86. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequenceto-sequence learning. In *Proceedings of the 54th Annual Meeting of the Association for Computational* Linguistics (Volume 1: Long Papers), pages 1631– 1640, Berlin, Germany. Association for Computational Linguistics. Sparsh Gupta and Vitor R Carvalho. 2019. Faq retrieval using attentive matching. In *Proceedings of the 42nd* International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 929–932. Andrea Iovine, Anjie Fang, Besnik Fetahu, Jie Zhao, Oleg Rokhlenko, and Shervin Malmasi. 2022. CycleKQR: Unsupervised bidirectional keywordquestion rewriting. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11875–11886, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Mladen Karan and Jan Šnajder. 2016. Faqir–a frequently asked questions retrieval test collection. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016, Brno, Czech Republic, September 12-16, 2016, Proceedings 19, pages 74–81. Springer. Mladen Karan, Lovro Žmak, and Jan Šnajder. 2013. Frequently asked questions retrieval for croatian based on semantic textual similarity. In *Proceedings of the* 4th Biennial International Workshop on Balto-Slavic Natural Language Processing, pages 24–33. Robert Krovetz and W Bruce Croft. 1992. Lexical ambiguity and information retrieval. *ACM Transactions* on Information Systems (TOIS), 10(2):115–141. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. *Transactions of the* Association for Computational Linguistics, 7:453– 466. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. *CoRR*, abs/1910.13461. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*. Wei Lu, Qi Wang, and Birger Larsen. 2012. Simulating aggregated interfaces. In Workshop on aggregated search, pages 24–28. Yosi Mass, Boaz Carmeli, Haggai Roitman, and David Konopnicki. 2020. Unsupervised faq retrieval with question generation and bert. 
In *Proceedings of the* 58th Annual Meeting of the Association for Computational Linguistics, pages 807–812. Vanessa Murdock and Mounia Lalmas. 2008. Workshop on aggregated search. In *ACM SIGIR Forum*, volume 42, pages 80–83. ACM New York, NY, USA. A Nain and A Awasthi. 2021. E-tourism: A study of tourist satisfaction. International Journal of Trend in Scientific Research and Development, 5(6). Seonyeong Park, Soonchoul Kwon, Byungsoo Kim, Sangdo Han, Hyosup Shim, and Gary Geunbae Lee. 2015. Question answering system using multiple information source and open type answer merge. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 111–115, Denver, Colorado. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(140):1–67. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® *in Information Retrieval*, 3(4):333–389. Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text mining: applications and theory, pages 1–20. Ohad Rozen, David Carmel, Avihai Mejer, Vitaly Mirkis, and Yftah Ziser. 2021. Answering product-questions by utilizing questions from other contextually similar products. *arXiv preprint* arXiv:2105.08956. Wataru Sakata, Tomohide Shibata, Ribeka Tanaka, and Sadao Kurohashi. 2019. Faq retrieval using queryquestion similarity and bert-based query-answer relevance. In *Proceedings of the 42nd International ACM* SIGIR Conference on Research and Development in Information Retrieval, pages 1113–1116. Mark Sanderson. 2008. Ambiguous queries: test collections need more sense. In *Proceedings of the* 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 499–506. Ahsan Siraj, J Guo, MW Kamran, Q Li, and Y Zhu. 2020. Characteristics for e-satisfaction in e-retailingevidence from chinese e-commerce. International Journal of Innovation, Creativity and Change, 12(5):375–376. Eriks Sneiders. 1999. Automated faq answering: Continued experience with shallow language understanding. In *Question Answering Systems. Papers from the* 1999 AAAI Fall Symposium, pages 97–107. Ruihua Song, Zhenxiao Luo, Ji-Rong Wen, Yong Yu, and Hsiao-Wuen Hon. 2007. Identifying ambiguous queries in web search. In Proceedings of the 16th international conference on World Wide Web, pages 1169–1170. Ning Su, Jiyin He, Yiqun Liu, Min Zhang, and Shaoping Ma. 2018. User intent, behaviour, and perceived satisfaction in product search. In *Proceedings of* the Eleventh ACM International Conference on Web Search and Data Mining, pages 547–555. Gilad Tsur, Yuval Pinter, Idan Szpektor, and David Carmel. 2016. Identifying web queries with question intent. In Proceedings of the 25th International Conference on World Wide Web, pages 783–793. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. 
Attention is all you need. *Advances in neural information processing* systems, 30. Stephen Wan, Cecile Paris, and Alexander Krumpholz. 2008. From aggravated to aggregated search: Improving utility through coherent organisations of an answer space. In *Proceedings of the SIGIR 2008* Workshop on Aggregated Search, volume 3. Citeseer. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In *Proceedings of the Twenty-Sixth* International Joint Conference on Artificial Intelligence, IJCAI-17, pages 4144–4150. Ryen W White, Matthew Richardson, and Wen-tau Yih. 2015. Questions vs. queries in informational search tasks. In *Proceedings of the 24th international conference on World Wide Web*, pages 135–136. Steven D Whitehead. 1995. Auto-faq: An experiment in cyberspace leveraging. *Computer Networks and* ISDN Systems, 28(1-2):137–146. Shi Yu, Jiahua Liu, Jingqin Yang, Chenyan Xiong, Paul Bennett, Jianfeng Gao, and Zhiyuan Liu. 2020. Fewshot generative conversational query rewriting. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 1933–1936. Shiqi Zhao, Haifeng Wang, Chao Li, Ting Liu, and Yi Guan. 2011. Automatically generating questions from queries for community-based question answering. In *Proceedings of 5th international joint conference on natural language processing*, pages 929–937. Zhicheng Zheng, Xiance Si, Edward Y. Chang, and Xiaoyan Zhu. 2011. K2Q: generating natural language questions from keywords with user refinements. In Fifth International Joint Conference on Natural Language Processing, IJCNLP 2011, Chiang Mai, Thailand, November 8-13, 2011, pages 947–955. The Association for Computer Linguistics.
jia-etal-2023-kafa
KAFA: Rethinking Image Ad Understanding with Knowledge-Augmented Feature Adaptation of Vision-Language Models
https://aclanthology.org/2023.acl-industry.74
Image ad understanding is a crucial task with wide real-world applications. Although highly challenging with the involvement of diverse atypical scenes, real-world entities, and reasoning over scene-texts, how to interpret image ads is relatively under-explored, especially in the era of foundational vision-language models (VLMs) featuring impressive generalizability and adaptability. In this paper, we perform the first empirical study of image ad understanding through the lens of pre-trained VLMs. We benchmark and reveal practical challenges in adapting these VLMs to image ad understanding. We propose a simple feature adaptation strategy to effectively fuse multimodal information for image ads and further empower it with knowledge of real-world entities. We hope our study draws more attention to image ad understanding which is broadly relevant to the advertising industry.
# Kafa: Rethinking Image Ad Understanding With Knowledge-Augmented Feature Adaptation Of Vision-Language Models Zhiwei Jia˚ UC San Diego Pradyumna Narayana Google Arjun R. Akula Google Garima Pruthi Google Hao Su UC San Diego Sugato Basu Google Varun Jampani Google ## Abstract Image ad understanding is a crucial task with wide real-world applications. Although highly challenging with the involvement of diverse atypical scenes, real-world entities, and reasoning over scene-texts, how to interpret image ads is relatively under-explored, especially in the era of foundational vision-language models (VLMs) featuring impressive generalizability and adaptability. In this paper, we perform the first empirical study of image ad understanding through the lens of pre-trained VLMs. We benchmark and reveal practical challenges in adapting these VLMs to image ad understanding. We propose a simple feature adaptation strategy to effectively fuse multimodal information for image ads and further empower it with knowledge of real-world entities. We hope our study draws more attention to image ad understanding which is broadly relevant to the advertising industry. ## 1 Introduction As advertisements play an integral role in human society, image ad understanding has many real-world applications such as ad targeting (Hussain et al., 2017), visual metaphor understanding (Abokhoza et al., 2019) and creative ad generation (Chilton et al., 2019; Akula et al., 2022). It is also highly challenging due to several reasons, as exemplified in Fig. 2. *First*, image ads consist of diverse visual elements including non-photorealistic objects and atypical scenes synthesized creatively that are beyond common academic datasets. *Secondly*, they involve knowledge of a large number of real-world entities such as brands and products where existing work (Su et al., 2018; Li et al., 2022a) struggles to cover. *Lastly*, many adopt visual or multimodal rhetorics requiring reasoning over diverse visual elements including scene-texts, and sometimes even elude humans (Petridis and ˚ Work done in part during an internship at Google. Correspondence to zjia@eng.ucsd.edu. 772 ![0_image_0.png](0_image_0.png) ![0_image_1.png](0_image_1.png) Chilton, 2019). However, image ad understanding is relatively under-explored in the machine learning community, especially in the presence of recently developed foundational vision-language models (VLMs) pre-trained using a tremendous number of image and text description data. The pre-trained VLMs are shown to have great generalization capability, contain real-world knowledge (implicitly), and can be adapted to a wide range of downstream tasks in a data-efficient way (Radford et al., 2021; Alayrac et al., 2022). It is then natural to utilize VLMs for image ad understanding. In this paper, we perform the first empirical study of adapting VLMs to the task of decoding the overall messages delivered by image ads, which is usually formulated as visual question answering (Hussain et al., 2017). Specifically, we examine three popular pre-trained VLMs that are alignmentbased and are publicly available, namely, CLIP (Radford et al., 2021), ALBEF (Li et al., 2021) and LiT (Zhai et al., 2022). We examine zero-shot performance as well as adaptation strategies and reveal the practical challenges of applying VLMs to image ads. We propose a simple feature adaptation strategy that effectively utilizes VLM features. 
We further propose to incorporate external brand knowledge (real-world entities) that brings a significant performance boost.

![1_image_0.png](1_image_0.png)

Our contributions are three-fold. **First**, we empirically find that the sheer scale of data & capacity of the model used in pretraining matters the most for the performance of image ad understanding, partly due to VLM's capability of storing real-world knowledge, which is not captured well by the commonly used metrics for comparing VLMs. **Second**, we reveal the practical challenges of adapting VLMs for image ad understanding (i.e., overfitting to the limited training data & supervision signals and the high computation burden of hard negative mining) and propose a simple solution (attention-based feature adaptation) that better leverages VLM features than previous adaptation strategies. **Lastly**, we propose to leverage external knowledge for brand understanding, which we empirically show to further enhance image ad understanding. Together with the aforementioned adaptation strategy, we call our approach knowledge-augmented feature adaptation (KAFA).

## 2 Related Work

Image Ad Understanding Learning to automatically interpret image ads was proposed by the Pitt Image Ads Dataset (Hussain et al., 2017), where each ad is annotated by a caption that answers "what should I do according to the ad and why?" Different from traditional image captioning, this task is highly non-trivial as discussed at the beginning of Sec. 1. While prior methods utilize cultural connotations via external symbolic knowledge (Ye and Kovashka, 2018), capture relations between scene-texts and objects by GNNs (Dey et al., 2021), and leverage pre-trained language models to combine multimodal information (Kalra et al., 2020), none have exploited vision-language models (VLMs) and the knowledge of real-world entities (i.e., brands). Besides the wide applications in the ad industry, later work hints that the study of image ads is relevant to much broader research topics (Singh et al., 2019; Akula et al., 2022).

Foundational Alignment-based VLMs A recent surge of collections of tremendous images paired with text descriptions (Schuhmann et al., 2022) enables alignment-based pretraining (i.e., contrastive learning) of foundational VLMs that are efficient zero-shot or low-shot learners for downstream tasks. By learning to embed images and texts into a shared semantic space, they handle domain variations in an open-vocabulary manner (which involves real-world knowledge). Among these are CLIP (Radford et al., 2021), ALIGN (Jia et al., 2021), LiT (Zhai et al., 2022) and BASIC (Pham et al., 2021). Another line of work further adopts masked language modeling, image captioning loss, and object-level alignment, e.g., ALBEF (Li et al., 2021), Florence (Yuan et al., 2021), CoCa (Yu et al., 2022) and GLIP (Li et al., 2022b).

Transfer Learning of VLMs Transfer learning of VLMs has become popular with the zero-shot performance of CLIP in image classification tasks. A direct approach is to (partially) fine-tune the VLMs with (optionally) additional neural networks tailored for downstream tasks, e.g., TAP-C (Song et al., 2022), CPT (Yao et al., 2021), KAT (Gui et al., 2021) and VL-Adapter (Sung et al., 2022). Another approach that bypasses the need of tuning the VLMs is prompt learning. For instance, CoOp (Zhou et al., 2022b) and CoCoOp (Zhou et al., 2022a) only tune learnable inputs to the VLMs.
The third approach that further reduces memory and computation burden is feature adapters, where VLM features of the inputs are pre-computed before transfer learning. Examples are CLIP-Adapter (Gao et al., 2021), SVL-Adapter (Pantazis et al., 2022) and Attention-Adapter (Zhao et al., 2022).

Knowledge-Augmented Image Understanding Many image understanding tasks require real-world knowledge beyond what can be captured by the input data. For instance, FVQA (Wang et al., 2017) and OK-VQA (Marino et al., 2019) require models to process external fact-based knowledge; TextVQA (Singh et al., 2019) asks to understand named entities in the wild; the Pitt Dataset (Hussain et al., 2017) involves recognition of large quantities of brands. Existing work incorporates external knowledge either explicitly via structured or unstructured knowledge bases (Wang et al., 2015; Gardères et al., 2020; Ye and Kovashka, 2018), or implicitly from knowledge stored in pretrained models (Kalra et al., 2020; Kim et al., 2022), or both (Marino et al., 2021; Gui et al., 2021).

## 3 What Really Matters For Pre-Trained Vlms In Image Ad Understanding?

The first insight of our empirical study is that the sheer size of the data and the model used in pretraining is the key factor determining the performance of VLMs for image ad understanding. To promote reproducibility, we evaluate three alignment-based VLMs (i.e., CLIP, ALBEF and LiT) that are publicly accessible in a zero-shot manner on the Pitt Dataset (Hussain et al., 2017), which formulates ad understanding as image-to-text retrieval. We adopt the official evaluation protocol, which asks the model to select one of the 3 correct messages conveyed by the image ad from a set of 15 candidates (including 12 wrong messages) for each of the 12805 test samples.

Specifically, given an alignment-based VLM, let us denote its encoders with normalized outputs as $f_I(\cdot)$ and $f_T(\cdot)$ for the image and text branches, respectively. Given an image $x$ and the ground truth texts $y$, the VLM retrieves $y$ from candidates $C(x)$ according to the dot-product score $f_I(x) \cdot f_T(y)$. We then measure the performance of the VLM with 3 metrics commonly used in the literature: *accuracy* (the percentage of images with any positive text retrieved with rank one), *rank* (how the top retrieved positive text is ranked, averaged over all images), and the *mean rank* (the mean rank of all positive texts averaged over all images).

| Method | Acc ↑ | Rank ↓ | m. Rank ↓ |
|--------|-------|--------|-----------|
| ViLBERT (Lu et al., 2019) | 61.8 | 1.860 | 4.190 |
| VS (v1) (Dey et al., 2021) | 86.8 | 1.264 | 3.072 |
| BERT-FT (Kalra et al., 2020) | 89.7 | 1.230 | 2.982 |
| ALBEF (Li et al., 2021) | 57.6 | 2.220 | 4.935 |
| ALBEF (ft. on Flickr30k) | 64.2 | 2.242 | 5.125 |
| ALBEF (ft. on MSCOCO) | 64.0 | 2.002 | 4.651 |
| LiT (L16L) (Zhai et al., 2022) | 64.0 | 1.849 | 4.268 |
| CLIP (ViT-B/32) (Radford et al., 2021) | 88.1 | 1.213 | 2.937 |
| CLIP (ViT-B/16) | 92.2 | 1.123 | 2.694 |
| CLIP (ViT-L/14@336px) | 95.2 | 1.069 | 2.547 |
| KAFA (ours) | **97.4** | **1.033** | **2.391** |

With the results reported in Tab. 1, we have several findings. *First*, the more data used during the pretraining of a VLM, the better it generalizes to the image ad domain. For a comparison, CLIP has seen 400M image-text pairs, LiT 100M, and ALBEF 14M. *Second*, the larger the capacity of a model, the better it understands image ads. We have evaluated different sizes of the CLIP model beyond the three sizes shown in Tab. 1 and the trend stays the same. *Third*, commonly used metrics for comparing VLMs, including zero-shot accuracy on the ImageNet (Russakovsky et al., 2015) validation set (for which LiT claims to outperform CLIP) and image-to-text retrieval precision on Flickr30K (Young et al., 2014) (for which ALBEF claims to outperform CLIP), do not reflect the performance of image ad understanding well. We hypothesize that this is partly because image ad understanding requires knowledge of real-world entities (e.g., brands) which the pre-trained models contain. Similar to the dramatic performance advancement of GPT language models (Brown et al., 2020) driven by the larger scale of training data and the model capacity, more knowledge can be distilled and implicitly stored in the weights of pre-trained VLMs with larger models and more pre-training data. We empirically verify that the VLM's capability of recognizing brands from images is aligned with its performance of decoding the messages from the ads. See results in Tab. 4.

## 4 Challenges In Vlm Adaptations To Image Ads And An Intuitive Solution

With CLIP as the clear champion, we further study VLM adaptations for image ad understanding using the best CLIP model (ViT-L/14@336px) as the backbone. We aim to enable better performance for image ad understanding by better adapting pretrained VLMs to the image ad domain with the help of additional information such as scene-texts extracted from the image.

## 4.1 The Issue Of Overfitting And High Computation Complexity

We find two practical challenges in adapting pretrained VLMs to the image ads: *first*, the overfitting issue in fine-tuning due to limited image ads and the lack of a strong supervision signal, and *second*, the high computation burden caused by solutions to the previous challenge.

Annotations of image ads are hard to obtain in general (Akula et al., 2022), making it common to only have limited training data (e.g., the Pitt Dataset only contains 51,223 image-text pairs). This results in VLM's vulnerability to overfitting during adaptation. We find that directly fine-tuning CLIP contrastively on the Pitt Dataset with the symmetric cross-entropy loss (as in the original CLIP paper) gives worse performance than the zero-shot one unless we adopt early stopping and a carefully tuned learning rate schedule. Moreover, as reported in Tab. 1, the best zero-shot performance of CLIP already surpasses the previous state-of-the-art and is very close to 100%, leading to very weak supervision signals for vanilla fine-tuning. We thus need strong training signals. To save GPU memory required by much larger batch sizes, we adopt hard negative mining (Xuan et al., 2020), which selects hard negatives from a very large candidate set as opposed to within the mini-batch.

However, hard negative mining (HNM) strategies usually incur a large computation burden. In fully online hard negative mining (denoted full HNM), for each training image $x$ and the corresponding texts $y$, we first rank $N_{cand}$ negative texts $\{\bar{y} \mid \bar{y} \neq y\}$ sampled from the entire training data according to the online similarity scores (the dot-product score $f_I(x) \cdot f_T(\bar{y})$ computed from the current VLM model), and then we choose the $N_{hard} - 1$ most similar $\bar{y}$ as the hard negatives. While this essentially constructs a much harder candidate set $C(x)$, it requires the computation of features of all training texts at every gradient step, which is prohibitively expensive. Existing methods propose to reduce the complexity by keeping a sparsely updated bank of all training text features (memory bank) (Wu et al., 2018) or with the help of a momentum-updated text encoder (MoCo) (He et al., 2020).
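To make the full-HNM selection step concrete, here is a minimal sketch (our own illustration with hypothetical names, not the authors' released code), assuming the candidate text features have just been recomputed with the current encoders:

```python
import torch

def select_hard_negatives(img_feat, cand_text_feats, n_hard):
    """img_feat: (d,) L2-normalized image feature f_I(x);
    cand_text_feats: (N_cand, d) L2-normalized features of sampled negative texts.
    Function name and signature are hypothetical, for illustration only."""
    scores = cand_text_feats @ img_feat        # dot-product similarity to the image
    idx = scores.topk(n_hard - 1).indices      # the (N_hard - 1) most similar negatives
    return cand_text_feats[idx]                # hard negatives used in the contrastive loss
```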
Nevertheless, we tailor the memory bank and MoCo strategies to our setup¹ and find that they perform worse than full HNM. We report the accuracy (%) in Tab. 2 with a harder eval protocol than the official one by using larger numbers (K) of negative samples randomly drawn from the test texts (thus a set of harder negatives). We believe this is because image ad understanding requires fine-grained information extraction (e.g., the specific brand of a product) and both of these strategies are subject to the destruction of such information as they compute the loss not in a fully online manner. In particular, their text features used for contrastive fine-tuning always come from different VLM encoders (either the past checkpoints or the momentum-updated versions). Although direct fine-tuning with full HNM outperforms the others, it is extremely inefficient and thus impractical.

¹We use these methods to compute similarity scores but still only select the hardest negatives for fine-tuning to save GPU memory (the purpose of HNM).

| Number of Candidates K | 20 | 100 | 500 | 1000 |
|--------------------------|------|-------|-------|--------|
| Zero-shot | 91.7 | 80.7 | 64.4 | 56.5 |
| Direct FT | 92.4 | 82.2 | 66.7 | 59.0 |
| Direct FT + memory bank | 92.8 | 82.9 | 67.5 | 60.3 |
| Direct FT + MoCo | 93.3 | 83.8 | 69.8 | 62.4 |
| Direct FT + full HNM | 93.7 | 84.6 | 70.0 | 62.9 |

## 4.2 Feature Adaptation As The Solution

We propose a simple and intuitive solution, attention-based feature adapters, that both handles the aforementioned issues during adaptation and enables incorporating additional information (e.g., scene-texts) for better image ad understanding.

Feature adapters are recently proposed (Gao et al., 2021; Zhang et al., 2021; Pantazis et al., 2022; Zhao et al., 2022) as a line of very efficient adaptation strategies for VLMs. They freeze the weights of the pretrained VLMs, pre-compute features using their encoders, and use additional lightweight adapter networks to process these features. As a result, on-the-fly feature computation over a massive candidate set becomes computationally feasible and so is the fully online hard negative mining, since we only compute the adapted features online via a lightweight network. More efficiently, we can set the text adapter to an identity function (i.e., only use adapters for image features).

More importantly, feature adapters are suitable for fusing information from multiple sources. While previous feature adapters are mostly designed for image classification, we consider them as a strategy to aggregate multiple input branches (of potentially different modalities). For instance, previous methods for image ad understanding, such as VS (Dey et al., 2021), utilize scene-texts extracted from images (by OCR) to enhance their performance. Similarly, we can extract text features from scene-texts using a VLM's text encoder and merge them with the image features extracted by the image encoder (of the same VLM) via a feature adapter. In doing so, we obtain a better representation of image ads.

![4_image_0.png](4_image_0.png)

Specifically, we propose to adopt one layer of multi-head attention (Vaswani et al., 2017) as our feature adapter design, similar to the Tiny-Attention Adapter (Zhao et al., 2022). Here the input sequence to the attention layer varies by modalities (brand, scene-texts and image, as in Fig. 4) instead of temporally or spatially as is common in Transformers.
By the nature of alignment-based VLMs, all information (whether in the text format as the scene-texts or the visual elements) is embedded as vectors that lie in a shared semantic space. We then utilize this property and fuse complementary information (e.g., image features and scene-text features) into one feature. Moreover, we append a linear layer after the attention features and equip it with a residual connection. Let us use the notation in previous sections and further denote $x_{st}$ as the scene-texts extracted from the image $x$ (by Google OCR APIs). Then our adapter is represented as

$$f_{att}(x) = \mathrm{n}\big(f_I(x) + A([f_I(x), f_T(x_{st}), \ldots])[0]\big)$$

where $\mathrm{n}(\cdot)$ is a normalization function and $A$ is multi-head attention (we leave room for other input branches by leaving "$\ldots$" here). Note that we do not use any adapter for the text descriptions of images (the labels of image ads), which further reduces the computation complexity as now we only need to compute and cache all text features in the training set once and for all during full HNM.

In comparison, we also evaluate the popular CLIP-Adapter (Gao et al., 2021) as a strong baseline, which we tailor to our setup by training three 2-layer residual MLPs. Please see the Appendix for implementation details. As reported in Tab. 3, our proposal of using an attention-based adapter (denoted KAFA w/o K) utilizes VLM features well by aligning multimodal features already in the same semantic space and outperforms CLIP-Adapter. While other existing work (Shen et al., 2021; Gui et al., 2021) merges multiple branches of information by leveraging foundation models, they rely on large encoder-decoder networks that are computationally intensive and might not work well with limited training data as in our case.

## 5 Improving Image Ad Understanding With External Knowledge

To further improve image ad understanding, we propose to leverage external knowledge of real-world entities, namely product and brand information. The major focus of advertisements is to promote brand awareness (Macdonald et al., 2003). Sometimes brand information is even a necessity to interpret ads correctly since it eliminates ambiguities and gives visual cues to the audience (e.g., the ad for a cleaning product in Fig. 2). It is then natural to empower the feature adapters introduced previously with a brand understanding module that extracts brand information from images. Here we present our training-free brand understanding module that heavily leverages VLMs.

## 5.1 Brand Understanding Module

Extracting brand information from an image is very challenging due to the sheer scale of brands in the real world. Existing published work (Su et al., 2018; Li et al., 2022a) and even commercial APIs tend to fall short of a good coverage. To solve this issue, we construct a knowledge base that covers brands much better than existing datasets.

![5_image_0.png](5_image_0.png)

Our knowledge base has entries of the format "KFC: KFC is a fast food chain", with around 110k entries covering names of brands, companies, organizations and others appearing in image ads. We call this dataset BrandSet-110K (see details in Appendix B). Next, we take an ensemble approach to detect and retrieve relevant brand entries from BrandSet-110K given an image ad. On one hand, we retrieve brands by performing string matching over all names in BrandSet-110K using the scene-texts extracted by OCR from the image.
On the other hand, in case of OCR failures, no detection (some logos have no texts), or multiple detected entries (potentially false positives, as most image ads promote only one brand at a time), we use a more powerful vision-based module. Specifically, we adopt MAVL (Maaz et al., 2022), a state-of-the-art VLM, to propose object regions according to the text prompt "all trademarks". We then use the best CLIP model to perform region classification based on a set of carefully engineered prompts. And then, we select the best entries in BrandSet-110K according to the proposed regions. We finally use some simple rules to combine the retrieved results from text-matching and the vision-based module, as in Fig. 3 (see details in the Appendix). Overall, our brand understanding module is training-free, covers many more entities than previously published work, and even outperforms some commercial logo detection APIs by evaluation on a small validation set, as reported in Tab. 4.

## 5.2 Overall Pipeline And Final Results

Combining with our proposed brand understanding module, we illustrate our overall pipeline in Fig. 4 and call this approach knowledge-augmented feature adaptation (KAFA). In Tab. 3, we demonstrate that KAFA achieves substantial improvements in image ad understanding over the VLM baseline and consistently outperforms other ablation versions with fewer inputs, justifying that our proposed brand understanding module helps to further improve image ad understanding. We present an example in Fig. 1 to illustrate the improvement of our method over the baseline, where for better display we only show 2 negative text descriptions. See more examples in Appendix G.

| Method | Inputs | 20 | 100 | 500 | 1000 |
|----------------------|----------|------|-------|-------|--------|
| Zero-shot | I | 91.7 | 80.7 | 64.4 | 56.5 |
| Direct FT + full HNM | I | 93.7 | 84.6 | 70.0 | 62.9 |
| CLIP-Adapter | I+ST | 93.9 | 85.0 | 70.2 | 62.8 |
| KAFA w/o K | I+ST | 95.0 | 86.8 | 72.7 | 65.1 |
| KAFA w/o ST | I+K | 94.7 | 86.5 | 72.3 | 64.5 |
| KAFA (ours) | I+ST+K | 95.6 | 87.7 | 73.9 | 66.0 |

| Method | Acc (%) | Method | Acc (%) |
|-------------------|-----------|------------------------|---------|
| VLM-based (ALBEF) | 14.5 | Text-matching | 36.0 |
| VLM-based (LiT) | 29.0 | Google Cloud API | 42.0 |
| VLM-based (CLIP) | 64.4 | Combined (Text + CLIP) | 66.6 |

## 6 Additional Analysis

## 6.1 Hard Negative Samples In Evaluations

We report our main results with a harder eval protocol than the official one. In fact, it is a challenge to perform effective evaluations in retrieval tasks (Akula et al., 2020). While we need **hard** negatives to better reflect the capabilities of a model, usually by increasing the size of the candidate set, we also want those hard negatives to be real **negatives**. As illustrated in Fig. 5 (right), two companies can have two different image ads that share a very similar message. Hence, given an image, simply using a text of another as the negative might not work.

![6_image_0.png](6_image_0.png)

There is no easy solution. We can use a generic sentence encoder to measure similarities among different texts in (Hussain et al., 2017) and only sample texts that are semantically different from the target one (the ground truth) as negatives. We adopt a strong sentence encoder (publicly available here) based on MiniLM (Wang et al., 2020) to measure semantic similarities. We compute similarities among descriptions of the same ad and those across different ads.
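As an illustration only (not the authors' released code), a minimal sketch of this similarity computation, assuming the sentence-transformers package and a MiniLM-based checkpoint such as all-MiniLM-L6-v2 (the checkpoint name and example texts are our assumptions):

```python
from sentence_transformers import SentenceTransformer, util

# The paper only states that the encoder is MiniLM-based; this checkpoint is an assumption.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
texts = [
    "I should buy a Brand A camera because it will help me create",
    "I should buy this makeup because it will make me shine",
]
emb = encoder.encode(texts, convert_to_tensor=True)
# Pairwise cosine similarities between ad descriptions.
sim_matrix = util.cos_sim(emb, emb)
print(sim_matrix)
```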
The similarity distributions are spread out, as demonstrated in Fig. 5 (left), without easy cutoff thresholds to make negative samples both hard and truly negative. Instead, we propose to use several different sizes K of the candidate set with K = 20, 100, 500, 1000. For each image in the Pitt Dataset (Hussain et al., 2017), we randomly choose a text from the ground truth and uniformly sample K − 1 negatives from other images (harder negatives with larger K). While most existing methods evaluate (Hussain et al., 2017) with the official evaluation protocol (for ease of comparison we also provide results by this protocol in Tab. 1), it suffers from the lack of hard negatives. Each image ad comes with only 15 randomly sampled candidate texts including 3 positives, giving a random model a 20% accuracy. Moreover, negatives are easy as they tend to be semantically distinct from the positives, making it hard to examine a model at finer levels. We provide examples to compare negatives sampled in our protocol and in the official one in Appendix E.

## 6.2 Data Leakage Regarding Vlms

The CLIP (Radford et al., 2021) model we use in our experiments was pre-trained on a tremendous amount (400M) of image-text pairs from the Internet. A concern is that there might be data leakage, i.e., the pre-trained VLMs might have already seen images in the evaluation set, leading to inflated results. We perform an analysis and conclude that this is unlikely to be the case. We manually inspect images in the LAION-400M dataset (Schuhmann et al., 2021) that are semantically similar to a set of 100 randomly sampled eval image-text pairs. While the dataset used to train CLIP is not publicly released, LAION-400M is a very close one with a similar scale of data filtered by the CLIP model. Specifically, for each of the 100 random samples, we use the open-sourced CLIP-retrieval tool (here) to find the closest images from LAION-400M indexed by both the sample text and image. We do not find any substantially overlapped content or near duplicates (see Fig. 6 as an example). Moreover, our proposed method achieves significant performance improvement over the VLM baseline and both are based on the same CLIP model. Therefore, data leakage is less of a concern.

![6_image_1.png](6_image_1.png)

## 7 Conclusion

In this paper, we study the adaptation of pretrained alignment-based VLMs for the challenging image ad understanding task. We benchmark and reveal practical challenges in adapting VLMs, propose a simple and intuitive (yet effective) strategy for feature adaptation, and further improve image ad understanding with external brand knowledge. While we mainly focus on the image-to-text retrieval task for its simplicity, we believe further studies can extend it to directly generating text descriptions given image ads or even generating image ads given the descriptions. We hope our study draws more attention to image ad understanding, which is relevant to the advertising industry, and provides insights for the broader machine learning community.

## Limitations

The data from the Pitt Dataset (Hussain et al., 2017), while useful for our paper, contains many images and annotations that may perpetuate harmful stereotypes according to sensitive characteristics such as gender and carry the risk of amplification by machine learning models. We plan to collaborate with AI robustness researchers to identify such examples and develop methods for improving ML models in terms of robustness and reliability.
## References Reneh Abokhoza, Sherehan Hamdalla Mohamed, and Sumit Narula. 2019. How advertising reflects culture and values: A qualitative analysis study. *Journal of Content, Community and Communication*, 10(9):3. Arjun Akula, Spandana Gella, Yaser Al-Onaizan, Songchun Zhu, and Siva Reddy. 2020. Words aren't enough, their order matters: On the robustness of grounding visual referring expressions. In *Proceedings of the 58th Annual Meeting of the Association* for Computational Linguistics, pages 6555–6565. Arjun R Akula, Brendan Driscoll, Pradyumna Narayana, Soravit Changpinyo, Zhiwei Jia, Suyash Damle, Garima Pruthi, Sugato Basu, Leonidas Guibas, William T Freeman, et al. 2022. Metaclue: Towards comprehensive visual metaphors research. arXiv preprint arXiv:2212.09898. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Lydia B Chilton, Savvas Petridis, and Maneesh Agrawala. 2019. Visiblends: A flexible workflow for visual blends. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–14. Arka Ujjal Dey, Suman K Ghosh, Ernest Valveny, and Gaurav Harit. 2021. Beyond visual semantics: Exploring the role of scene text in image understanding. Pattern Recognition Letters, 149:164–171. Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. 2021. Clip-adapter: Better visionlanguage models with feature adapters. arXiv preprint arXiv:2110.04544. François Gardères, Maryam Ziaeefard, Baptiste Abeloos, and Freddy Lecue. 2020. Conceptbert: Concept-aware representation for visual question answering. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 489–498. Liangke Gui, Borui Wang, Qiuyuan Huang, Alex Hauptmann, Yonatan Bisk, and Jianfeng Gao. 2021. Kat: A knowledge augmented transformer for vision-and-language. *arXiv preprint* arXiv:2112.08614. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pages 9729–9738. Zaeem Hussain, Mingda Zhang, Xiaozhong Zhang, Keren Ye, Christopher Thomas, Zuha Agha, Nathan Ong, and Adriana Kovashka. 2017. Automatic understanding of image and video advertisements. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1705–1715. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In *International Conference* on Machine Learning, pages 4904–4916. PMLR. Kanika Kalra, Bhargav Kurma, Silpa Vadakkeeveetil Sreelatha, Manasi Patwardhan, and Shirish Karande. 2020. Understanding advertisements with bert. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7542– 7547. Su Young Kim, Hyeonjin Park, Kyuyong Shin, and Kyung-Min Kim. 2022. 
Ask me what you need: Product retrieval using knowledge from gpt-3. *arXiv* preprint arXiv:2207.02516. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980. Chenge Li, István Fehérvári, Xiaonan Zhao, Ives Macedo, and Srikar Appalaraju. 2022a. Seetek: Very large-scale open-set logo recognition with textaware metric learning. In *Proceedings of the* IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2544–2553. Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in neural information processing systems, 34:9694–9705. Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al. 2022b. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10965– 10975. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32. Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, and Ming-Hsuan Yang. 2022. Class-agnostic object detection with multi-modal transformer. In *The European Conference on Computer Vision. Springer*. Emma Macdonald, Byron Sharp, et al. 2003. *Management perceptions of the importance of brand awareness as an indication of advertising effectiveness*. Ph.D. thesis, Massey University, Department of Marketing. Kenneth Marino, Xinlei Chen, Devi Parikh, Abhinav Gupta, and Marcus Rohrbach. 2021. Krisp: Integrating implicit and symbolic knowledge for opendomain knowledge-based vqa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14111–14121. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In *Proceedings of the IEEE/cvf conference on computer vision and pattern recognition*, pages 3195–3204. Omiros Pantazis, Gabriel Brostow, Kate Jones, and Oisin Mac Aodha. 2022. Svl-adapter: Selfsupervised adapter for vision-language pretrained models. *arXiv preprint arXiv:2210.03794*. Savvas Petridis and Lydia B Chilton. 2019. Human errors in interpreting visual metaphor. In *Proceedings* of the 2019 on Creativity and Cognition, pages 187– 197. Hieu Pham, Zihang Dai, Golnaz Ghiasi, Kenji Kawaguchi, Hanxiao Liu, Adams Wei Yu, Jiahui Yu, Yi-Ting Chen, Minh-Thang Luong, Yonghui Wu, et al. 2021. Combined scaling for openvocabulary image classification. arXiv preprint arXiv: 2111.10050. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In *International* Conference on Machine Learning, pages 8748–8763. PMLR. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. *International journal of computer vision*, 115(3):211–252. 
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. 2022. Laion5b: An open large-scale dataset for training next generation image-text models. *arXiv preprint* arXiv:2210.08402. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. Laion-400m: Open dataset of clipfiltered 400 million image-text pairs. arXiv preprint arXiv:2111.02114. Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? *arXiv preprint* arXiv:2107.06383. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards vqa models that can read. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pages 8317–8326. Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, and Furu Wei. 2022. Clip models are few-shot learners: Empirical studies on vqa and visual entailment. arXiv preprint arXiv:2203.07190. Hang Su, Shaogang Gong, and Xiatian Zhu. 2018. Scalable deep learning logo detection. *arXiv* preprint arXiv:1803.11417. Yi-Lin Sung, Jaemin Cho, and Mohit Bansal. 2022. Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5227–5237. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. *Advances in neural information processing systems*, 30. Peng Wang, Qi Wu, Chunhua Shen, Anthony Dick, and Anton Van Den Hengel. 2017. Fvqa: Factbased visual question answering. *IEEE transactions on pattern analysis and machine intelligence*, 40(10):2413–2427. Peng Wang, Qi Wu, Chunhua Shen, Anton van den Hengel, and Anthony Dick. 2015. Explicit knowledge-based reasoning for visual question answering. *arXiv preprint arXiv:1511.02570*. Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers. *Advances in Neural Information Processing Systems*, 33:5776–5788. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. 2018. Unsupervised feature learning via nonparametric instance discrimination. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3733–3742. Hong Xuan, Abby Stylianou, Xiaotong Liu, and Robert Pless. 2020. Hard negative examples are hard, but useful. In *European Conference on Computer Vision*, pages 126–142. Springer. Yuan Yao, Ao Zhang, Zhengyan Zhang, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2021. Cpt: Colorful prompt tuning for pre-trained vision-language models. *arXiv preprint arXiv:2109.11797*. Keren Ye and Adriana Kovashka. 2018. Advise: Symbolism and external knowledge for decoding advertisements. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pages 837–855. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. 
From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. 2022. Coca: Contrastive captioners are image-text foundation models. *arXiv preprint arXiv:2205.01917*. Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al. 2021. Florence: A new foundation model for computer vision. *arXiv preprint arXiv:2111.11432*. Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. 2022. Lit: Zero-shot transfer with locked-image text tuning. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123–18133. Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. 2021. Tip-adapter: Training-free clip-adapter for better vision-language modeling. *arXiv preprint* arXiv:2111.03930. Hongyu Zhao, Hao Tan, and Hongyuan Mei. 2022. Tiny-attention adapter: Contexts are more important than the number of parameters. arXiv preprint arXiv:2211.01979. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022a. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16816–16825. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. 2022b. Learning to prompt for visionlanguage models. *International Journal of Computer Vision*, 130(9):2337–2348. ## A Scene-Text Extraction By Ocr In our paper, we use scene-texts as one of the inputs for experiments in Pitt Dataset (Hussain et al., 2017). We use the Google Cloud OCR API (link) to extract all text tokens, which are grouped by paragraphs by the API. We then group paragraphs into blocks by simple heuristic rules (e.g., two consecutive paragraphs with similar font sizes should be considered in the same block) and then filter out those blocks with an average prediction confidence score (provided by the API) less than 0.7. ## B Brand Recognition B.1 Brandset-110K We construct BrandSet-110K by first compiling entries from public websites. Specifically, for the list of topics (such as automobiles and healthcare) in the Pitt Dataset (Hussain et al., 2017), we Google with the query "Top XX brands/companies in Y" to obtain a list of thousands of common brands, organizations, etc., denote source I. We further scrape the Google Knowledge Graph Search API (link) to find a much larger list of named entities, denoted source II, whose categories fall into "brands", "companies", etc., where each entry comes with a one-paragraph description. Since results from the Knowledge Graph (KG) is a little bit noisy and might miss some popular entities, we rely on source I to make sure that the most prevalent entities appearing in our commercial world are included in our dataset. We then query entries from source I in KG to also obtain the descriptions. If such entries are not found in KG, we simply use the descriptions "X is a brand name in the industry of Y". Together with source II, we obtain a raw combined knowledge base. Then we filter out those entries that are common English words (if the entry appears in the English dictionary (link) or a word set from NLTK (link)). 
We do so to remove entries such as "Everyday", which would result in too many false positives during brand detection. We also remove entries consisting of a single character. Eventually, we end up with around 110K entries, i.e., name and description pairs. Since the descriptions returned by KG can be quite long, we further use a learning-based sentence parser to only select the very first sentence of the description (usually in the format of "X is a brand/company/org in the industry of Y with Z features"). We use this API (link) from Hugging Face.

## B.2 Brand Recognition By Text-Matching

The text-based brand recognition module essentially performs text matching to exhaustively search over all entries in BrandSet-110K given the scene-texts extracted by OCR. For each name in BrandSet-110K that is longer than 6 characters, we match the text in a case-insensitive manner; otherwise, we match it case-sensitively to reduce false positives. A name is set to be matched in a scene-text if it is a phrase of the text ("abc" is matched in "abc def" but not in "abcdef"). When doing ablation studies that evaluate text-matching-only performance, in case of multiple predictions we randomly select one as the output.

## B.3 Vision-Based Brand Recognition

The vision-based brand recognition module handles situations where the text-based one fails (when texts are too small or blurred or artistic for OCR to work; or when logos are purely graphic). The vision-based module is a pipeline of several steps. The class-agnostic region proposal (we use the best model in MAVL (Maaz et al., 2022), a state-of-the-art model) is adopted to generate candidate regions that contain brand logos or visual elements revealing brand information. We choose "all trademarks" as the best prompt among other candidates such as:

- "All Small Objects", "All Brand Logos",
- "All Brand Icons", "All Brands", "All Logos"

After the region proposal, we use the best CLIP (Radford et al., 2021) model (its visual encoder) to compute the region features. We include the entire image as an extra proposed region. Then we use the text features (via the CLIP text encoder) of the following 6 prompts to find the best entry in BrandSet-110K. Namely,

- "A brand logo of X", "A logo of X",
- "A trademark of X", "A brand logo of X. Y",
- "A logo of X. Y", "A trademark of X. Y"

where X is the name and Y is the corresponding description in BrandSet-110K. We first average dot products of the region features and brand features across all 6 prompts. We then find two candidates: (1) the name X with the largest predicted score among all names and all regions of an image and (2) the name X with the largest predicted score averaged across all regions among all names that are champions in at least one region. Our final output is chosen by the higher value of the dot products of the global image feature and the two text features of the prompt "An advertisement of X" (we select this prompt after another minor prompt engineering process).
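To make the scoring step above concrete, here is a minimal sketch (our illustration under stated assumptions, not the authors' released code) of scoring proposed regions against BrandSet-style entries with the open-source CLIP package; region cropping via MAVL and the candidate-selection rules are omitted, and the variable names are hypothetical:

```python
import torch
import clip  # OpenAI's CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14@336px", device=device)

def score_regions_against_brands(region_crops, brand_names, brand_descs):
    """region_crops: PIL crops proposed by the region-proposal model (hypothetical input);
    brand_names/brand_descs: entries from a BrandSet-style knowledge base.
    Returns a (num_regions, num_brands) matrix of similarity scores for one prompt template."""
    prompts = [f"A brand logo of {n}. {d}" for n, d in zip(brand_names, brand_descs)]
    with torch.no_grad():
        txt = model.encode_text(clip.tokenize(prompts, truncate=True).to(device))
        txt = txt / txt.norm(dim=-1, keepdim=True)
        imgs = torch.stack([preprocess(r) for r in region_crops]).to(device)
        img = model.encode_image(imgs)
        img = img / img.norm(dim=-1, keepdim=True)
    return img @ txt.T
```

In practice, such score matrices would be averaged over all six prompt templates before applying the two candidate-selection rules described above.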
## B.4 Ensemble Of Text-Matching And Vision-Based Brand Recognition

We use simple heuristic rules to ensemble the text-matching results and the vision-based ones. Specifically, if there is no name detected from text-matching, we return the vision-based result; if there is only one name detected from text-matching, we return the text-based result; if more than one name is detected from text-matching, we select the name from the detections of both the text and vision-based modules by the highest value of the dot product of the global image feature and the text features of "An advertisement of X". The ensemble module finally returns the single name and the corresponding description in BrandSet-110K.

## C Network Architecture Of Attention-Based Feature Adapter

We adopt a very lightweight network for feature adaptation. For each modality of the inputs (e.g., inputs to KAFA in the Pitt Dataset are three vectors: scene-text features, image features, and brand features), we first add a learnable positional embedding (which is used to distinguish between different modalities) and then apply a multi-head attention layer (Vaswani et al., 2017) to obtain a list of vectors; we finally use the first vector (corresponding to the image feature input branch) and add a residual connection from the input image feature (before positional embedding) to produce the final output (with normalization). To make things clearer, we also provide the pseudocode.

```python
import torch
import torch.nn.functional as F
from torch.nn import Parameter as param

# args is a list of input features,
# e.g., [img_fs, scene_text_fs, brand_fs], each of shape (batch, input_d).
pos_emb_list = []
for _ in range(n_input):
    # One learnable positional embedding per modality.
    pos_emb_list.append(param(torch.zeros([input_d])))

attn = torch.nn.MultiheadAttention(
    embed_dim=input_d, num_heads=8, batch_first=True)

inputs = []
for i in range(n_input):
    inputs.append(args[i] + pos_emb_list[i])
x = torch.stack(inputs, 1)
x, _ = attn(x, x, x, need_weights=False)
# The first is the image features; add the residual connection.
x = x[:, 0] + args[0]
x = F.normalize(x, dim=-1)
```

## D Data Cleaning On Pitt Dataset

We perform data cleaning on both the training and evaluation data of the Pitt Dataset (when evaluated using the official evaluation protocol, whose issue is discussed in the main paper, we stick to the raw evaluation set). For every text in the dataset (the response to the "what should I do according to the ad and why" question), we remove invalid ones (e.g., "I don't know", "not an ad", "not sure"), fix typos (e.g., "becasue", "becaues"), and remove those without answering the "why" question. Furthermore, we filter out texts that do not mention nouns or only have nouns that are not very informative (we compile a list of non-informative nouns appearing frequently in the dataset, such as "product", "thing" and "vendor"). This step is to remove non-specific texts such as "I should buy this product because ...". In the end, we randomly select one text (with a fixed random seed) as the ground truth of its image. If an image has all its texts removed by data cleaning, we remove the image from the dataset. We find such images constitute less than 3% of all images.

## E Hard Vs. Easy Negatives For Evaluation In Pitt Dataset

Here we explain why we use a larger number of candidates K during evaluation. Model evaluation for cross-modal retrieval is challenging (Akula et al., 2020).
The official evaluation protocol in the Pitt Dataset suffers from the issue that it lacks hard negatives to fully reflect the perception and reasoning capability of the models. Each image in the protocol has 3 positive texts and only 12 negative ones, giving a random guess model a 20% accuracy. On the contrary, increasing the number of candidates in our evaluation protocol as introduced in the main paper effectively yields harder negatives. For instance, for the image ad in Fig. 7 whose ground truth is "I should buy a Brand A camera because it will help me create", if we set the number of candidates to be 10 (i.e., 9 negatives), the best CLIP model makes the correct selection with all easy negatives, among which the most confusing ones are

- "I should drink Brand B because it de-ages you"
- "I should not drown in my decision because bad choices will keep you under"
- "I should buy this bag because it is resealable"

If we set 50 total candidates (i.e., 49 negatives), again the CLIP baseline predicts correctly, with the most confusing ones still being relatively easy negatives:

- "I should use Brand C cosmetics because it makes you beautiful"
- "I should buy Brand D products because they are reliable"
- "I should see a movie because it's fun"

For a larger number (e.g., 100 total candidates), the CLIP model starts to make mistakes, with hard negatives such as

- "I should use Brand E makeup because it will make me more seductive"
- "I should buy Brand F makeup because it will make me beautiful"
- "I should buy this makeup because it will make me shine"

Notice that for privacy reasons, all brand names in this example are anonymized.

## F Training Details

## F.1 Direct Fine-Tuning Of Clip

We fine-tune the best CLIP model on the training images of the Pitt Dataset with a batch size of 8, the symmetric cross-entropy loss (the one used in the original paper of CLIP) and the Adam optimizer (Kingma and Ba, 2014) with a weight decay of 1e-4. We set the other parameters of Adam as in the original implementation of CLIP. We find that using a very small learning rate (e.g., 1e-7) is necessary for fine-tuning CLIP on the Pitt Dataset; otherwise, the CLIP model can overfit easily. For the same reason, we adopt early stopping and only fine-tune the model for a maximum of 4 epochs. We leave the details in the next section for the fine-tuning version with online hard negative mining (very computationally intensive as suggested in the main paper).

![12_image_0.png](12_image_0.png)

## F.2 Fully Online Hard Negative Mining (Full Hnm)

When performing hard negative mining during training, for each image in a mini-batch, we first compute the VLM features of a large number of randomly sampled negative texts (in our experiments we find 1000 to be large enough; a larger number can marginally improve the final performance but incurs a larger computation burden), then we compute the dot products of the current image feature and all these sampled text features, and finally, we rank the dot products and select the top N − 1 negatives to be included in computing the gradients of the loss (we find N = 8 to be effective). We use the asymmetric version of the cross-entropy loss (i.e., the normal one) as compared to the symmetric version in CLIP pre-training, since the number of negatives per image does not equal the batch size when HNM is adopted. We reduce the batch size to 4 whenever online HNM is used so that directly fine-tuning the largest CLIP model is viable with a single V100 Nvidia GPU.
We still apply the learnable "logit scale" parameter from CLIP pre-training, which effectively makes contrastive learning more stable. For full HNM, if we directly fine-tune the CLIP model, we need to compute the text features of all texts in the training set at every gradient step. While this is computationally prohibitive, we adopt the feature adapter strategy and thus cache all the text features once and do not update the text encoder and the text features during fine-tuning.

![13_image_0.png](13_image_0.png)

## F.3 More Ablation Studies

In our experiments presented in the main paper (specifically in Tab. 1), we have justified the use of online HNM, the additional inputs (scene-text and brand information) to the feature adaptation, and the advantages of the attention-based adapter over the baseline adapter. We also perform experiments on several variants of the attention-based feature adapter and find that either using more than one attention layer or adding layer norm & an additional linear projection as in the encoder-decoder Transformer (Vaswani et al., 2017) makes the model more vulnerable to overfitting.

## F.4 Additional Details Of Feature Adapters

For feature adapters (CLIP-Adapter and KAFA), we use the full HNM for fine-tuning as discussed in the previous section. We use the same training setup as that of "Direct FT + HNM" except for the additional input branches. For CLIP-Adapter, we tailor it to our setup by training three 2-layer residual MLPs. Specifically, let us denote them as $g_I^{mlp}$, $g_T^{mlp}$ and $h^{mlp}$, built on top of the image and text features extracted by VLMs, and a mixture of these features, respectively. The adapted feature for $x$ becomes

$$\begin{aligned} f_I^{mlp}(x) &= \mathrm{n}(f_I(x) + g_I^{mlp}(f_I(x)))\\ f_T^{mlp}(x_{st}) &= \mathrm{n}(f_T(x_{st}) + g_T^{mlp}(f_T(x_{st})))\\ f^{mlp}(x) &= \mathrm{n}(h^{mlp}(\operatorname{cat}[f_I^{mlp}(x), f_T^{mlp}(x_{st}), \ldots])) \end{aligned}$$

where cat is concatenation. Here we omit the adapted feature for the text label $y$ from the concatenation. And the adapted feature for the text label $y$ becomes

$$f_T^{mlp}(y) = \mathrm{n}(f_T(y) + g_T^{mlp}(f_T(y)))$$

which is used during full HNM for fine-tuning. For the fine-tuning of both CLIP-Adapter and KAFA, we find a much larger learning rate (i.e., 1e-4) to be effective and train the model similarly with early stopping and a maximum of 10 epochs. We find it helpful to stabilize training by adding an additional regularization loss to keep the feature adapter's output close to the VLM image features. Specifically, we add the negative of the dot products between the two (averaged over all data points in the mini-batch) to the overall training objective. For this regularization term, we use a coefficient of 5 in all our experiments on the Pitt Dataset.

## G Additional Examples

We present 2 additional examples in Fig. 8 to illustrate the improvement of our method over the baseline. Again, we only show 2 negative text descriptions for better display, and we anonymize all brand info.
rana-etal-2023-weakly
Weakly supervised hierarchical multi-task classification of customer questions
https://aclanthology.org/2023.acl-industry.75
Identifying granular and actionable topics from customer questions (CQ) posted on e-commerce websites helps surface the missing information expected by customers on the product detail page (DP), provide insights to brands and sellers on what critical product information customers are looking for before making a purchase decision, and helps enrich the catalog quality to improve the overall customer experience (CX). We propose a weakly supervised Hierarchical Multi-task Classification Framework (HMCF) to identify topics from customer questions at various granularities. Complexity lies in creating a list of granular topics (taxonomy) for 1000s of product categories and building a scalable classification system. To this end, we introduce a clustering based Taxonomy Creation and Data Labeling (TCDL) module for creating taxonomy and labelled data with minimal supervision. Using the TCDL module, the taxonomy and labelled data creation task reduces to 2 hours as compared to 2 weeks of manual effort by a subject matter expert. For classification, we propose a two level HMCF that performs multi-class classification to identify the coarse level-1 topic and leverages an NLI based label-aware approach to identify the granular level-2 topic. We showcase that HMCF (based on BERT and NLI) a) achieves an absolute improvement of 13% in Top-1 accuracy over single-task non-hierarchical baselines, b) learns a generic domain invariant function that can adapt to a constantly evolving taxonomy (open label set) without the need of re-training, and c) reduces model deployment efforts significantly since it needs only one model that caters to 1000s of product categories.
# Weakly Supervised Hierarchical Multi-Task Classification Of Customer Questions

Jitenkumar Rana Amazon jitenkra@amazon.com Sandeep Mukku Amazon smukku@amazon.com Promod Yenigalla Amazon promy@amazon.com Manan Soni Amazon sonmanav@amazon.com Chetan Aggarwal Amazon caggar@amazon.com Rashmi Patange Amazon rpatang@amazon.com

## Abstract

Identifying granular and actionable topics from customer questions (CQ) posted on e-commerce websites helps surface the missing information on the product detail page expected by customers before making a purchase. Insights on missing information on the product page help brands and sellers enrich the catalog quality to improve the overall customer experience (CX). In this paper, we propose a weakly supervised Hierarchical Multi-task Classification Framework (HMCF) to identify topics from customer questions at various granularities. Complexity lies in creating a list of granular topics (taxonomy) for thousands of product categories and building a scalable classification system. To this end, we introduce a clustering based Taxonomy Creation and Data Labeling (TCDL) module for creating taxonomy and labelled data with minimal supervision. Using the TCDL module, the taxonomy and labelled data creation effort by a subject matter expert reduces to 2 hours as compared to 2 weeks. For classification, we propose a two level HMCF that performs multi-class classification to identify the coarse level-1 topic and leverages an NLI based label-aware approach to identify the granular level-2 topic. We showcase that HMCF (based on BERT and NLI) a) achieves an absolute improvement of 13% in Top-1 accuracy over single-task non-hierarchical baselines, b) learns a generic domain invariant function that can adapt to a constantly evolving taxonomy (open label set) without the need of re-training, and c) reduces model deployment efforts significantly since it needs only one model that caters to thousands of product categories.

## 1 Introduction

Having correct, complete, and consistent information on the detail page is very important to ensure a world-class customer experience. E-commerce customers often refer to the "Customer Questions and Answers" (CQA) section to seek the information that they deem important before buying. Worldwide, customers of a leading e-commerce website ask questions in the order of millions1 per week before buying. Customers refer to the CQA section primarily to find information that is a) not present on the product page or b) inconsistent across various sections of the product page. Therefore, identifying information gaps on the product page can help catalog owners improve the quality of the catalog and create new attributes to enrich the product page. It also helps product owners design better products, understand customer preferences, and improve the overall customer experience. In this paper, we aim to build a scalable solution that extracts topics from customer questions (CQ). Note: in this problem, we want to extract the topics customers are interested in and not the answers to their questions, since the objective is to identify information gaps on the product page using CQs. Further, we wish to identify granular and "actionable" topics. An actionable topic clearly conveys the intent behind the question. Please refer to Table 5 for topic action-ability examples. Identifying topics for CQ poses several practical challenges, especially at the scale of an e-commerce giant where products are spread across thousands of categories.
Constantly evolving taxonomy (open label set): We wish to extract diverse topics from various product categories. The topics are dynamic and keep evolving over time because new products keep launching. Hence, traditional text classification approaches are not applicable for our use case since they work with a fixed and limited set of labels. Further, we usually don't have a pre-defined taxonomy to start with, and we need to define a separate taxonomy for each product category for two reasons: a) we observe that CQs are generally very specific to product categories b) we also observe that granular topic overlap amongst similar or re-1We are not revealing exact numbers to comply with company legal policy. 786 | Customer question | Topic | Action-ability | Actions | |-------------------------------------------|----------------------|------------------|----------------------------------------| | What is seat height? | size | × | can not take action | | Does this sofa set include ottoman too? | pack content | × | can not take action | | Can I place lounger on right of the sofa? | usage | × | can not take action | | What is seat height? | seat height | ✓ | Update "size chart" | | Does this sofa set include ottoman too? | includes ottoman | ✓ | Add pack content info in bullet points | | Can I place lounger on right of the sofa? | right-left placement | ✓ | Update placement images | Table 1: Examples of action-ability of topics lated product categories such as Table, Chair, Sofa is less than 30%. However, it is complex to manually create taxonomies across thousands of product categories from scratch. Labelled data scarcity: Next, it is infeasible to obtain a large amount of manually annotated dataset for thousands of product categories typically required for training deep learning models. High cardinality of label space: We observe that there are hundreds of granular topics per product category in our datasets. We observe (in Table 3) that performance of metric learning or softmax based multi-class classifier degrades with such a high number of classes. To tackle the challenges mentioned above, we introduce two main novel ideas. We introduce a clustering based Taxonomy Creation and Data Labeling (TCDL) module to create taxonomy and labelled data efficiently with very minimal manual supervision. Using this module, taxonomy creation effort by a subject matter expert reduces to 2 hours as compared to 2 weeks. For topic identification, we introduce a novel Hierarchical Multi-task Classification Framework (HMCF), which performs multi-class classification to predict level-1 topic and leverages Natural Language Inference (NLI) to identify granular level-2 topic from question. We show that a model trained using NLI based HMCF a) achieves an absolute improvement of 13% Top-1 accuracy over a singletask non-hierarchical architecture baseline, and b) learns a generic function that can adapt to new product categories and topics with high accuracy without retraining. Rest of the paper is organized as follows. We discuss related work in section 2. In section 3, we explain TCDL module in detail. We discuss HMCF in detail in section 4. Following that, we discuss experiments and results in section 5 and section 6, respectively. Finally, we conclude the paper with our findings in section 7. ## 2 Related Work The lack of availability of training data and a predefined taxonomy is a big challenge in industry. LDA ([Blei et al., 2003]) can be helpful in mining topics from corpus of text and creating taxonomy. 
Taxogen ([Zhang et al., 2018]) proposed an unsupervised method to derive taxonomy from a large corpus of text. However, these approaches model the text corpus as Bag of Words (BoW) and do not take into account the context of the keywords. There are many models available that model multiclass text classification as a hierarchical classification problem [Tsochantaridis et al., 2005, Sinha et al., 2018]. However, they do not offer flexibility in adding or removing labels without adding extra parameters and retraining on entire dataset. Also, they use deep learning based models which require large amounts of labelled data for training. Recently, few-shot learning based approaches have gained popularity in the NLP domain, particularly to address the challenge of large scale label data availability ([Nichol et al., 2018], [Snell et al., 2017], [Zhang et al., 2020]). However, these approaches do not take label information into account. Recently, BERT ([Devlin et al., 2018]) has shown state-of-the-art performance on many NLP tasks. We use it for both level-1 and level-2 tasks. ## 3 Weakly Supervised Taxonomy And Labelled Data Creation In this section, we describe the taxonomy and label creation approach in detail (refer to Figure 1). ## 3.1 Hierarchical Taxonomy We organize topics into a two level hierarchy for following reasons: a) using hierarchical taxonomy, sellers can consume insights at two different levels of granularity b) topic classification model performance improves with hierarchical taxonomy as compared to flat taxonomy (section 6). Level-1 in taxonomy contains total of 9 generic ![2_image_0.png](2_image_0.png) topics such as "size", "health and safety", etc. They are common for all the product categories. Whereas, level-2 topics are granular and different for each product category. For example, some of the "size" related level-2 topics for "CHAIR" product categories are "arm height", "seat depth" whereas the same for "TABLE" product categories are "table top dimensions", "drawer dimensions". Please refer to Table 6 for more examples. ## 3.2 Taxonomy Creation And Data Labeling Module (Tcdl) We propose a scalable two step approach for defining taxonomy. The steps are: a) cluster CQs b) assign names to each clusters. First, we cluster CQs using their Sentence-BERT embedding [Reimers and Gurevych, 2019]. We perform "agglomerative clustering" primarily because we don't know the number of clusters beforehand. We choose "cosine similarity" with "average" linkage as a similarity criterion and a threshold of 0.2 for merging the clusters. We observe that the clusters obtained using the criteria mentioned are coherent and granular. To assign a topic name to each cluster, we take the top k keywords for each cluster based on their TF-IDF scores. Then, with minimal manual supervision, we can derive granular topic names with guidance from top keywords for each cluster. We organize all the granular topic names at level-2 and manually map it to an appropriate level-1 topic. With this approach, it requires only ∼ 1 − 2 hours of manual supervision as opposed to 2 weeks to come up with taxonomy per product category. At the end of taxonomy creation step, we have a hierarchical taxonomy and level-1, level-2 labels for each cluster. Cluster labels serve as level-1 and level-2 labels for each question within the cluster. This way, labelled data generation for the classification model is totally automated. 
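To make the TCDL recipe concrete, the sketch below clusters questions with Sentence-BERT embeddings and agglomerative clustering (cosine similarity, average linkage, merge threshold 0.2) and surfaces the top TF-IDF keywords per cluster to guide manual topic naming. It is a minimal illustration rather than the production module: the scikit-learn and sentence-transformers APIs, the `all-MiniLM-L6-v2` checkpoint, and the interpretation of the 0.2 threshold as a cosine distance are our own assumptions, since the paper does not specify libraries or the exact SBERT model.

```python
from collections import defaultdict

import numpy as np
from sentence_transformers import SentenceTransformer       # Sentence-BERT embeddings
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer


def cluster_questions(questions, distance_threshold=0.2):
    """Group customer questions into granular clusters (TCDL step a)."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")        # checkpoint is illustrative
    embeddings = encoder.encode(questions, normalize_embeddings=True)
    clusterer = AgglomerativeClustering(
        n_clusters=None,                  # number of clusters is not known beforehand
        distance_threshold=distance_threshold,  # paper's 0.2 threshold read as cosine distance
        metric="cosine",                  # older scikit-learn versions call this `affinity`
        linkage="average",
    )
    labels = clusterer.fit_predict(embeddings)
    clusters = defaultdict(list)
    for question, label in zip(questions, labels):
        clusters[label].append(question)
    return clusters


def top_keywords_per_cluster(clusters, k=3):
    """Surface top-k TF-IDF keywords per cluster to guide topic naming (TCDL step b)."""
    cluster_ids = sorted(clusters)
    docs = [" ".join(clusters[c]) for c in cluster_ids]       # one pseudo-document per cluster
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(docs)
    vocab = np.array(vectorizer.get_feature_names_out())
    keywords = {}
    for row, cid in enumerate(cluster_ids):
        scores = tfidf[row].toarray().ravel()
        keywords[cid] = vocab[np.argsort(scores)[::-1][:k]].tolist()
    return keywords


if __name__ == "__main__":
    cqs = ["what is seat height?", "how high is the seat from the ground?",
           "what is the weight limit?", "can an adult of 300 pounds use it?"]
    print(top_keywords_per_cluster(cluster_questions(cqs)))
```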
Finally, to obtain product category p specific taxonomy T p, a granular topic t is added to T p only if there is a question q from p with label t present in the TCDL output. Please refer to Table 7 for the output of the clustering and labeling. ## 4 Hierarchical Multi-Task Classification Framework (Hmcf) 4.1 Problem Definition And Formulation Given a question q from the product category p, we want to map it to l1 and l p 2 . Here, l1 is a level-1 topic generic across all product categories. Whereas, l p 2 is a level-2 topic specific to product category p. With HMCF described in the paper, we can map q to 1-class at each level because CQs typically talk about one topic. However, HMCF is generic and can be extended to multi-label classification at every level with minor modifications. ## 4.2 Framework Details Figure 2 shows the details of the proposed HMCF. We use BERT as the shared input encoder for both tasks. Individual task specific fully connected networks are added on top of BERT. [CLS] token embedding from BERT is used as input to each task specific network. Level-1 topic identification problem is modelled as multi-class classification task since level-1 topics a) are high level, b) don't vary with time, and c) remain the same across product categories. Level-1 network is a fully connected network with 9 output nodes and softmax activation. Level-2 classes change over time and are different across product categories. To handle such challenges, we leverage NLI based architecture for level-2 topic identification task. Given a question q ![3_image_0.png](3_image_0.png) and a granular topic t, the level-2 network predicts 1 if t is an appropriate topic for q and 0 otherwise. Level-2 network is again a fully connected network with one output node and sigmoid activation. We refer to the proposed architecture as BERT-HMCF1-model-NLI throughout the rest of the paper. 4.3 Input preparation We use the BERT tokenizer to prepare input for both tasks. Input for level-1 task is constructed as a concatenation of [CLS], τq, [SEP]. Whereas, input for level-2 task is constructed as a concatenation of [CLS], τq, [SEP], τt, [SEP]. Here, τq and τt are the lists of tokens of q and t obtained from the BERT tokenizer, respectively. Whereas, [CLS] and [SEP] are special tokens from the BERT vocabulary. 4.4 HMCF inference In HMCF, inference happens in two stages (refer to equation[1]). At first, given a question q, level-1 network is used to predict high level topic l1. Then, given product category p and l1, level-2 network is used to predict p specific granular topic l p 2 that maps to l1. To do so, q is paired with t p,l1 i where t p,l1 i ∈ T p,l1, the set of all p specific granular topics that maps to l1. Then, BERT is used to encode (q, tp,l1 i) pairs, and level-2 network is used to obtain s p,l1 i. Here, s p,l1 ican be thought of as score of appropriateness for the (q, tp,l1 i) pair. Finally, l p 2 is chosen as the t p,l1 jfor which s p,l1 jis the maximum. 
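Before the two-stage inference is formalized in Equation 1 below, the shared-encoder architecture of Section 4.2 and the input construction of Section 4.3 can be sketched as follows. This is a minimal illustration assuming PyTorch and HuggingFace Transformers; the class and function names are ours, and the level-1 softmax is folded into the cross-entropy loss at training time.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class HMCFSketch(nn.Module):
    """Shared BERT encoder with two task-specific heads (cf. Figure 2)."""

    def __init__(self, encoder_name="bert-base-uncased", num_level1_topics=9):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Level-1 head: multi-class classifier over the 9 fixed coarse topics.
        self.level1_head = nn.Linear(hidden, num_level1_topics)
        # Level-2 head: NLI-style binary scorer for a (question, topic) pair.
        self.level2_head = nn.Linear(hidden, 1)

    def encode_cls(self, **encoded):
        # [CLS] token embedding is the shared input to both task heads.
        return self.encoder(**encoded).last_hidden_state[:, 0]

    def forward_level1(self, **encoded):
        return self.level1_head(self.encode_cls(**encoded))   # logits over level-1 topics

    def forward_level2(self, **encoded):
        # Probability that the topic is appropriate for the question
        # (train on the raw logit with BCEWithLogitsLoss instead).
        return torch.sigmoid(self.level2_head(self.encode_cls(**encoded))).squeeze(-1)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def level1_input(question):
    # "[CLS] question [SEP]"
    return tokenizer(question, truncation=True, return_tensors="pt")

def level2_input(question, topic):
    # "[CLS] question [SEP] topic [SEP]"
    return tokenizer(question, topic, truncation=True, return_tensors="pt")
```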
$$e_{CLS}^{q} = \text{BERT}(q)$$
$$l_{1} = \text{argmax}(\text{M}_{1}(e_{CLS}^{q})) \quad \text{where } l_{1} \in L$$
$$e_{CLS}^{q,t_{i}^{p,l_{1}}} = \text{BERT}(q, t_{i}^{p,l_{1}}) \quad \text{where } t_{i}^{p,l_{1}} \in T^{p,l_{1}}$$
$$S = \left[s_{i}^{p,l_{1}}\right]_{i=0}^{i=n^{p,l_{1}}} \quad \text{where } s_{i}^{p,l_{1}} = \text{M}_{2}(e_{CLS}^{q,t_{i}^{p,l_{1}}})$$
$$l_{2}^{p} = t_{j}^{p,l_{1}} \quad \text{where } j = \text{argmax}(S) \tag{1}$$

In Equation 1, $e_{CLS}^{x}$ is the [CLS] token embedding from BERT for input $x$, $L$ is the set of all level-1 topics, $\text{M}_{1}$ and $\text{M}_{2}$ are the level-1 and level-2 task networks, respectively, $l_{1}$ is the level-1 prediction, $T^{p,l_{1}}$ is the set of all $p$ specific level-2 topics that map to $l_{1}$, and $n^{p,l_{1}}$ is the number of topics in $T^{p,l_{1}}$.

## 4.5 HMCF Training

## 4.5.1 Training Data Preparation

To prepare training data, we use the output of the TCDL module that contains the level-1 topic $l_{1}$, the level-2 topic $l_{2}^{p}$ and the product category $p$ for every question $q$. Training data for the level-1 task can be obtained by setting $l_{1}$ as the label for question $q$. We need to create positive and negative $(q, t)$ pairs for the level-2 task. Given $q$, $p$, $l_{1}$ and $l_{2}^{p}$, positive training samples are obtained by pairing $q$ with $l_{2}^{p}$. We create easy negatives by pairing $q$ with a randomly sampled level-2 topic from the set of all level-2 topics available. To generate hard negatives, we sample a level-2 topic $t'$ from $T^{p,l_{1}}$ such that $t' \neq l_{2}^{p}$. Here, $T^{p,l_{1}}$ is the set of all $p$ specific level-2 topics that map to $l_{1}$.

## 4.5.2 Batch Sharing For Training

We train the networks for both tasks simultaneously. However, each batch contains data only for one task. At every step of training, we randomly sample a task and a batch for the same task for optimization. We optimize the task-1 network using the standard cross-entropy loss and the task-2 network using the binary cross-entropy loss. If a batch corresponds to task k, then only task k specific parameters are optimized.

## 5 Experiments

## 5.1 Training Setup And Experiments

We conduct experiments to evaluate HMCF on various aspects such as a) hierarchical vs. non-hierarchical architecture; b) multi-task vs. single-task modelling; c) suitability of NLI tasks for a constantly changing label space; d) zero-shot capabilities; and e) impact of level-1 task performance on overall performance. We use a dataset of customer questions posted on an e-commerce website for our experiments. This dataset contains questions from 122 product categories. Taxonomy for all 122 product categories and training data is prepared using the TCDL module. BERT based HMCF and all the other baselines are trained only on the data from 100 product categories. We denote this dataset as CQ100PCTrain. We use the remaining 22 product categories to test the few-shot capabilities of HMCF. To do so, we separately train models on CQ100PCTrain combined with only k (= 5, 10, 20) samples per level-2 topic from the remaining 22 product categories. To reduce the impact of level-1 errors on overall system performance, we also experiment with the following inference strategy. We pair q with all the level-2 topics that map to any of the top-k (k = 1, 2, 3, ...) level-1 topics by output probabilities. Then, the topic t with the highest output score for the (q, t) pair from the level-2 model is chosen as the final level-2 prediction. Since topic identification from CQs is a multi-class classification problem, we choose Top-1 accuracy as the performance criterion for both tasks. We measure the performance of the model on two separate datasets: a) CQ100PCTest and b) CQ22PCTest.
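Before detailing these test sets, the level-2 pair construction of Section 4.5.1 can be made concrete with the short sketch below; the function name, the taxonomy dictionary layout, and the sampling counts are our own illustrative choices rather than the exact production settings.

```python
import random


def build_level2_pairs(question, product_category, level1_topic, level2_topic,
                       taxonomy, all_level2_topics, num_easy=1, num_hard=1):
    """Create (question, topic, label) pairs for the level-2 NLI task.

    `taxonomy[(product_category, level1_topic)]` is assumed to hold T^{p,l1},
    the product-category-specific level-2 topics mapped to that level-1 topic.
    """
    pairs = [(question, level2_topic, 1)]                       # positive pair

    # Easy negatives: random topics from the full level-2 label set.
    easy_pool = [t for t in all_level2_topics if t != level2_topic]
    for topic in random.sample(easy_pool, k=min(num_easy, len(easy_pool))):
        pairs.append((question, topic, 0))

    # Hard negatives: sibling topics t' in T^{p,l1} with t' != l2^p.
    hard_pool = [t for t in taxonomy.get((product_category, level1_topic), [])
                 if t != level2_topic]
    for topic in random.sample(hard_pool, k=min(num_hard, len(hard_pool))):
        pairs.append((question, topic, 0))
    return pairs
```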
CQ100PCTest is a full-shot dataset that contains 15000 manually annotated questions from the same 100 product categories that are used in training. CQ22PCTest is a zero-shot dataset that contains 5000 manually annotated questions from the remaining 22 product categories that are not used for training. Refer to Table 2 for detailed data statistics. We use pre-trained bert-base-uncased² as the base encoder to maintain parameter parity among every framework. We fine-tune all models on a single Nvidia V100 GPU for 2 epochs using the Adam optimizer, a learning rate of 2e−5, and a batch size of 32.

²https://huggingface.co/bert-base-uncased

| Dataset | # questions | # unique level-2 labels |
|--------------|---------------|---------------------------|
| CQ100PCTrain | 81,042 | 2,031 |
| CQ100PCTest | 15,000 | 2,031 |
| CQ22PCTest | 5,000 | 246 |

Table 2: Data statistics

## 5.2 Baselines

BERT-Softmax is a single-task, non-hierarchical BERT model with a linear layer and softmax activation on top of it. The output layer contains 2031 nodes (same as the no. of level-2 topics in the dataset).

BERT-cos is a single-task, non-hierarchical model. In this model, we encode question q and topic t separately using BERT and obtain a 64-dimensional projection using a learnable linear layer. CosineLoss [eq. 2] is used to train the network.

$$\text{CosineLoss}(x, y) = \begin{cases} 1 - \cos(x, y), & \text{if } y = 1 \\ \max(0, \cos(x, y)), & \text{if } y = -1 \end{cases} \tag{2}$$

BERT-NLI is a single-task, non-hierarchical NLI model. It takes a question q and topic t as input. The expected output is 1 if t is an appropriate topic for q and 0 otherwise. During inference, we don't use level-1 information and pair q with all the level-2 topics for the product category p.

BERT-HCF-2-model-NLI differs from BERT-HMCF-1-model-NLI in the following aspects: HCF stands for hierarchical classification framework. For HCF, we train two different models, one for each task: one BERT model as a multi-class classifier for level-1 and another BERT model as an NLI-based binary classifier for level-2. Note that BERT-HCF-2-model-NLI is a hierarchical architecture with single-task models for each task.

BERT-HMCF-1-model-cos and BERT-HCF-2-model-cos are the architectures equivalent to BERT-HMCF-1-model-NLI and BERT-HCF-2-model-NLI, respectively. The only difference is that the level-2 task is trained using CosineLoss between the 64-dimensional linear projections of the individual [CLS] embeddings of the question and the topic.

## 6 Results

In this section, we discuss the results of our experiments in detail. We use the Top-1 accuracy metric in all the experiments. Below is a summary of key observations made from the experiment results.

## 6.1 Key Observations

We capture the detailed results of all experiments in Table 3 and make the following observations.

| Architecture | Task setting | Model | CQ100PCTest level-1 | CQ100PCTest level-2 | CQ22PCTest zero-shot level-1 | CQ22PCTest zero-shot level-2 |
|---|---|---|---|---|---|---|
| Non-hierarchical | single-task | BERT-Softmax | - | 0.59 | - | - |
| Non-hierarchical | single-task | BERT-cos | - | 0.59 | - | 0.58 |
| Non-hierarchical | single-task | BERT-NLI | - | 0.70 | - | 0.67 |
| Hierarchical | single-task | BERT-HCF-cos-2-model | 0.88 | 0.68 | 0.84 | 0.63 |
| Hierarchical | single-task | BERT-HCF-NLI-2-model | 0.89 | 0.79 | 0.83 | 0.75 |
| Hierarchical | multi-task | BERT-HMCF-cos-1-model | 0.9 | 0.69 | 0.86 | 0.59 |
| Hierarchical | multi-task | BERT-HMCF-NLI-1-model | 0.92 | 0.83 | 0.87 | 0.78 |

Table 3: Top-1 accuracy of all architectures on CQ100PCTest (full-shot) and CQ22PCTest (zero-shot)

Hierarchical framework performs better than non-hierarchical framework.
We observe from Table 3 that models trained in hierarchical framework tend to perform better than equivalent nonhierarchical models. For example, BERT-HMCF1-model-NLI yields 13% and 11% absolute improvement in Top-1 accuracy (for level-2 task) over BERT-NLI on the CQ100PCTest and CQ22PCTest datasets, respectively. Superior performance of hierarchical framework can be attributed to the fact that a) level-1 prediction helps level-2 model narrow down focus on limited set of classes, leading to better performance b) models can be trained on hard negative examples under hierarchical framework because of the availability of level-1 information. Multi-task model outperforms single-task model. We can see from Table 3 that, BERTHMCF-1-model-NLI achieves a 4% absolute accuracy improvement over BERT-HCF-2-model-NLI for level-2 task on the CQ100PCTest dataset. This suggests that weight sharing between tasks is helpful in learning more general representations that are useful across both tasks. NLI framework is suitable for level-2 identification. Again, we can see from Table 3 that NLI based architectures outperform equivalent cosine similarity based architectures. The primary reason for this finding can be attributed to the fact that [CLS] embeddings in NLI architecture are obtained by computing attention over both question and topic tokens together. Hence, the [CLS] embeddings obtained are rich in representation as compared to individual question and topic embeddings computed in a cosine similarity based framework. NLI based architecture demonstrates excellent generalization capabilities. We observe from Table 4 that BERT-HMCF-1-model-NLI achieves 78% accuracy on the dataset of 22 product categories on which the model was not trained. Further, with just 10 samples per label, accuracy reaches 82% which is almost at par with full-shot model performance. We conclude that NLI architecture can be thought of as computing a general similarity metric between topic and question. Since this task is domain agnostic, the model can easily adapt to out-of-domain data. | k-shot setting | level-1 | level-2 | |------------------|-----------|-----------| | 0-shot | 0.87 | 0.78 | | 5-shot | 0.89 | 0.81 | | 10-shot | 0.89 | 0.82 | | 20-shot | 0.91 | 0.84 | Table 4: BERT-HMCF-1-model-NLI performance on ![5_image_0.png](5_image_0.png) CQ22PCTest (22 product category dataset) in various few-shot scenarios Using Top-k **level-1 predictions for level-2 inference reduces the impact of level-1 error on** overall performance. Figure 3 demonstrates level2 accuracy with respect to k, where k is the number of top-k level-1 predictions used for level-2 inference. Level-2 performance increases from k = 1 to k = 3 monotonically. The performance reduces from k = 4 onwards, mainly because the model has to differentiate between more topics as k increases. ## 7 Conclusion In this paper, we present HMCF, a hierarchical multi-task classification framework to identify granular topics from customer questions. Through systematic studies, we showcase that NLI based HMCF is more appropriate for our problem as compared to single-task or non-hierarchical architectures and yields 13% absolute improvement in Top1 accuracy over single-task non-hierarchical baselines. We also demonstrate that NLI based HMCF generalizes well on other domains since it learns a domain invariant topic and question similarity metric. 
We also propose a top-k level-1 predictions based inference strategy for level-2 task to reduce the impact of level-1 model errors on overall performance. Further, with the TCDL module proposed in the paper, taxonomy and labelled data creation efforts reduce significantly. We deployed single BERT-HMCF-1-model-NLI to production for 600 product categories and use it to provide actionable insights to the selling partners. Selling partners and brand owners make corrections to the product page or create new attributes to enrich the detail page. Our business teams consume the output to measure the impact of enrichment on product page using the purchase inquiry rate (PIR) metric, the % questions asked WoW. ## References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent dirichlet allocation. *J. Mach. Learn. Res.*, 3 (null):993–1022, mar 2003. ISSN 1532-4435. Chao Zhang, Fangbo Tao, Xiusi Chen, Jiaming Shen, Meng Jiang, Brian M. Sadler, Michelle Vanni, and Jiawei Han. Taxogen: Unsupervised topic taxonomy construction by adaptive term embedding and clustering. *CoRR*, abs/1812.09551, 2018. URL http://arxiv.org/abs/1812.09551. Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. *Journal of Machine Learning Research*, 6: 1453–1484, 09 2005. Koustuv Sinha, Yue Dong, Jackie Chi Kit Cheung, and Derek Ruths. A hierarchical neural attention-based text classifier. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 817–823, Brussels, Belgium, OctoberNovember 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1094. URL https: //aclanthology.org/D18-1094. Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms, 2018. URL https://arxiv.org/abs/1803.02999. Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning, 2017. URL https://arxiv.org/abs/1703.05175. Jian-Guo Zhang, Kazuma Hashimoto, Wenhao Liu, Chien-Sheng Wu, Yao Wan, Philip S. Yu, Richard Socher, and Caiming Xiong. Discriminative nearest neighbor few-shot intent detection by transferring natural language inference, 2020. URL https: //arxiv.org/abs/2010.13009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2018. URL https://arxiv.org/abs/1810. 04805. Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks, 2019. URL https://arxiv.org/abs/1908.10084. ## A Discussion On Output Of Bert-Hmcf-Nli-1-Model We can observe from Table 5 that the level-2 topics predicted are very granular and actionable. We also did error analysis to understand error patterns. The major error contribution comes from errors made by level-1 model. If the level-1 model makes an error, the level-2 prediction will be incorrect with probability 1. We observe that errors made by level-1 contribute to ∼ 60% of overall errors. Next, we also observe that, ∼ 25% of the errors are contributed by questions with multiple intents (example 7,8 in Table 5). In such cases, even though the model predicts one correct topic, it misses predicting another topic, and we count it as an error. This error occurs due to an architectural limitation because our architecture only predicts one topic. 
There are very few cases where we observe that the model learns to put undue emphasis on certain keywords ("back" in topic and question, example 9 in Table 5) and gives the highest score to the wrong topic.

| id | Customer question | Level-1 | Level-2 | is level-2 correct? | correct topic |
|------|------------------------------------------------------------------|-----------------------|--------------------------|-----------------------|------------------------------------------------------------------|
| 1 | What is seat height? | size | seat height | yes | - |
| 2 | Does this sofa set include ottoman too? | pack/quantity | includes ottoman | yes | - |
| 3 | Can I use it outdoor? | usage | indoor or outdoor usage | yes | - |
| 4 | Can this chair serve as a study chair for children? | compatibility | suitable for studying | yes | - |
| 5 | does it rust? | usage | rustproof | yes | - |
| 6 | is this a set of 2? | pack/quantity | quantity | yes | - |
| 7 | What is the height? what is the width? | size | seat height | no | ("size", "seat height"), ("size", "seat width") |
| 8 | Is the table green color? Also, what is the size of white table? | size | table dimensions | no | ("product specification", "color"), ("size", "table dimensions") |
| 9 | what is the inscription on the back? | product specification | front/back specification | no | ("product specification", "back inscription") |
| 10 | can I place 10kg oven on this table? | usage | placement | no | ("product specification", "weight limit") |

Table 5: Example of customer questions and BERT-HMCF-1-model-NLI output

## B Taxonomy And TCDL Module Output

| Level-1 | Level-2 |
|-----------------------|----------------------------------------------------------------------------------|
| size | [arm height, foot rest height, depth without cushion, ...] |
| product specification | [color, reclining angle, rocking feature, ...] |
| usage | [adjustable lumber, foldable, can be used outdoors, ...] |
| material | [cushion material, leather type, arm material, ...] |
| compatibility | [suitable for certain heights, suitable for kids, suitable for studying, ...] |

Table 6: Example of taxonomy for CHAIR category

| Questions in a cluster | Top-3 keywords | Level-2 topic | Level-1 topic |
|---|---|---|---|
| what is the height of seat from the floor?, how high is the seat from the ground?, what is seat height? | seat, height, floor | seat height | size |
| Can this be used by men that weight 250pds...Top and bottom?, What is the weight limit?, what's the weight capacity for this? I'm an adult of about 300 pounds., What is the weight limit? Can an adult use it? | weight, limit, capacity | weight limit | product specification |
| Can its height be adjusted?, How high can this adjust to?, Is it adjustable to height or is it one height? limit?, Does it have different adjustable hights and what is the lowest setting?, Is height adjustable | adjust, height, limit | height adjustability | usage |

Table 7: Example of output of taxonomy creation and data labeling module
sharma-etal-2023-automated
Automated Digitization of Unstructured Medical Prescriptions
https://aclanthology.org/2023.acl-industry.76
Automated digitization of prescription images is a critical prerequisite to scale digital healthcare services such as online pharmacies. This is challenging in emerging markets since prescriptions are not digitized at source and patients lack the medical expertise to interpret prescriptions to place orders. In this paper, we present prescription digitization system for online medicine ordering built with minimal supervision. Our system uses a modular pipeline comprising a mix of ML and rule-based components for (a) image to text extraction, (b) segmentation into blocks and medication items, (c) medication attribute extraction, (d) matching against medicine catalog, and (e) shopping cart building. Our approach efficiently utilizes multiple signals like layout, medical ontologies, and semantic embeddings via LayoutLMv2 model to yield substantial improvement relative to strong baselines on medication attribute extraction. Our pipeline achieves +5.9% gain in precision@3 and +5.6% in recall@3 over catalog-based fuzzy matching baseline for shopping cart building for printed prescriptions.
# Automated Digitization Of Unstructured Medical Prescriptions Megha Sharma∗ Amazon Tushar Vatsal∗ Amazon Srujana Merugu Amazon Aruna Rajan† Google ## Abstract Automated digitization of prescription images is a critical prerequisite to scale digital healthcare services such as online pharmacies. This is challenging in emerging markets since prescriptions are not digitized at source and patients lack the medical expertise to interpret prescriptions to place orders. In this paper, we present prescription digitization system for online medicine ordering built with minimal supervision. Our system uses a modular pipeline comprising a mix of ML and rule-based components for (a) image to text extraction, (b) segmentation into blocks and medication items, (c) medication attribute extraction, (d) matching against medicine catalog, and (e) shopping cart building. Our approach efficiently utilizes multiple signals like layout, medical ontologies, and semantic embeddings via LayoutLMv2 model to yield substantial improvement relative to strong baselines on medication attribute extraction. Our pipeline achieves +5.9% gain in precision@3 and +5.6% in recall@3 over catalog-based fuzzy matching baseline for shopping cart building for printed prescriptions. ## 1 Introduction In recent years, prompted by the COVID pandemic, there has been a rise in the adoption of online pharmaceutical services leading to improved access to medications and health outcomes. However, in emerging markets such as India, online pharmacy ordering continues to be challenging since prescriptions tend to be paper-based, unstructured and often, handwritten, which makes digitization a vital prerequisite. For in-store purchases, customers follow a simple process of presenting a prescription to the store pharmacist who interprets it and fulfills the order. Current e-commerce purchase process, however, imposes a significant cognitive load on customers since they have to explicitly ∗Equal contribution †Work done while at Amazon specify the medicines. This process is onerous for the customers due to their (a) unfamiliarity with the ordering process, (b) difficulty in understanding prescriptions, and (c) lack of expertise to interpret medical acronyms and identify substitute medicines. Further, most online pharmacies have a post-cart creation workflow where customers upload the prescription to be verified by a remote pharmacist. Lack of pharmacist capacity often leads to long wait time making the process unscalable. Therefore, an automated system that converts prescription images to a digitized form to facilitate search-less shopping is essential for the success of online pharmacies. In particular, we need to extract the medical advice section which contains a list of medication items, each of which is a record of multiple fields such as BRAND-NAME. Challenges. Addressing this problem is non-trivial due to multiple reasons shown in Figure 3a: (a) variability in prescription image quality, background, and orientation, (b) diversity of layouts and doctor styles, (c) high prevalence of typos that create confusion between similar items (e.g., Fibrodone and Firodone), (d) specialized vocabulary of regional prescriptions, and (e) the need for converting dosage-specific instructions to a precise product order. Additionally, there are limited labeled prescriptions due to the high manual effort it entails. Related work. 
While there have been significant advances in document AI [6, 15, 19] and information extraction [29, 21, 13] techniques, most of these methods are effective only on images of wellformatted documents such as invoices. Besides, these generic methods require significant supervision and are not sufficiently modular to support a phased automation of the prescription processing workflow. Recent work on digitizing medical prescriptions [27] is focused on using named entity recognition (NER) methods for medication attribute detection, but these models perform poorly on non-US prescriptions due to vocabulary gaps 794 ![1_image_0.png](1_image_0.png) and do not utilize the layout or catalog information. We present additional related work in Appendix B. Contributions. In this paper, we present a study on automated digitisation of prescription images with printed content which covers design, data, modeling and evaluation aspects. We discuss experimental results for our emerging modular pipeline which comprises a mix of ML and rule-based components for (a) image to text extraction and normalization (b) segmentation into blocks, medication items and extraction of medication attributes, (c) matching against pharmacy catalog, and (d) shopping cart building. We detail how our approach efficiently combines layout signal, medical ontologies, and semantic embeddings via LayoutLMv2 model to yield substantial improvement relative to strong baselines on medication attribute extraction, and results in +5.9% and +5.6% gain in precision@3 and recall@3 over catalog-based fuzzy matching baseline for printed prescriptions. We discuss key learnings relevant for low data regime document AI systems in addition to presenting component-wise efficacy of our pipeline and results from ongoing experiments (Appendix A.4.2) to highlight future directions. We present safety aspects in Section 9. ## 2 Prescription Digitization Problem Given a prescription image, a natural choice for digitization is in terms of conversion to a structured prescription object as per a global standard such as the Fast Healthcare Interoperability Resources (FHIR) framework [3]. Since our objective is to create a shopping cart for automated medicine ordering we focus on populating the relevant fields only in the FHIR prescription schema (Table 5). To accommodate the nuances of regional medications, we define each *medicine* in the pharmacy catalog as a unique tuple of BRAND-NAME or GENERIC-NAME1, FORM, INGREDIENT and STRENGTH. A unique pair of a medicine and package details corresponds to a stock keeping unit (SKU). Figure 1 depicts the various stages of processing a prescription (denoted by hA, hD, hO) that results in the successive creation of following entities: (a) Annotated Prescription is a visually rich document (VRD) comprising labeled rectangular bounding boxes (BBs). Each BB is associated with text and a list of annotations, which include the start-end offsets and labels corresponding to medication attributes, item boundaries, and block type, *(b) Digitized Prescription* is a structured object with canonical entries following the FHIR-based prescription schema, *(c) Pharmacy Order* is a list of SKUs from the prescription along with the recommended quantities. The conversion to a pharmacy order (hO) can be enabled via a deterministic lookup using a medicine-SKU map if the medication codes in the digitized prescription are from the catalog. Hence, we focus on the non-trivial transformations hA and hD that entail a data-driven approach. 
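To fix ideas, the entities introduced above can be written down as simple record types. The sketch below is a minimal rendering of the medicine, SKU, and FHIR-inspired prescription fields referenced in this section (and in Table 5 of the appendix); the class and field names are our own illustrative choices, not the production schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Medicine:
    """A catalog medicine: a unique tuple of name, form, ingredient and strength."""
    name: str               # BRAND-NAME or GENERIC-NAME
    form: str               # e.g., "tablet", "syrup"
    ingredient: str
    strength: str           # e.g., "500 mg"


@dataclass
class SKU:
    """A stock keeping unit: a medicine paired with package details."""
    medicine_id: str
    package: str            # e.g., "30 ml bottle"
    dosage_amount: float    # standard dosage amount per pack, e.g., 30.0 (ml)


@dataclass
class MedicationItem:
    """One digitized medication entry (a subset of the FHIR-based fields of Table 5)."""
    medicine_id: str
    duration_days: Optional[int] = None
    consumption_pattern: Optional[str] = None   # e.g., "1-0-1"
    dosage_size: Optional[float] = None
    dosage_units: Optional[str] = None          # e.g., "ml", "tablet"
    as_needed: bool = False                     # SOS medications
    additional_instruction: Optional[str] = None


@dataclass
class DigitizedPrescription:
    items: List[MedicationItem] = field(default_factory=list)


@dataclass
class PharmacyOrder:
    """Final output of the pipeline: SKUs with recommended quantities."""
    lines: List[Tuple[str, int]] = field(default_factory=list)   # (sku_id, quantity)
```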
Let P = {pi} N i=1 denote the set of the available prescription images for training. For the i th prescription pi, let (ai, di, oi) denote the human annotated prescription, digitized prescription, and pharmacy order obtained from expert pharmacists, i.e., ai = hA(pi), di = hD(ai), oi = hO(di). Typically, pharmacists directly create or validate pharmacy orders from a prescription image without any record of the intermediate annotation and digitization. Since these prescription-order pairs are inadequate for an end-to-end neural model, we explicitly gather supervision for the intermediate stages for a subset of the prescriptions to enable a pipelined approach. Let z A i , zD i , zO idenote binary indicators of the availability of ai, di, oi respectively. Further L A(·, ·), LD(·, ·), LO(·, ·) be suitable loss functions for comparison of pairs of candidate annotated versions, digitized versions, and orders corresponding to a prescription such as the accuracy of annotations, matching with canonical entities, and the constructed order respectively as shown in Table 2. Then, the training objective 1Generic names are globally approved, e.g., paracetamol, while brand names are manufacturer given e.g., Calpol. is to learn mappings hˆA and hˆD that produce high fidelity reconstructions of the processed versions of the prescription and can be viewed as a loss minimization: $$\begin{array}{c}{{\operatorname*{min}_{\hat{h}_{A},\hat{h}_{D}}\Big[\sum_{i|z_{i}^{A}=1}L_{A}(a_{i},\hat{h}_{A}(p_{i}))}}\\ {{+\sum_{i|z_{i}^{D}=1}L_{D}(d_{i},(\hat{h}_{A}\circ\hat{h}_{D})(p_{i}))}}\\ {{+\sum_{i|z_{i}^{O}=1}L_{O}(o_{i},(\hat{h}_{A}\circ\hat{h}_{D}\circ h_{O})(p_{i}))\Big]}}\end{array}$$ ## Given A New Prescription, The Learned Mappings (HˆA, HˆD) Along With Ho Yield A Pharmacy Order. 3 Solution Design 3.1 Design Choices We discuss the key tenets and design choices of our prescription digitization approach (Figure 3b). Modularity. Supporting phased automation of user-driven cart building and pharmacist-driven validation workflows entailed a modular pipeline. Solution choice dependent on input signals. Limited labeled prescription data coupled with access to medical ontologies made it prudent to choose a hybrid combination of rule-based and ML modules instead of an end-to-end deep neural model. Interoperability. The need to interface with other healthcare systems led us to choose a data representation based on global FHIR standards. Extension over reinvention. Fast and scalable implementation required use of existing solutions for sub-problems wherever acceptable and focusing on exploration of the harder sub-problems. ## 3.2 Components We describe components of Figure 3b below. Text extraction & VRD Normalization. First, we identify OCR bounding boxes (BBs) and extract the text from these BBs. Then we perform rotation and background cropping, using the position coordinates of BBs, to create normalized VRDs with more homogeneous layouts as shown in Figure 2. ![2_image_0.png](2_image_0.png) Entity Annotation. Annotating BBs comprises three tasks corresponding to stage (b) of Figure 1: (a) detecting block(s) containing medical advice of doctor, (b) chunking of words, within medical advice block, related to medication(s) into item(s), and (c) extracting medication attributes such as brand name, duration of consumption from an item. 
Though a joint model that optimizes Pi|zA i =1 LA(ai, hˆA(pi)) to simultaneously detect blocks, medication items, and attributes seems like a natural choice, it is prohibitive due to the constraints on amount of supervision, computational effort, and limits on context size of NLP models (usually 512 tokens). We simplify this problem by solving the sub-tasks in the order (a → c → b). Advice block detection reduces sequence length (as shown in figure 4) permitting transformer-based encodings and increasing precision for later tasks. In this task, we construct latent representation of the BBs based on position, semantics, and membership in medical ontologies and learn a supervised classification model to predict whether a BB contains medical advice or not. We perform step (c) and (b) only on BBs predicted as advice blocks. For medication attributes, we label each token in advice BB using our NER model into one of 7 classes (DURATION, FORM, FREQUENCY, INGREDIENT, ITEM-MARKER, BRAND-NAME, and STRENGTH). Sequence of NER predictions are fed into our heuristic algorithm for medication item chunking that leverages relative positions of BRAND-NAME, STRENGTH and FORM tokens. **Matching** and Canonicalization. The next step (hˆD) is to map each annotated medication item in the prescription (e.g., T.[FORM] Crocin[BRAND-NAME], 5 ml[STRENGTH]) to a medicine ID in the pharmacy catalog using extracted attributes. For this we use our Pharmacy product catalog as a reference. This catalog contains all medicine products listed on our website and each product is described by a set of attributes such as BRAND-NAME and FORM. We adopt a two-stage approach comprising: (a) identifying candidates by fuzzy matching predicted BRAND-NAME with that in catalog, (b) computing a match score based on FORM and STRENGTH to identify the best matching medicine ID using either a rule-based or an ML classifier. Cart Building. The final step (hO) is to construct the pharmacy order, i.e., list of required SKUs and their quantities. To enable this, the standard dosage amount of SKU is computed during catalog creation, e.g., 3 packs of 30 ml bottle maps to 90 ml. From the digitized prescription, total recommended dosage amount can be computed from dosage duration, daily dosage pattern and units to be consumed at a time. Appropriate SKU and its quantity can be ## 4 Data Collection Due to the sensitive nature of prescriptions and recent emergence of our medicine ordering application, a public dataset of unstructured prescription images does not exist to the best of our knowledge. External benchmarks such as [5] only contain clean text without any layout information. Hence, we use a proprietary dataset of 1359 Indian prescriptions paired with (fully or partial) digitized orders, delinked from customer IDs. These are mostly or fully printed prescriptions and have been validated by our in-house experts. The prescription images are modified as follows prior to modeling. AWS Textract, which is security certified for critical data, is used to extract text from images. The obtained text is then run through AWS Comprehend to detect personally identifiable information such as patient/doctor names, phone numbers and then the corresponding OCR bounding boxes are grayed out. For a subset of prescriptions, we procured in-house human annotations for supervised training of all components. 
Ground-truth text, BBs for medical advice blocks, labeled text spans for medication attributes, as well labels for pairs of candidate and ground truth SKUs for medication matching were annotated in the prescription image by the annotators. More details on the annotation tasks are given in Table 6. Table 1 lists details of the training and evaluating splits for various components. Given the expensive labeling effort, this data size is realistic for early-stage specialized document AI systems. ## 5 Experimental Results We present our evaluation method and results on the efficacy of the full system and various components with focus on medication attribute extraction. ## 5.1 Evaluation Methodology Practical systems need to be evaluated during development (offline metrics) and post deployment (online metrics). Table 1 lists our offline evaluation metrics. Most of these are self-explanatory except Brand match which is the percentage of medicine brand names ordered by the customers in the extracted text and indicates the medical text extraction efficacy. The online metrics of our system (not reported for proprietary reasons) depend on whether the digitization is integrated into the pharmacist processing flow or the customer-facing UI. These include rate of correction of automated cart suggestions, reduction in cart-building time, reduction in order rejections during verification stage as well as business metrics on the order volume. ## 5.2 Component-Wise Efficacy Table 2 lists the metrics of various components of our digitization pipeline, which we discuss below. Text Extraction. Due to limited supervision, we use pretrained off-the-shelf solutions. AWS Textract is our preferred choice as it provides a higher brand match (+7%) than AWS Rekognition as the latter has limit on the number of extractable words. Advice Block detection. To reduce complexity for downstream tasks, we first detect medical advice blocks. We employ a two-stage solution (see Figure 4) of (a) clustering BBs using K-means on their positional coordinates, and (b) classifying each cluster as advice block or not using XGBoost [8] classifier trained on cluster position, and fractions of medical and printed words. Lastly, adjacent advice blocks are merged. This method can be extended to other block types (e.g., header, footer) using a multi-class classifier and block-type indicators. Our solution results in an operational point with 94.8% recall, 88.1% precision, reduction in block size (Figure 5) and mostly homogeneous clusters (homogeneity score: 0.857). Common errors occur due to: (a) sparse text that cannot use local semantic context well leading to false positives, and (b) long lines that are ideally a single cluster but split because of the high divergence in the horizontal dimension. Medication Item Chunking. We exploit the observation that attributes of a medication are contiguous with BRAND NAME preceding STRENGTH and the ordering relative to FORM being flexible. Let tk be the k th detected BRAND NAME token. For each tk, we construct up to two candidate medications with brand based on tk, STRENGTH based on the closest STRENGTH token to tk in the span (tk, tk+1), and FORM derived from the FORM tokens closest to tk on either side in the spans (tk−1, tk) and (tk−1, tk). Our approach yields high accuracy (97.2%) obviating the need for an ML system. Matching and Canonicalization. As discussed in Section 3.2, we employ a two-stage approach of filtering and ranking using match score. 
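The filtering stage can be illustrated with a simple fuzzy brand-name lookup against the catalog. This is a sketch using Python's standard difflib rather than our production matcher; the catalog layout, cutoff, and function names are illustrative assumptions.

```python
import difflib


def candidate_medicines(predicted_brand, catalog, top_n=10, cutoff=0.6):
    """Stage 1: shortlist catalog medicines whose brand name fuzzily matches the NER output.

    `catalog` is assumed to map brand name -> list of medicine records,
    each carrying FORM and STRENGTH used later for match scoring.
    """
    brand = predicted_brand.lower().strip()
    close = difflib.get_close_matches(brand, list(catalog), n=top_n, cutoff=cutoff)
    scored = []
    for name in close:
        similarity = difflib.SequenceMatcher(None, brand, name).ratio()
        for record in catalog[name]:
            scored.append((similarity, record))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)


# Example: a typo like "Firodone" still retrieves "Fibrodone" candidates,
# which stage 2 then ranks using FORM and STRENGTH agreement.
catalog = {"fibrodone": [{"form": "tablet", "strength": "100 mg"}],
           "crocin":    [{"form": "syrup",  "strength": "120 mg/5 ml"}]}
print(candidate_medicines("Firodone", catalog))
```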
For match score computation, we consider two methods using the same attribute fuzzy scores as inputs: 1) rule- ![4_image_0.png](4_image_0.png) ![4_image_1.png](4_image_1.png) based one with heuristics for handling missing values and 2) XGBoost classifier that predicts whether a chunked item matches a candidate SKU using the same features as rule-based one except heuristics. Empirically, we see that the former approach yields +18% gain over the latter in precision@1 of the top matching item. Note that precision@1 is the same as recall@1 due to stand-alone evaluation. ## 5.3 Medication Attribute Extraction Table 3 shows performance of various approaches based on multiple input signals: **(a) Catalog Features:** These include membership scores of tokens with respect to dictionaries of BRAND-NAME, INGREDIENT, STRENGTH (e.g., mg) and FORM (e.g., tablet) created from the catalog, **(b) Semantic features:** These include contextual text embeddings derived from transformer models such as BERT [10] and MedBERT [25] pretrained on Wikipedia and PubMed respectively, **(c) Layout** features: Since the layout provides extra information, e.g., text in the middle is usually medical advice, we use LayoutLM [32] and LayoutLMv2 [31] models, which have multi-modal Transformer architecture as backbone and utilize layout, visual, and textual features to learn cross-modal interactions, and **(d) Collective labeling:** We use the linear conditional random field (CRF) loss to exploit relationships amongst labels, e.g. BRAND-NAME often lies between FORM and STRENGTH. Note than in Table 3, the token level metrics are weighted with token length so that errors on small tokens are less penalized and OTHER tokens are excluded as these are not critical for the application. From the results, we observe that XGBoost trained with catalog features performs comparable to custom Comprehend fine-tuned on our data illustrating the importance of catalog signal. While BERT-based models using semantic features further improve the performance, the best accuracy is seen when we incorporate layout features (LayoutLMV2 variants) as well. Note that models such as Comprehend Medical and MedBERT are not suitable for our problem as these are not trained on the Indian medicine vocabulary. Ablation Studies. To evaluate the efficacy of the various signals as well as modeling sequential dependencies via CRF, we conducted ablation studies. Since LayoutLMv2 already uses semantic and layout features, we added collective labeling (LayoutLMv2 + CRF) and catalog features (LayoutLMv2 + CF) separately and in a combined setting (LayoutLMv2 + CF + CRF). We note from Table 3 that performance only changes marginally. Similar behavior is observed when using BERTvariants indicating that catalog and collective labeling are subsumed by the semantic and layout encoding. Good performance of XGBoost + catalog features variant points to presence of non-linear interactions and value of catalog signal. 
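As a concrete illustration of this catalog signal, the sketch below derives per-token dictionary-membership and fuzzy-match features of the kind consumed by the XGBoost variant; the dictionary contents, feature names, and regular expression are illustrative, not our production feature set.

```python
import difflib
import re

# Dictionaries derived from the pharmacy catalog; the entries below are illustrative.
ATTRIBUTE_DICTIONARIES = {
    "BRAND-NAME": {"crocin", "fibrodone", "losarh"},
    "INGREDIENT": {"paracetamol", "losartan"},
    "FORM": {"tab", "tablet", "capsule", "syrup", "ointment"},
    "STRENGTH": {"mg", "ml", "mcg", "iu"},
}


def catalog_features(token):
    """Per-token membership and fuzzy-match scores against catalog dictionaries."""
    token_l = token.lower()
    features = {}
    for attribute, vocabulary in ATTRIBUTE_DICTIONARIES.items():
        best = max((difflib.SequenceMatcher(None, token_l, word).ratio()
                    for word in vocabulary), default=0.0)
        features[f"fuzzy_{attribute}"] = best
        features[f"exact_{attribute}"] = float(token_l in vocabulary)
    features["is_numeric"] = float(bool(re.fullmatch(r"\d+(\.\d+)?", token_l)))
    return features

# The per-token feature vectors are stacked into a matrix and fed to a multi-class
# classifier, e.g. xgboost.XGBClassifier(objective="multi:softprob"), trained on the
# seven medication attribute labels plus OTHER.
```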
Further studies (Appendix A.4.2) indicate that the performance depends on the quality and diversity of supervision more than than the quantity pointing to | Task | Data split | Metrics | Approaches | |---------------------------------|-----------------|---------------------------------|-----------------------| | Text extraction | (-, -, 10) | Brand match % | Textract, Rekognition | | Block detection | (44, 5, 13) | Precision (P), Recall (R) | K-means + XGBoost | | Medication attribute extraction | (977, 190, 192) | Token & entity level P, R, F1 3 | Refer Table 3 | | Medication Item Chunking | (977, 190, 192) | Accuracy | Rule based | | Matching & canonicalization | (272, 46, 58) | Precision@k (P@k) | Rule based, XGBoost | Table 1: Details of dataset (train,val,test splits), evaluation metrics, and approaches for sub-tasks. Table 3: Performance of various NER methods on medication attribute extraction. | Method | Brand match% | Method | Precision | Recall | Method | P@1 | Method | Accuracy | |--------------------------------------------------------------------------------------------|-------------------|----------|-------------|----------|----------|--------|----------|------------| | Textract | 56.17 | | | | | | | | | Rekognition | 49.38 | | | | | | | | | (a) Text Extraction | K-means + XGBoost | 0.881 | 0.948 | | | | | | | (b) Block Detection | Rule based | 0.945 | | | | | | | | XGBoost | 0.765 | | | | | | | | | (c) Matching | Rule based | 0.972 | | | | | | | | (d) Medication Chunking | | | | | | | | | | Table 2: Efficacy of various stages of pipeline excluding medication attribute extraction. | | | | | | | | | | Type | Model | Token | Token | Token | Entity | Entity | Entity | | | Precision | Recall | F1 | Precision | Recall | F1 | | | | | AWS Solutions | Custom Comprehend | 0.955 | 0.882 | 0.915 | 0.774 | 0.790 | 0.782 | | | Catalog Features (CF) | CF + XGBoost | 0.973 | 0.870 | 0.917 | 0.766 | 0.780 | 0.773 | | | Semantic Features | BERT | 0.975 | 0.913 | 0.942 | 0.802 | 0.829 | 0.815 | | | MedBERT | 0.974 | 0.893 | 0.931 | 0.802 | 0.811 | 0.806 | | | | Layout Features | LayoutLM (LLM) | 0.981 | 0.918 | 0.948 | 0.826 | 0.834 | 0.830 | | | LLMv2 | 0.983 | 0.926 | 0.953 | 0.829 | 0.842 | 0.835 | | | | LLMv2 + CF | 0.981 | 0.915 | 0.946 | 0.835 | 0.837 | 0.836 | | | | Collective Labeling | BERT + CRF | 0.974 | 0.903 | 0.936 | 0.800 | 0.816 | 0.808 | | | BERT + CF + CRF | 0.974 | 0.896 | 0.932 | 0.808 | 0.814 | 0.811 | | | | LLMv2 + CRF | 0.982 | 0.921 | 0.950 | 0.835 | 0.840 | 0.838 | | | | LLMv2 + CF + CRF | 0.983 | 0.921 | 0.950 | 0.830 | 0.835 | 0.832 | | | the benefits of using active learning approaches. Error Diagnosis. Table 4 presents an error diagnosis of our best model (LayoutLMv2) and areas of improvement such as deducing labels from context (e.g. "tablet once a day" → Frequency). Figure 9 presents the confusion matrix of different medication attribute classes. ## 5.4 Overall Cart Building Efficacy We evaluate the overall pipeline on a test set of 179 orders (71% are partially digitized) consisting of 200 digitized medication items. We predict top K (K=3) SKUs for each medicine identified in the prescription image for customer safety and evaluate our approaches on precision@3 (i.e., fraction of predicted being in the ground truth orders) and recall@3 (i.e., fraction of actual ordered medications being detected). Since precision estimate is based on partial orders, it is pessimistic. 
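Under one plausible reading of these definitions, the cart-level metrics can be computed as in the sketch below, where each detected medication contributes a top-3 SKU list and the ground truth is the (possibly partial) validated order; the function and variable names are ours.

```python
def precision_recall_at_k(predicted_lists, ground_truth_skus, k=3):
    """Cart-building metrics: `predicted_lists` holds one top-k SKU list per detected
    medication; `ground_truth_skus` is the set of SKUs in the validated order."""
    truth = set(ground_truth_skus)
    # A detected medication counts as correct if any of its top-k SKUs is in the order.
    hits = sum(any(sku in truth for sku in preds[:k]) for preds in predicted_lists)
    # A ground-truth SKU counts as detected if it appears in any top-k list.
    matched = {sku for preds in predicted_lists for sku in preds[:k] if sku in truth}
    precision_at_k = hits / len(predicted_lists) if predicted_lists else 0.0
    recall_at_k = len(matched) / len(truth) if truth else 0.0
    return precision_at_k, recall_at_k
```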
The baseline method performs fuzzy matching of attributes (e.g., BRAND-NAME, FORM) of catalog items with n-grams from the complete prescription text and selects the top K SKUs for each prescribed medicine. Our proposed approach combines the best version of each component from Section 3.2 and achieves **+5.9%** in precision@3 and **+5.6%** in recall@3 over the baseline.

Error diagnosis. Primary gaps in our approach include: (a) Text extraction errors, e.g., capsule extracted as "apsule" resulting in misclassification as form; (b) Limited semantic understanding of the model, e.g., "once a month" denotes Frequency but was predicted as Duration; (c) Tokens not exclusively associated with a label, e.g., "syrup" is usually Form, but "corn syrup" is an Ingredient; and (d) Minor variations in medication attributes (e.g., "LosarH" vs "LosarCH") which can be handled by including INGREDIENT during matching.

Table 4: Confusion between actual and predicted medication attribute labels (ITEM-MARKER, INGREDIENT, FORM, DURATION, BRAND-NAME, STRENGTH, FREQUENCY, OTHER), illustrated with example token spans.

## 6 Learnings

Below are our key learnings on building document AI systems for the low-data regime:

Annotation design is critical. Annotation tasks (drawing BBs, text chunking) should be well-specified with low cognitive load and include all the relevant input (e.g., raw images) to avoid cascading errors. This is especially true for annotations on VRD output from OCR which could itself be erroneous. Building an annotation UI that leverages existing models but allows for manual corrections as part of a semi-automated workflow is an ideal strategy for progressive improvement.

Divide and conquer. Despite the ubiquity of end-to-end neural models, it is vital to choose a solution approach based on application constraints, e.g., data limitations and the need for modularity to support phased development and auditability. We adopted a divide-and-conquer approach by partitioning our problem into sub-tasks which could be solved separately using domain knowledge where possible. Our multi-stage solution is extensible and reusable across different workflows and data segments.

Model and problem complexity should match. Ideal performance is obtained when the complexity of the approach matches that of the problem conditioned on available data and domain knowledge. We noticed in our case that richer ML models were comparable to or under-performed simpler ML models and domain heuristic-based approaches on the medicine chunking and matching tasks due to limited data.

## 7 Concluding Remarks

Prescription digitization is a critical enabler of online pharmacy services. We present a holistic, modular approach to address this problem in a low-data regime using hybrid ML and rule-based components.
Our approach uses layout signals, medical ontologies, sequential dependencies, and semantic embeddings to yield significant improvement over baselines and good performance on unstructured printed prescriptions. Ongoing directions include using active learning to judiciously label data (section A.4.2), pseudo labeling of partially digitized orders and digitizing handwritten prescriptions. ## 8 Limitations Our prescription digitization approach has a few limitations but is still effective for a broad enough application domain and permits future enhancements that address these limitations. First, our system uses an off-the-shelf text extraction tool (AWS Textract) that provides accurate extractions on printed prescriptions but has variable performance on hand written data depending on the legibility of the handwriting. In future, we plan to build a specialized extraction model trained to recognise medical practitioner's handwriting to replace AWS TextExtract. Further, multiple components in our approach (e.g., attribute extraction) have been trained on primarily English transcriptions. Extension to other language prescriptions requires access to medical vocabulary and training data in those languages. Note that AWS Textract supports multiple languages and can be readily paired with an automated translator to convert the content to English. We did not consider this option since multilingual prescriptions in India tend to have mixed content with medications written in English itself. Lastly, the performance of multiple tasks such as advice block detection, medication attribute extraction and matching-canonicalization depends on the coverage of the available medical catalog. ## 9 Safety And Ethics Statement Our motivation is to improve access and affordability to online pharmaceutical services in emerging markets such as India through accurate and easy digitization of medical prescriptions. Given the sensitive nature of medical prescriptions and the associated health impact, it was critical to pay attention to multiple aspects that we discuss below: Secure and Privacy-safe Data Collection: Privacy of customer data is paramount to us. Hence, prior to modeling, we remove customer, facility and practitioner information by obscuring the regions containing personally identifiable fields such as names, phone numbers, and addresses, which are identified using security-certified AWS services (AWS Comprehend, AWS Textract). Model Bias: A key limitation of the existing medical NER models is their poor performance on nonUS and EU prescriptions due to bias in the training data, which is almost exclusively based on USEU centric medical content and vocabulary. In our approach, we have deliberately chosen to have explicit dependence on aspects that vary across geographical regions (e.g., medical catalog), which enhances the applicability of our approach. To further limit the model bias and minimize distributional differences between training and production settings, we have trained our models on prescription images that are randomly sampled from customer uploads. These often include low resolution and improperly positioned images. In future, as the scope of deployment changes, we plan to periodically retrain the model with training images by sampling from the production data. Health Safety: One of the primary concerns in prescription digitisation is the impact of errors on patient health and adherence to health regulations. To alleviate adverse outcomes, we have multiple guardrails. 
First, we present the top three suggestions along with scores for each medication for two-fold review by customer and pharmacist. Second, to avoid prescription abuse (e.g., manipulation of quantities, prescription reuse) and comply with regulations, there are additional checks based on the prescription date, patient purchase history, and recommended limits on medication quantities. Usage for a Limited Scope: Our proprietary system has been trained for a specific-use case, i.e., prescription digitization with acceptable performance on primarily English printed prescriptions for India region. We plan to use the model within this limited scope and expand usage only after adequate benchmarking. To limit the risks of misuse, we do not plan to release this system externally. ## 10 Acknowledgements We would like to thank our product team (Sourabh Jeswani, Jayaramakrishnan Balasubramanian) and engineering team (Varnit Agnihotri, Ranjith Sompalli, Saurabh Gupta, Amir Kamal, Vinay Vaidya) for supporting us during problem formulation, data gathering and providing continuous feedback. ## References [1] Amazon Rekognition. https://aws.amazon.com/ rekognition/. [2] Amazon Textract. https://aws.amazon.com/ textract/. [3] Fast Healthcare Interoperability Resources. https: //www.hl7.org/fhir/. [4] Google Cloud Vision. https://cloud.google. com/vision/. [5] NLP Research Data Sets. https://www.i2b2.org/ NLP/DataSets/. [6] Joe Barrow, Rajiv Jain, Vlad Morariu, Varun Manjunatha, Douglas Oard, and Philip Resnik. A joint model for document segmentation and segment labeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 313–322, 2020. [7] David S. Batista. Named-entity evaluation metrics based on entity-level. https: //www.davidsbatista.net/blog/2018/05/09/ Named_Entity_Evaluation/, 2018. [8] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In *Proceedings of the 22nd* ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, page 785–794, 2016. [9] Arthur Flor de Sousa Neto, Byron Leite Dantas Bezerra, Alejandro Héctor Toselli, and Estanislau Baptista Lima. HTR-Flor: A deep learning system for offline handwritten text recognition. In 33rd SIBGRAPI Conference on Graphics, Patterns and Images, pages 54–61, 2020. [10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019. [11] Son Doan, Lisa Bastarache, Sergio Klimkowski, Joshua C Denny, and Hua Xu. Integrating existing natural language processing tools for medication extraction from discharge summaries. *Journal* of the American Medical Informatics Association, 17(5):528–531, 2010. [12] Alex Graves and Jürgen Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In *Advances in Neural Information* Processing Systems, volume 21, 2008. [13] Benedict Guzman, Isabel Kayu Metzger, Yindalon Aphinyanagphongs, and Himanshu Grover. Assessment of amazon comprehend medical: Medication information extraction. *ArXiv*, abs/2002.00481, 2020. [14] Cheryl Clark Meredith Keybl David Tresner-Kirsch John Aberdeen, Samuel Bayer. An annotation and modeling schema for prescription regimens. J Biomed Semantics 2019; 10(1): 10., 2019. 
[15] Anoop R Katti, Christian Reisswig, Cordula Guder, Sebastian Brarda, Steffen Bickel, Johannes Höhne, and Jean Baptiste Faddoul. Chargrid: Towards understanding 2d documents. In *Proceedings of the* 2018 Conference on Empirical Methods in Natural Language Processing, pages 4459–4469, Brussels, Belgium, 2018. [16] Ali Can Kocabiyikoglu, Jean-Marc Babouchkine, François Portet, and Raheel Qader. Neural medication extraction: A comparison of recent models in supervised and semi-supervised learning settings. In IEEE 9th International Conference on Healthcare Informatics (ICHI), pages 148–152, Aug 2021. [17] A. L. Koerich, R. Sabourin, and C. Y. Suen. Large vocabulary off-line handwriting recognition: A survey. Pattern Analysis & Applications, 2003. [18] Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. Graph convolution for multimodal information extraction from visually rich documents. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 32–39, Minneapolis, Minnesota, 2019. [19] Bodhisattwa Prasad Majumder, Navneet Potti, Sandeep Tata, James Bradley Wendt, Qi Zhao, and Marc Najork. Representation learning for information extraction from form-like documents. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 6495– 6504, 2020. [20] Eric Medvet, Alberto Bartoli, and Giorgio Davanzo. A probabilistic approach to printed document understanding. International Journal on Document Analysis and Recognition (IJDAR), 2011. [21] Rasmus Berg Palm, Dirk Hovy, Florian Laws, and Ole Winther. End-to-end information extraction without token-level supervision. In *Proceedings of the* Workshop on Speech-centric Natural Language Processing, pages 48–52, Copenhagen, Denmark, 2017. Association for Computational Linguistics. [22] Rasmus Berg Palm, Ole Winther, and Florian Laws. Assessment of amazon comprehend medical: Medication information extraction. *ArXiv preprint*, abs/1708.07403, 2017. [23] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing* Systems 32: Annual Conference on Neural Information Processing Systems, NeurIPS, pages 8024–8035, 2019. [24] Jon Patrick and Min Li. High accuracy information extraction of medication information from clinical notes: 2009 i2b2 medication extraction challenge. Journal of the American Medical Informatics Association, 17(5):524–527, 2010. [25] Laila Rasmy, Yang Xiang, Ziqian Xie, Cui Tao, and Degui Zhi. Med-BERT: pre-trained contextualized embeddings on large-scale structured electronic health records for disease prediction. *ArXiv preprint*, abs/2005.12833, 2020. [26] Burr Settles and Mark Craven. An analysis of active learning strategies for sequence labeling tasks. In *Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing*, pages 1070–1079, 2008. [27] Carson Tao, Michele Filannino, and Zlem Uzuner. Prescription extraction using CRFs and word embeddings. *J. of Biomedical Informatics*, 72(C):60–66, 2017. [28] Ozlem Uzuner, Imre Solti, and Eithon Cadag. 
Extracting medication information from clinical text. *Journal of the American Medical Informatics Association*, 17(5):514–518, 2010.

[29] Shaolei Wang, Yue Zhang, Wanxiang Che, and Ting Liu. Joint extraction of entities and relations based on a novel graph scheme. In *Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden*, pages 4461–4467, 2018.

[30] Xuan Wang, Vivian Hu, Xiangchen Song, Shweta Garg, Jinfeng Xiao, and Jiawei Han. ChemNER: Fine-grained chemistry named entity recognition with ontology-guided distant supervision. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 5227–5240, 2021.

[31] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 2579–2591, 2021.

[32] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. LayoutLM: Pre-training of text and layout for document image understanding. In *KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event*, pages 1192–1200, 2020.

[33] Yufeng Zhang, Xueli Yu, Zeyu Cui, Shu Wu, Zhongzhen Wen, and Liang Wang. Every document owns its structure: Inductive text classification via graph neural networks. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 334–339, 2020.

[34] Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, and Ting Liu. Document modeling with graph attention networks for multi-grained machine reading comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 6708–6718, 2020.

## A Additional Solution Details

## A.1 Prescription Schema

Refer to Table 5 for prescription schema.

Table 5: Schema for digitized prescription which is compliant with FHIR standard.

| Field | Type | Description |
|---|---|---|
| Medicine ID | Identifier | Unique code in catalog |
| Duration | Numeric | #Days to consume the medicine |
| Consumption Pattern | Enum | Consumption pattern of the doses, e.g., 1-0-0 |
| Repeat Frequency | Numeric | For medications consumed with gaps across days |
| As Needed Indicator | Boolean | Set to true if medicine is to be taken SOS |
| Dosage Size | Numeric | Size of the dose |
| Dosage Units | Enum | Units for quantifying dose (e.g., 1 ml, 1 tablet) |
| Additional Instruction | String | Guidelines on consuming the medicine |

## A.2 Annotation Tasks

Refer to Table 6 for details on annotation tasks.

Table 6: Annotation of VRD is done in three ways - (a) forming BB around relevant block such as medical advice, (b) identification of series of tokens which form one medication item, (c) extraction of attributes required for identifying medicinal items, e.g., BRAND-NAME, STRENGTH.

| Annotation Task | Labels |
|---|---|
| Block Identification | Medical advice, Other |
| Medication Item Chunking | B, I, O labels for medication item segments |
| Medication Attribute Extraction | B, I, O labels based on the entities: a) DURATION, b) FORM, c) FREQUENCY, d) INGREDIENT, e) ITEM-MARKER, f) BRAND-NAME, g) STRENGTH |
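To make the two tables above concrete, here is a minimal sketch of how one digitized medication line (Table 5) and the attribute label set used for B/I/O tagging (Table 6) could be represented in code. All class, enum, and field names here are illustrative assumptions for this document, not the system's actual data model or the FHIR resource definitions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class AttributeLabel(Enum):
    """Entity types used for B/I/O tagging in medication attribute extraction (Table 6)."""
    DURATION = "DURATION"
    FORM = "FORM"
    FREQUENCY = "FREQUENCY"
    INGREDIENT = "INGREDIENT"
    ITEM_MARKER = "ITEM-MARKER"
    BRAND_NAME = "BRAND-NAME"
    STRENGTH = "STRENGTH"


@dataclass
class MedicationOrder:
    """One digitized medication line, mirroring the Table 5 fields (FHIR-style)."""
    medicine_id: str                      # unique code in the medical catalog
    duration_days: Optional[int] = None   # number of days to consume the medicine
    consumption_pattern: str = "1-0-0"    # e.g., morning-noon-night dose pattern
    repeat_frequency_days: int = 1        # gap (in days) for medications not taken daily
    as_needed: bool = False               # True if the medicine is to be taken SOS
    dosage_size: float = 1.0              # size of a single dose
    dosage_units: str = "tablet"          # e.g., "ml", "tablet"
    additional_instruction: str = ""      # free-text guidelines on consuming the medicine


# Hypothetical example: "paracetamol 650 mg, 1-0-1 for 5 days, after food"
order = MedicationOrder(
    medicine_id="CAT-001234",             # assumed catalog identifier format
    duration_days=5,
    consumption_pattern="1-0-1",
    additional_instruction="after food",
)
```

A downstream consumer could map such a record onto the corresponding FHIR medication fields; that mapping is omitted here.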
## A.3 Advice Block Detection

Figure 5 shows the reduction in length of prescriptions with advice block detection: (a) token count density distribution for full prescriptions; (b) token count density distribution for advice blocks.

## A.4 Medication Attribute Extraction

## A.4.1 Training Setup And Details

For training the LayoutLMv2 model (our best performing model), PyTorch [23] is used and the pre-trained model is taken from the open-source Huggingface library. A batch size of 2 and a dropout of 0.1 are used for model training. The learning rate schedule and loss convergence curves are shown in Figure 6, and the model architecture for LayoutLMv2 is shown in Figure 7. The Adam optimizer is used with exponential decay rates for the first and second moment estimates of 0.9 and 0.99, respectively.

## A.4.2 Efficient Use Of Unlabeled Data

Training the model with an increasing number of randomly chosen prescriptions indicated that there is improvement in performance, but at a relatively slow rate. Since labeling effort is much more expensive than the acquisition of unlabeled prescriptions, we explored using common active learning methods [26] to prioritize the selection of prescriptions for labeling. Figure 8 shows the learning curves using increasing training data size with selection based on random sampling, entropy of class posteriors, and the product of entropy and normalized occurrence frequency in the unlabeled data. The results point to potential benefits of judicious prioritization, but more exploration is required to optimally combine the entropy and frequency signals.

## B Related Work

Our work is primarily related to four areas of research that we briefly review below.

Document AI is a multi-disciplinary area centered on understanding visually rich documents (VRDs) using techniques [22, 20] spanning computer vision, layout understanding, natural language understanding, and information retrieval. Document AI techniques that combine Optical Character Recognition (OCR) [4, 2, 1] with graph neural networks [33, 34, 18, 29, 19] have proven to be effective at extracting structured information from document images, especially for well-formatted printed documents with tables and headers such as invoices. However, these methods perform poorly on documents with uneven layout and handwritten content, such as medical prescriptions. Recent models such as LayoutLM [32, 31] that jointly learn the layout, visual, and text signals from a large corpus of document images improve performance on uneven layouts.

Handwritten text recognition (HTR) remains an open challenge despite advances in multi-dimensional RNNs and transformer models [9, 12, 17] due to the variability in author style and limited supervision. Incorporating domain-specific language models is, thus, critical for domain-specific HTR. We combine ideas from OCR, LayoutLM, and domain catalog-based matching to create a tailored solution for our application.
Information extraction techniques [29, 21, 13] that deal with conversion of unstructured text to structured form, especially forming blocks of interest comprising lists of multi-attribute records are directly relevant to our application. These methods primarily use coupled models for segmentation and attribute detection (i.e. entity recognition (ER)), based on conditional random fields in combination with semantic embeddings derived from seq2seq models such as BERT [10], Bi-LSTMs and require extensive labeled data. Since such supervision is limited in our scenario, we decouple segmentation and attribute extraction tasks, using simpler approaches for the former and exploring the SOTA ER techniques while incorporating ideas on exploiting ontologies [30]. Prescription Digitization has seen rising interest in recent years with standardization of health data resources [3, 14]. Most techniques [28, 24, 11, 16], however, fixate on the ER aspects assuming the input is an unstructured text sequence and present results on benchmark datasets [27, 13, 5] of printed clinical documents from Western marketplaces. These models are inadequate for unstructured prescriptions since these do not account for the extraction errors, layout signals, and the gaps in the vocabulary. Therefore, we focus on developing a holistic approach with raw noisy prescriptions as input.
zhang-etal-2023-human
Human-in-the-loop Schema Induction
https://aclanthology.org/2023.acl-demo.1
Schema induction builds a graph representation explaining how events unfold in a scenario. Existing approaches have been based on information retrieval (IR) and information extraction (IE), often with limited human curation. We demonstrate a human-in-the-loop schema induction system powered by GPT-3. We first describe the different modules of our system, including prompting to generate schematic elements, manual edit of those elements, and conversion of those into a schema graph. By qualitatively comparing our system to previous ones, we show that our system not only transfers to new domains more easily than previous approaches, but also reduces efforts of human curation thanks to our interactive interface.
# Human-In-The-Loop Schema Induction

Tianyi Zhang∗1, Isaac Tham∗1, Zhaoyi Hou∗1, Jiaxuan Ren1, Liyang Zhou1, Hainiu Xu1, Li Zhang1, Lara J. Martin1, Rotem Dror1, Sha Li2, Heng Ji2, Martha Palmer3, Susan Brown3, Reece Suchocki3, and Chris Callison-Burch1

1 University of Pennsylvania, 2 University of Illinois Urbana-Champaign, 3 University of Colorado, Boulder; ∗equal contribution

{zty, joeyhou, ccb}@upenn.edu

## Abstract

Schema induction builds a graph representation explaining how events unfold in a scenario. Existing approaches have been based on information retrieval (IR) and information extraction (IE), often with limited human curation. We demonstrate a human-in-the-loop schema induction system powered by GPT-3.1 We first describe the different modules of our system, including prompting to generate schematic elements, manual edit of those elements, and conversion of those into a schema graph. By qualitatively comparing our system to previous ones, we show that our system not only transfers to new domains more easily than previous approaches but also reduces efforts of human curation thanks to our interactive interface.

## 1 Introduction

Event-centric natural language understanding (NLU) has been increasingly popular in recent years. Systems built from an event-centric perspective have resulted in impressive improvements in numerous tasks, including open-domain question answering (Yang et al., 2003), intent prediction (Rashkin et al., 2018), timeline construction (Do et al., 2012), text summarization (Daumé and Marcu, 2006), and misinformation detection (Fung et al., 2021). At the heart of event-centric NLU lie event schemas, an abstract representation of how complex events typically unfold. The study of such a representation dates back to the 70s, when scripts were proposed as a series of sequential actions (Roger C. Schank, 1977). Back then, schemas were limited to linear and temporal ones. A more recent formulation of event schemas is a graph where the vertices are event flows and the edges are temporal or hierarchical relations between those events (Du et al., 2022).

1 Webpage: https://www.kairos.jiaxuan.me; Video: https://www.youtube.com/watch?v=myru-fozVWI

Figure 1: An example of a Cyber Attack schema. The tree structure represents the temporal and hierarchical relations between the nodes.

For example, as shown in Figure 1, the event schema for a "cyber attack" could include sub-events such as "gain access", "control system", "exfiltrate files", "modify system logs", etc. The schema would also include the relationships between these sub-events. For instance, the event "gain access" would take place *before* the event "modify system logs" since a person needs access to a system before modifying it. For the same reason, "exfiltrate data" would only take place *after* "gain access". Event schemas like this encode high-level knowledge about the world and allow artificial intelligence systems to reason about unseen events (Du et al., 2022).

The DARPA Knowledge-directed Artificial Intelligence Reasoning Over Schemas (KAIROS) program2 aims at developing schema-based AI systems that can identify, comprehend, and forecast complex events in a diverse set of domains. To enable such a system, scalable generation of high-quality event schemas is very crucial.

2 https://www.darpa.mil/program/knowledge-directed-artificial-intelligence-reasoning-over-schemas
On one hand, fully-manual schema creation at a large scale can be inefficient, since people have diverse views about a certain concept, leading to inconsistent schema results. On the other hand, fully automated systems are scalable, but not of high quality. In fact, the majority of existing approaches under the KAIROS program are fully-automated IR and IE systems over large collections of news articles (Li et al., 2020, 2021). Only limited human post-processing of schemas (Ciosici et al., 2021) has been explored. Further discussion of the advantages and limitations of existing systems can be found in Related Work.

Instead of focusing on fully-automated schema induction systems, we propose a human-in-the-loop schema induction pipeline system. Rather than using IR and IE over a large document collection, our system relies on pre-trained large language models (LLMs) and human intervention to jointly produce schemas. Our main motivation is that human-verified schemas are of higher quality. That is because human curation can filter out failure cases such as incompleteness, instability, or poor domain transfer results in previous systems (Dror et al., 2022; Peng et al., 2019). With human curation, schemas are more reliable and accountable when applied to downstream tasks such as event prediction. This is significant if the downstream tasks involve safety-critical applications like epidemic prevention, where the quality of the schema matters beyond task performance numbers.

Figure 2 is a flowchart of our four-stage schema induction system: **step generation**, **node extraction**, **graph construction**, and **node grounding**. Each stage has two main components: the LLM (e.g. GPT-3) at the back-end to output predictions (the purple boxes in the figure) and an interactive interface at the front-end for human curation of the model output (the yellow boxes). The GPT-3 prompts that are used in each stage of the process are given in Appendix A, along with example inputs and outputs. A more comprehensive description of the implementation and functionalities of our interface can be found in Section 4. A case study is given in Section 5. It walks through each step in our pipeline system under an example scenario, cyber attack. Also, in Section 5, we provide a qualitative evaluation of five example scenarios. The summary and discussion of our system are included in Section 6.

## 2 Related Work

## 2.1 Schema Induction

Early work from Chambers and Jurafsky (2008, 2009) automatically learned a schema from newswire text based on coreference and statistical probability models. Later, Peng and Roth (2016); Peng et al. (2019) generated an event schema based on their proposed semantic language model (like an RNN structure). Their work represented the whole schema as a linear sequence of abstract verb senses like arrest.01 from VerbNet (Schuler, 2005). Those works had two main shortcomings: first, the schema was created for a single actor (protagonist), e.g. a suspect, which caused limited coverage in a more complex scenario, e.g. business change-acquisition; second, the generated schema, a simple linear sequence, failed to consider different alternatives such as XOR. More recently, Li et al. (2020, 2021) used transformers to handle schema generation in a complex scenario. They viewed a schema as a graph instead of a linear sequence. However, this approach was unable to transfer to new domains where the supervised event retrieval and extraction model failed. Dror et al.
(2022) took GPT-3 generated documents to build a schema. Although it bypassed the event retrieval and extraction process and solved the domain transfer problem, it suffered from the incompleteness and instability of GPT-3 outputs. Currently, neither do they offer a perfect solution for schema induction without manual postprocessing, nor build a timely human correction system (Du et al., 2022). Our demonstration system develops a curation interface that can generate a comprehensive schema easily with a human curator in the loop. The curated data collected through our tool could be useful for fine-tuning and improving the models. ## 2.2 Human-In-The-Loop Schema Curation Interface Another area related to our work is human-in-theloop schema generation, where annotators collaborate with computational models to create highquality event schema. In this field, one of the closest approachs is the Machine-Assisted Script Curation (Ciosici et al., 2021) created for script induction. With a fully interactive interface, they have shown the feasibility of realtime interaction between humans and pre-trained LLMs (e.g. GPT-2 or GPT-3). The main differences are the level of automation and adaptability to other generative models. In terms of automation, our interface makes use of pre-trained LLMs to automatically generate schema content, compared to their interface which largely counts on human input. For adaptability, our interface supports the curation of the schema generated by different language models (e.g. GPT-3 models with different sizes), which makes it possible for users to evaluate the generations of different models. In contrast, there is no such possibility in their interface. Another interface built for schema curation focuses on visualization of the schema structure, such as the temporal relations between event nodes and internal relations among entities (Mishra et al., 2021). While this interface provides a user-friendly experience when it comes to schema graph curation, it requires the user to come up with the content of event schemas in json format, which requires much more human effort compared to our interface. In addition, our interface also provides an optional grounding function after the event graph curation step, which is not presented in this interface. ## 3 Terminology And Problem Definition Our work focuses on efficiently building a schema graph of a scenario using both LLMs and human input. Following the workflow of our system (see the workflow in Figure 2), a **scenario** is a general event type that an interested party will build the schema for, e.g. a 'disease outbreak'. **Steps** are a list of sub-events generated by GPT-3 according to a prompt in the step generation stage. Each step can be a phrase or a short sentence, such as 'spread to other areas', etc. Nodes or **tuples** are subjectverb-object pairs extracted from steps at the node extraction stage, such as '(disease, spread, to other area)'. **Graphs** are a visualization of the schema, whose edges joining the nodes represente temporal and hierarchical relations. ## 4 Implementation Our pipeline system contains four sequential stages: step generation, node extraction, **graph construction**, and **node grounding**. A flowchart of the interface system is shown in Figure 2. The step generation stage generates steps for a scenario and the user can specify how many steps they would like to generate. The **node extraction** stage extracts nodes (subject-verb-object tuples) from the previous verbose steps. 
The **graph construction** stage orders the extracted nodes temporally and hierarchically. Meanwhile, modifications of the nodes are still possible. The **node grounding** stage maps node text to a node in the XPO ontology (Elizabeth Spaulding et al., In preparation) (derived from WikiData3). The flexible interface system allows users to either go through the entire process to create a schema from scratch or directly start at any stage to edit the model's prediction. In addition, the back-end GPT-3 models can be replaced by other user pre-trained models if deployed locally.

3 https://www.wikidata.org/wiki/Wikidata:Main_Page

## 4.1 Step Generation

The step generation stage aims at generating steps given a scenario. At the back-end, zero-shot GPT-3 incorporates a user's input into a prompt and generates ordered steps. The interface allows users to generate steps quickly with prompt templates4 or fine-tune the generated steps with user-designed prompts. A typical use case of the user-designed prompts is to expand a certain step into more detailed steps. For instance, a template prompt "List the steps involved in {disease outbreak}:" may create steps such as "1. Identify the symptoms of the disease; 2. Collect data from affected individuals; ...". Then, the user can re-prompt for, e.g., the second step, "List the steps involved {step2} in detail:". Additionally, users can modify and select GPT-3 generated steps easily by clicking on them. When the 'save' button is clicked, all user-selected steps will be saved in the database for use in the node extraction stage or for further fine-tuning of the step generation model. A screenshot of the step generation interface with the user's operations can be seen in Figure 3.

4 An {event type} appended to a predefined prompt: Before, After, or Sub-steps.

## 4.2 Node Extraction

Nodes are structured representations of events in the form of a {subject, verb, object} tuple. Node extraction extracts these nodes from the GPT-3 generated steps saved in the step generation stage, which are unstructured sentences. There are two methods, based on AllenNLP (Shi and Lin, 2019) or GPT-3, that users can choose from to extract nodes. The former uses AllenNLP's Semantic Role Labelling (SRL) model to extract nodes from the steps. The SRL model implements a BERT (Devlin et al., 2018) sequence prediction model to identify the predicates and the arguments (e.g. A0, A1) in a text. We simply choose the identified A0 as subject, A1 as object, and the predicate as the verb to form a node. An optional coreference resolution model can be used to resolve referenced entities between the different steps with AllenNLP's SpanBERT-based model (Lee et al., 2018). Here, we concatenate all the steps and replace a pronoun with its referenced entity (noun) in the original steps.

The GPT-3 node extraction method uses instructional few-shot prompting to extract {subject, verb, object} tuples from the steps. Several example sentences are given to show GPT-3 the expected syntactic and semantic output. We follow the recommendation of Liu et al. (2022) for few-shot design by including context examples that are semantically similar to the KAIROS application environment (daily life and news). See Appendix A for our few-shot prompts. The extracted nodes are shown to the user in a table with 3 columns (subject, verb, object). For example, for "The CDC collects and analyzes data on disease outbreaks", one of the extracted nodes is "The CDC (subject) collects (verb) data (object)".
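As a rough illustration of this few-shot extraction step, the sketch below builds a prompt from examples like those in Appendix A and parses the `[verb: _, subject: _, object: _]` completions. The model name, the use of the legacy OpenAI completion API, and the parsing regex are assumptions for demonstration rather than the system's exact implementation.

```python
import re
import openai  # assumes the (legacy) OpenAI completion API used for GPT-3

FEW_SHOT = """For each sentence, extract event verbs and their arguments, categorizing
the arguments as subject or object. Write None if there is no object.
Return in [verb: _, subject: _, object: _] format.
For example:
Q: Isaac ate a cake today and he played football.
A: [verb: eat, subject: Isaac, object: cake], [verb: play, subject: Isaac, object: football]
Q: Justin slept.
A: [verb: sleep, subject: Justin, object: None]
"""

TUPLE_RE = re.compile(
    r"\[verb:\s*(?P<verb>[^,]+),\s*subject:\s*(?P<subject>[^,]+),\s*object:\s*(?P<object>[^\]]+)\]"
)


def extract_nodes(step: str, model: str = "text-davinci-003") -> list[dict]:
    """Prompt GPT-3 with the few-shot examples and parse the returned SVO tuples."""
    prompt = f"{FEW_SHOT}Q: {step}\nA:"
    response = openai.Completion.create(
        model=model, prompt=prompt, max_tokens=128, temperature=0.0
    )
    completion = response["choices"][0]["text"]
    return [m.groupdict() for m in TUPLE_RE.finditer(completion)]

# e.g. extract_nodes("The CDC collects and analyzes data on disease outbreaks.")
# might return [{'verb': 'collect', 'subject': 'The CDC', 'object': 'data'}, ...]
```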
Users are able to choose and edit nodes (tuples). User edits are saved and will be used for graph construction and fine-tuning of the GPT-3 node extraction model.

## 4.3 Graph Construction

In the graph construction stage, our system automatically adds temporal and hierarchical edges to the previously extracted nodes. The edges are created using zero-shot GPT-3 with multiple-choice questions. For each pair of nodes, GPT-3 is instructed to choose between 'Before', 'After', 'Same time' or 'no relation' for temporal edges, and 'Parent', 'Child' or 'no relation' for hierarchical edges. For example, for the node pair "collect data" and "identify the signs and symptoms", GPT-3 predicts 'After' for temporal order and 'no relation' for hierarchical order, in which case we will add a temporal edge from "identify the signs and symptoms" to "collect data", and no hierarchical edge will be created. If a conflict occurs between the (node1, node2) pair and the (node2, node1) pair, e.g. 'After' and 'After' for temporal order or 'Parent' and 'Parent' for hierarchical order, we treat it as no relation to resolve the conflict, thus adding no new edges to the graph.

The graph construction interface allows users to modify the GPT-3 generated schema with ease. After predicting both temporal and hierarchical relations between all pairs of nodes, the interface will display the graph via the Vis-network framework5. It supports adding, editing, and deleting graph nodes and edges. When the user clicks on a node, detailed information including the ID and description of the node will be shown, along with buttons to delete or edit the node. By clicking an edge, users can modify the edge type or delete it. Users are able to create a new node by double-clicking and a new edge by dragging and dropping an arrow between two nodes. A screenshot of our graph construction interface can be seen in Figure 4.

## 4.4 Node Grounding

Although a schema (graph) is completely created after the previous stages, some nodes may express the same semantic information, e.g., "refugees flee" and "refugees ran away". To ensure the reliability and comparability of created schemas, our system grounds the nodes to an ontology, namely the XPO ontology, in the last stage. Each node in the XPO ontology contains a unique node ID, a node name, a concise description (definition), and a list of similar nodes. Our system offers two ways of grounding, "name inference grounding" and "name similarity grounding". Name inference grounding maps the schema nodes to XPO nodes by predicting the XPO node's name; name similarity grounding finds the XPO nodes by comparing the similarities between the embeddings of a schema node and an XPO node's name.

In name inference grounding, given a graph node, our system first uses few-shot GPT-3 to deduce a list of possible XPO names (see the few-shot prompt example in Appendix A). Then, the candidate XPO names are post-processed by dropping wrong predictions and adding XPO names similar to the true predictions. After that, each possible XPO name is checked for entailment with the original graph node. The entailment model is a BART-large model fine-tuned on the MNLI dataset (Lewis et al., 2020; Williams et al., 2018). The input is the original graph node as the premise and the possible XPO name as the hypothesis, and the output is the entailment score. We sort the possible XPO node names by their entailment scores.
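To illustrate this entailment-based ranking, the sketch below scores candidate XPO names against a graph node with an off-the-shelf BART-large MNLI checkpoint via Hugging Face's zero-shot classification pipeline. The specific checkpoint, hypothesis template, and candidate list are assumptions for demonstration and not necessarily what the system uses.

```python
from transformers import pipeline

# An NLI model in the spirit of the paper's BART-large fine-tuned on MNLI.
nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")


def rank_xpo_candidates(graph_node: str, candidate_names: list[str], top_k: int = 3):
    """Treat the graph node as the premise and each candidate XPO name as a
    hypothesis, then return the candidates sorted by entailment score."""
    result = nli(graph_node, candidate_names,
                 hypothesis_template="This event is about {}.")
    ranked = sorted(zip(result["labels"], result["scores"]),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# e.g. rank_xpo_candidates("The attacker gathers information about the target",
#                          ["reconnaissance", "surveillance", "negotiation"])
```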
Users can view and choose from the top-k suggested XPO nodes for the grounding of the original graph node. In name similarity grounding, the top-k related XPO nodes are retrieved by computing the cosine-similarity of the GloVe embedding between the graph node and the name of XPO nodes (Pennington et al., 2014). The above two methods are complementary to each other especially when users cannot find expected XPO nodes with one method. Human-curated data is saved in the backend database. A screenshot of node grounding can be seen in Figure 5. ## 5 Evaluation 5.1 A Case Study In this section, we walk through the whole process of creating a toy schema with our interface which is much simpler than a fully developed schema. We assume the scenario is 'cyber attack'. In the step generation stage, users can form a prompt from templates such as "list the steps involved in a cyber attack" with 'cyber attack' as the name and sub-event as the prompt type. Then, GPT3 will generate 5 steps. For example, "1. A cyber attacker gains initial access to a system" and "5. The attacker exfiltrates data from the compromised system." Users can modify the content and choose steps to save. For example, one may change the first step to "1. A cyber attacker access a system." and save the step. See a screenshot of five steps for reference in figure 3. ![4_image_0.png](4_image_0.png) Next, in the node extraction stage, GPT-3 will be prompted to extract nodes from the selected steps. For example, GPT-3 will output {cyber attacker, access, system} for the first step. The user can change the outputs to correct any mistakes. In this sample, we extract 4 nodes, they are: {cyber attacker, access, system}, {attacker, enumerate, system information and user account}, {attacker, escalates, privileges}, {attacker, exfiltrate, data}. And we concatenate the {subject, verb, object} into a piece of text as a node for the next stage. Thereafter, in the graph construction stage, we prompt GPT-3 to automatically build linear temporal edges on the above four nodes that users can modify. We manually add a scenario node 'cyber attack' and link with the other four existing nodes through hierarchical edges. see a screenshot of the graph in figure 4. Finally, we can optionally ground our graph node into the XPO ontology. For example, the node "cyber attacker access system" can be mapped to choices of 'access', 'computer monitoring', 'remote communicating' using name similarity grounding. In this case, we don't get any results ![5_image_0.png](5_image_0.png) from name inference grounding. See a screenshot ![5_image_1.png](5_image_1.png) of grounding in Figure 5. ## 5.2 User Evaluation We followed the evaluation methodology used by Ciosici et al. (2021) with slight modifications to assess our system. Evaluation is done by researchers in the field of NLP who have experience in handwriting event schemas but have not used the interface before. In the step generation and node extraction stage, we count the number of human selected steps/nodes out of the total number of machine generated results as accuracy. For simplicity, we ignore users' modifications (e.g. rephrasing) at this point. 
In the graph construction stage, we compare how many nodes and edges are modified (added or deleted) using graph edit distance. In the grounding part, the success rate is measured as the successful retrieval of at least one relevant XPO node within the top-3 grounding results for a given event node. We also ask users to self-report their total time of interaction. For all the evaluations, we use the GPT-3 Davinci model as the language model.6

We follow prior work and evaluate our system on five scenarios: Evacuation (EVC), Ordering Food in a Restaurant (FOD), Finding and Starting a New Job (JOB), Obtaining Medical Treatment (MED), Corporate Merger or Acquisition (MRG).7

Table 1: User evaluation results on the five scenarios.

| | EVC | FOD | JOB | MED | MRG |
|---|---|---|---|---|---|
| Step Acc | 11/12 | 7/8 | 10/10 | 10/10 | 12/12 |
| Node Acc | 13/15 | 10/10 | 11/12 | 12/12 | 12/14 |
| Graph Node ED | 1 | 0 | 0 | 0 | 0 |
| Graph Edge ED | 8 | 0 | 7 | 3 | 16 |
| Grounding Success Rate | 5/12 | 3/10 | 3/11 | 6/12 | 9/12 |
| Self-reported time (min) | 15 | 10 | 11 | 10 | 14 |

As shown in Table 1, our interactive system shows high accuracy in the step and node generation phases, thanks to the richness of world knowledge from LLMs. However, graph construction and node grounding require more human curation, due to the difficulty of event reasoning, such as understanding temporal and hierarchical relationships, and of retrieval from a large database. In those cases, we showed that human curation can step in promptly and improve the quality of the event schema when LLM-based models make mistakes.7 In addition, our interface is easy to use, with a much shorter time required to complete each event schema task compared to previous work (Ciosici et al., 2021).

We also report a qualitative study introducing the types of human modifications on the automated generations. At the step generation stage, GPT-3 is unlikely to make commonsense or grammar errors. However, when it is required to generate more steps, it may be susceptible to redundancy, such as *"A does B"* and *"A finishes doing B"*, and the human removes these steps. Then, for node extraction, results can be simplistic and ambiguous when the original sentence contains rich information, such as location, condition, or other modifiers. For example, given the step *"waitress bring order to the kitchen"*, automatic node extraction produces "(waitress, bring, order)", while the human needs to add back some necessary components, e.g., the location information *"kitchen"* or the constraint "food order". Last, for graph construction, the current graph is often linear, based on the previous nodes' order; human effort plays an essential role in elaborating on the specific relations, including AND and OR. For example, *"person updates the resume"* and "person tailors the cover letter" are independent and can be concurrent, not sequential.

6 https://platform.openai.com/docs/models/gpt-3

7 Detailed evaluation results: https://joeyhou.notion.site/Human-in-the-Loop-Schema-Induction-Interface-Logs-1eb52403b05542919ccea214656f4211

## 6 Conclusion

Acknowledging that fully depending on human annotation is expensive and inefficient, while wholly automated generation can be unreliable, we propose a human-in-the-loop schema curation interface with pre-trained large language models (LLMs) as the backbone. We use LLMs to generate candidate components of a schema and involve humans as the final judge of both the content and structure of the event schema.
With empirical evaluations, we show that our system can efficiently produce human-validated event schemas with minori human efforts. ## Limitations We have several limitations in our current approach. First, our current system uses zero-shot or few-shot to prompt GPT-3 without any fine tuning. In future work, we plan to fine-tune our GPT-3 with human curated data that we collect. We expect that finetuning will improve our models' performance. It may also be possible to use human curated data to train a policy network recommended by OpenAI (Ouyang et al., 2022). Second, we can replace GPT-3 with more robust task specific models at some stages, e.g., the pre-trained model for predicting temporal and hierarchical orders. Third, some users suggested incorporating a graph view at the other three stages, which will help users to generate based on the current graph. We will include this graph view in our next version. Forth, our current evaluation is experimental and probably subjective, we will develop more robust evaluation metrics comparing manual, Ciosici et al. (2021)'s and our schema and test on downstream tasks in the next step. ## Ethics Statement To our knowledge, our back-end GPT-3 model was trained mainly on English web data, it may prefer events happen in an English environment. Furthermore, our test showed that it generated events specifically fit in American setting, for example, Miranda Rights for arrest, Democrats and Republicans in United States for election. These facts suggest GPT-3 may ignore the knowledge of nonAmerican cultures or minority groups. In addition, currently, we only create schemas for scenarios that are reported in mainstream news media, e.g. conflict, communication. It excludes the schemas from other domains, such as biology, medicine. ## 7 Acknowledgements This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), the IARPA BETTER Program (contract 2019-19051600004 and 2019-19051600006), the IARPA HIATUS Program (contract 2022-22072200005), and the NSF (Award 1928631) and National Science Foundation under Grant \#2030859 to the Computing Research Association for the CIFellows Project. Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, DARPA, IARPA, NSF, or the U.S. Government. We thank researchers in PennNLP groups, and from other universities who gave us suggestions on the paper. ## References Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In *Proceedings of ACL-08: HLT*, pages 789–797. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 602–610. 7 Manuel Ciosici, Joseph Cummings, Mitchell DeHaven, Alex Hedges, Yash Kankanampati, Dong-Ho Lee, Ralph Weischedel, and Marjorie Freedman. 2021. Machine-assisted script curation. In *Proceedings of* the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations, pages 8–17, Online. Association for Computational Linguistics. Hal Daumé and Daniel Marcu. 2006. 
Bayesian queryfocused summarization. In *Proceedings of the 21st* International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, page 305–312, USA. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*. Quang Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 677–687, Jeju Island, Korea. Association for Computational Linguistics. Rotem Dror, Haoyu Wang, and Dan Roth. 2022. Zeroshot on-the-fly event schema induction. *arXiv* preprint arXiv:2210.06254. Xinya Du, Zixuan Zhang, Sha Li, Pengfei Yu, Hongwei Wang, Tuan Lai, Xudong Lin, Ziqi Wang, Iris Liu, Ben Zhou, Haoyang Wen, Manling Li, Darryl Hannan, Jie Lei, Hyounghun Kim, Rotem Dror, Haoyu Wang, Michael Regan, Qi Zeng, Qing Lyu, Charles Yu, Carl Edwards, Xiaomeng Jin, Yizhu Jiao, Ghazaleh Kazeminejad, Zhenhailong Wang, Chris CallisonBurch, Mohit Bansal, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, Martha Palmer, and Heng Ji. 2022. RESIN-11: Schema-guided event prediction for 11 newsworthy scenarios. In *Proceedings of* the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations, pages 54–63, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics. Anatole Gershman Elizabeth Spaulding, Susan Brown Rosario Uceda-Sosa, Peter Anick James Pustejovsky, and Martha Palmer. In preparation. The darpa wikidata overlay: Wikidata as an ontology for natural language processing. Yi Fung, Christopher Thomas, Revanth Gangi Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen McKeown, Mohit Bansal, and Avi Sil. 2021. InfoSurgeon: Cross-media fine-grained information consistency checking for fake news detection. In *Proceedings of the 59th Annual Meeting of the Association for* Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1683–1698, Online. Association for Computational Linguistics. Kenton Lee, Luheng He, and L. Zettlemoyer. 2018. Higher-order coreference resolution with coarse-tofine inference. In *NAACL-HLT*. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880. Manling Li, Sha Li, Zhenhailong Wang, Lifu Huang, Kyunghyun Cho, Heng Ji, Jiawei Han, and Clare Voss. 2021. The future is not one-dimensional: Complex event schema induction by graph modeling for event prediction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5203–5215. Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 684–695. 
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics. Piyush Mishra, Akanksha Malhotra, Susan Windisch Brown, Martha Palmer, and Ghazaleh Kazeminejad. 2021. A graphical interface for curating schemas. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 159–166. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. *arXiv preprint* arXiv:2203.02155. Haoruo Peng, Qiang Ning, and Dan Roth. 2019. Knowsemlm: A knowledge infused semantic language model. In *Proceedings of the 23rd Conference on Computational Natural Language Learning* (CoNLL), pages 550–562. Haoruo Peng and Dan Roth. 2016. Two discourse driven language models for semantics. arXiv preprint arXiv:1606.05679. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In *Proceedings of the 2014 conference* 8 on empirical methods in natural language processing (EMNLP), pages 1532–1543. Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2Mind: Commonsense inference on events, intents, and reactions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463–473, Melbourne, Australia. Association for Computational Linguistics. Robert P. Abelson Roger C. Schank. 1977. Scripts, Plans, Goals, and Understanding : an Inquiry into Human Knowledge Structures. Karin Kipper Schuler. 2005. *VerbNet: A broadcoverage, comprehensive verb lexicon*. University of Pennsylvania. Peng Shi and Jimmy Lin. 2019. Simple bert models for relation extraction and semantic role labeling. *arXiv* preprint arXiv:1904.05255. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Hui Yang, Tat-Seng Chua, Shuguang Wang, and ChunKeat Koh. 2003. Structured use of external knowledge for event-based open domain question answering. In *Proceedings of the 26th Annual International* ACM SIGIR Conference on Research and Development in Informaion Retrieval, SIGIR '03, page 33–40, New York, NY, USA. Association for Computing Machinery. ## A Appendix: Prompt And Completion A.1 Appendix A: Prompt And Example Completions For Step Generation Prompt: List the events before an attack: 1. ## Completion: The attacker gathers information about the target. 2. The attacker plans the attack. 3. The attacker gains access to the target system. 4. The attacker executes the attack. 5. The attacker covers their tracks. Prompt: List the events after an attack: 1. ## Completion: The attacker's identity is confirmed. 2. The target is notified of the attack. 3. The attacker is placed on a watch list. 4. The attacker's device is seized. 5. 
The attacker is arrested. Prompt: List the sub-events involved in an attack: 1. Completion: Identify the target. 2. Plan the attack. 3. Choose the weapons. 4. Assemble the team. 5. Launch the attack. 6. Evaluate the results. ## A.2 Appendix B: Prompt And Example Completions For Node Extraction Prompt: /* instruction */ For each sentence, extract event verbs and their arguments, categorizing the arguments as subject or object. Write None if there is no object. Return in [verb: _, subject: _, object: _] format. /* few-shot examples */ For example: Q: Isaac ate a cake today and he played football. A: [verb: eat, subject: Isaac, object: cake], [verb: play, subject: Isaac, object: football] Q: The teacher arrived in class and he started teaching. A: [verb: arrive, subject: teacher, object: class], [verb: start, subject: teacher, object: teaching] Q: Nate and Isaac ate dinner. A: [verb: eat, subject: Nate and Isaac, object: dinner] Q: Justin slept. A: [verb: sleep, subject: Justin, object: None] /* target example */ Q: The attacker gathers information about the target. A: Completion: [verb: gather, subject: attacker, object: information] /* target example */ Q: The attacker's identity is confirmed. A: Completion: [verb: confirm, subject: attacker's identity, object: | None] | |---------| A.3 Appendix C: Prompt and example completions for Node Grounding | /* target example */ Q: The attacker is placed on a watch list. A: Completion: | |----------------------------------------------------------------------------------| ## Prompt: /* few-shot examples */ List event names related to the event "People are infected with this disease": 1.infection 2.epidemic 3.pandemic List event names related to the event "It was a robbery-related incident": 1.robbery 2.burglary 3.theft List event names related to the event "The first case of the disease have detected and it has been reported": 1.infection 2.epidemic 3.pandemic List event names related to the event "The disease is eventually brought under control": 1.control 2.improvement List event names related to the event "People who are ill have serious symptoms": 1.symptoms List event names related to the event "The pathogen begins to spread through the population": 1.transmission 2.spread /* target example */ List event names related to the event "The attacker gathers information about the target": Completion: 1.reconnaissance 2.surveillance /* target example */ List event names related to the event "The attacker's identity is confirmed": Completion: 1.identification 2.confirmation /* target example */ List event names related to the event "The attacker is placed on a watch list": Completion: 1.surveillance 2.monitoring 3.investigation | 3.investigation | |-------------------|
shi-etal-2023-perslearn
{P}ers{LEARN}: Research Training through the Lens of Perspective Cultivation
https://aclanthology.org/2023.acl-demo.2
Scientific research is inherently shaped by its authors' perspectives, influenced by various factors such as their personality, community, or society. Junior researchers often face challenges in identifying the perspectives reflected in the existing literature and struggle to develop their own viewpoints. In response to this issue, we introduce PersLEARN, a tool designed to facilitate the cultivation of scientific perspectives, starting from a basic seed idea and progressing to a well-articulated framework. By interacting with a prompt-based model, researchers can develop their perspectives explicitly. Our human study reveals that scientific perspectives developed by students using PersLEARN exhibit a superior level of logical coherence and depth compared to those that did not. Furthermore, our pipeline outperforms baseline approaches across multiple domains of literature from various perspectives. These results suggest that PersLEARN could help foster a greater appreciation of diversity in scientific perspectives as an essential component of research training.
## Perslearn **: Research Training Through The Lens Of Perspective Cultivation** Yu-Zhe Shi1 ‹ , Shiqian Li1,2 ‹, Xinyi Niu1,3 ‹, Qiao Xu1,3 ‹, Jiawen Liu1,4 ‹**, Yifan Xu**1,5 ‹, Shiyu Gu1, Bingru He1,6, Xinyang Li1,6, Xinyu Zhao1,6, Zijian Zhao1,6, Yidong Lyu1,7**, Zhen Li**1,8, Sijia Liu1,9, Lin Qiu1,10, Jinhao Ji1, Lecheng Ruan11 , Yuxi Ma11, Wenjuan Han12 **, Yixin Zhu**2 1PersLab Research 2Peking University 3Huazhong University of Science and Technology 4Tongji University 5Kyushu University 6Tsinghua University 7Chinese University of Hong Kong 8University of Chinese Academy of Sciences 9University of the Arts London 10University of Hong Kong 11National Key Laboratory of General Artificial Intelligence, BIGAI 12Beijing Jiaotong University ‹Equal contributors syz@perslab.co, ruanlecheng@bigai.ai, wjhan@bjtu.edu.cn, yixin.zhu@pku.edu.cn ## Abstract Scientific research is inherently shaped by its authors' perspectives, influenced by various factors such as their personality, community, or society. Junior researchers often face challenges in identifying the perspectives reflected in the existing literature and struggle to develop their own viewpoints. In response to this issue, we introduce PersLEARN , a tool designed to facilitate the cultivation of scientific perspectives, starting from a basic seed idea and progressing to a well-articulated framework. By interacting with a prompt-based model, researchers can develop their perspectives **explicitly**. Our human study reveals that scientific perspectives developed by students using PersLEARN exhibit a superior level of logical coherence and depth compared to those that did not. Furthermore, our pipeline outperforms baseline approaches across multiple domains of literature from various perspectives. These results suggest that PersLEARN could help foster a greater appreciation of diversity in scientific perspectives as an essential component of research training. 1 ## 1 Introduction The pursuit of science is driven by a desire to gain a deeper understanding of the natural world, not only through the collection of objective facts but also through interpreting those facts (Kuhn, 1970; Longino, 1990). As a result, scientific knowledge is shaped by a complex interplay of various factors that extend beyond the objective world. These factors include the personal characteristics of individual scientists (Heisenberg, 1958; Bybee, 2006), shared mindsets within scientific communities (Cetina, 1999), and broader societal contexts such as cultural and political influences (Latour and Woolgar, 1986; Latour, 1987; Lynch, 1993; Latour et al., 1999). Together, these factors contribute to 1Website: https://perslearn.com/. Video: https://vimeo.com/802213150. ![0_image_0.png](0_image_0.png) Figure 1: Composed summaries vs**. framed perspectives.** Composed summaries are subject to the authors' perspectives, whereas the perspective frames are directed by new ideas. forming perspectives regarding how best to interpret the natural world. Perspectives are essential to effectively process and communicate scientific knowledge with limited cognitive resources (Lewis et al., 2014; Griffiths et al., 2015; Gershman et al., 2015; Lieder and Griffiths, 2020). However, junior researchers often face difficulties in developing their own scientific perspectives. They may struggle to identify the perspectives reflected in the existing literature and consequently struggle to develop and articulate their own viewpoints. 
This presents a significant obstacle to the progress of research training and deprives junior researchers of the opportunity to embrace the broader range of diverse perspectives that could contribute to their understanding of a particular topic (Duschl and Grandy, 2008). The challenge of developing scientific perspectives is particularly evident in one of the most significant research training approaches—writing literature reviews. In our pilot study, we asked students studying at the intersection of Artificial Intelligence (AI) and Cognitive Reasoning (CoRe) to write a review article from the perspective of "physics-based reasoning in Computer Vision (CV)" using a set of papers published on CV conferences. The assigned task aims to provide students with a multifaceted ![1_image_0.png](1_image_0.png) PersLEARN is grounded in classical theories drawn from the fields of cognitive and social sciences, particularly in the domain of scientific knowledge representation (Sec. 2.1). It provides an entire life-cycle of constructing a perspective frame that semi-automates researchers to start from a single seed idea and then iteratively interpret and structure relevant literature (Sec. 2.2). This process is facilitated through an interactive system that employs a hierarchical prompt-based approach to propose potential interpretations and structures based on a seed idea (Sec. 2.3). Experiments on both human evaluation (Sec. 3) and automatic evaluation of each module (Sec. 4) suggest that PersLEARN has the potential to enhance the quality of scientific research training significantly. ## 2 Design And Implementation Designing PersLEARN is required to answer two questions: (i) What is the appropriate representation of perspective frames that makes researchers comfortable? (ii) How to informationize such representation for both user input and automated generation? In response to the questions, we highlight how PersLEARN is implemented from a theoretical framework to an interactive system step by step. ## 2.1 Theoretical Framework2 Following the principle of analogical education (Thagard, 1992; Aubusson et al., 2006), we create a system of analogies to ground the abstract concepts about perspectives. First, the scientific knowledge covered by the literature about a seed idea is in a 2View an abstract video illustration of the framework: https://vimeo.com/802213146. ![2_image_0.png](2_image_0.png) Figure 3: **Illustration of perspective cultivation.** (A) Visual analogy of the process: interpreting the evidence in the papers given the seed idea and structuring the papers with the relations between them. (B) User interfaces during the process. higher-dimensional space than the perspective of a single paper (Duschl and Grandy, 2008). Here we set scientific knowledge of a seed idea as a 3D space and the specific perspective as a 2D plane for readability. For example, the seed idea "CV approaches on physics-based reasoning tasks" on the intersection of physics and CV can be framed as different specific perspectives, such as "physicsbased reasoning" (Zhu et al., 2020), "AI models for modeling human cognition" (Lake et al., 2017), "evaluation metrics of new tasks in AI" (Duan et al., 2022), "ethics in human-level tasks for AI" (Jackson Jr, 2018), and "interpretability of physics-based reasoning AI models" (Edmonds et al., 2019). To not be trapped in a single perspective, we should pay attention to the ingredients of the papers rather than the ideas claimed by the authors. 
On this basis, framing another perspective is projecting the 3D space to another 2D plane by making *slices* from the papers, where each slice is a subset of ingredients. Such slices are articulated with others under the logic of the seed idea. Thus, a perspective frame is cultivated on the plane, growing from a seed idea with few slices to a graph with slices connected (see Fig. 3 for details). Formally, the perspective frame is organized as a graph with information in nodes and edges on a 2D plane that instantiates the seed idea from the 3D space of scientific knowledge. The elements in a perspective frame can be described as follows: - **Seed idea**: A rough textual description of the perspective, e.g., "Physics-based reasoning using CV approaches," which serves as the starting point of the literature review and should be determined at the very beginning. - **Evidence**: A piece of evidence comes from every paper in the selected set of literature, which contains the grounded information (a text span) supporting the given seed idea. - **Slice**: A slice is the textual *interpretation* conditioned on the given seed idea based on a piece of evidence. A slice is a node in the graph. - **Connection**: A connection between two slices is the textual interpretation conditioned on the perspective given the relation (e.g., relationsin-common such as inspire and parallel; and relations-of-distinction such as improve, alternate, and compete) between two slices. A connection is an edge in the graph. Fig. 2 shows the interactive workflow of PersLEARN . Suppose one concerns the "model interpretability" (seed idea) of LSTM and Transformer, which is not the major perspective of either original paper of the two models. Given the corresponding two papers *'Long short-term memory'* and *'Attention is all you need'*, the **evidence generator** finds the evidence to support the seed idea from the papers: *'A multiplicative input gate unit is* introduced to protect the memory contents stored in j from perturbation by irrelevant inputs. Likewise, a multiplicative output gate unit is introduced which protects other units from perturbation by currently irrelevant memory contents stored in j. The resulting, more complex unit is called a memory cell.' and 'As side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, but many also appear to exhibit behavior related to the syntactic and semantic structure of the sentences.' The **slice generator** then generates the interpretations: 'An LSTM unit uses a series of gates to control the flow of information through the unit, which makes its operations opaque.' and 'The Transformer model can explicitly learn the attention distributions of the input sequence which is interpretable to some extent.' The **connection** generator finally provides the connection between these slices: 'Transformers generally offer a higher degree of interpretability than LSTMs due to their self-attention mechanism.'. Such cultivation of a brand new perspective helps students think outside the box, which usually yields innovation in scientific research and should serve as one of the major parts in research training. Notably, elements such as evidence, slices, and connections are not determined at once but may be revised in multiple iterations. 
As the perspective frame grows, the researcher's understanding of the seed idea goes deeper, and the contents of slices and connections are sharpened accordingly. Hence, instead of answering a *chicken-or-the-egg* problem between slices and connections, our users generate them iteratively. Varied by the seed ideas, a perspective can be a well-organized collection of information (e.g., "performance comparison between backbone models on physical-reasoning tasks" (Duan et al., 2022)), a statement (e.g., "intuitive physics may explain people's ability of physical reasoning" (Kubricht et al., 2017)), or a problem (e.g., "physical reasoning by CV approaches" (Zhu et al., 2020)). Though they come with different levels of abstraction, they all bring information gain, more or less (Abend, 2008).

PersLEARN echoes established theories, suggesting the integrity of our design. In a perspective frame, elements are contextualized in the entire frame by connecting with each other (Grenander, 2012; Shi et al., 2023); no element's meaning is determined solely by itself. Moreover, any revision of an element influences the larger structure. Such a representation has been shown to be an innate knowledge representation of humans—*theory theory* (Gopnik, 1994; Gopnik and Meltzoff, 1997; Carey, 1985, 2009). Furthermore, Carey (1986, 2000) has shown that such a framework can be captured and gradually revised by young students in science education. To the best of our knowledge, current tools for composing literature reviews (e.g., ResearchRabbit, Connected Paper, Inciteful, and litmaps) all focus on visualizing literature relationships based on similarity and citation links, without explicitly considering the framing of diverse perspectives.

![3_image_0.png](3_image_0.png)

Figure 4: ... its interpretation. (B) A generated perspective frame.

## 2.2 Implementing User Interface (UI)

A researcher may develop a seed idea when reading a few papers, even if it is far from a mature perspective. The user first locates the evidence in a paper by dragging the mouse to select the text span through the PDFViewer and adds the selected span into the Evidence Hub. Next, the user can generate a slice by writing a textual interpretation of the paper based on the evidence; this triggers the initialization of a new perspective frame, and the first slice can be dragged into the canvas (implemented with the D3.js library (Bostock et al., 2011)). The user can return to the papers for more pieces of evidence and revise interpretations by clicking on the slices and editing the information in the right bar. With more than one slice in the canvas, the user can connect two slices by dragging the mouse around them and then write a textual interpretation of the relation between them. As with slices, the user can also edit connections by clicking on them and editing the information in the right bar. The perspective frame is cultivated by repeating these steps, building up the mindset for perspective framing following the *learning by doing* principle (Schank et al., 1999). Please refer to Fig. 4 for an exemplar perspective frame.

![4_image_0.png](4_image_0.png)

Figure 5: ... Slice generation. (C) Connection generation.

In the user-centered design of the UI (Zaina et al., 2021), we follow established theories in design for education (Hu et al., 1999; Miraz et al., 2016), such as color classification (Wen, 2021) and hierarchical information display (Jinxian, 2020). These support the integrity of our UI design.
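To make the frame representation described in Secs. 2.1 and 2.2 concrete, the following is a minimal sketch of a perspective frame as a typed graph. The class and field names are illustrative only and do not correspond to the actual PersLEARN implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Relation labels mentioned in Sec. 2.1: relations-in-common and relations-of-distinction.
RELATIONS = {"inspire", "parallel", "improve", "alternate", "compete"}

@dataclass
class Slice:
    paper_id: str        # the paper this slice comes from
    evidence: str        # text span selected in the PDF viewer
    interpretation: str  # interpretation conditioned on the seed idea

@dataclass
class Connection:
    source: int          # index of the first slice
    target: int          # index of the second slice
    relation: str        # one of RELATIONS
    interpretation: str  # textual interpretation of the relation

@dataclass
class PerspectiveFrame:
    seed_idea: str
    slices: List[Slice] = field(default_factory=list)
    connections: List[Connection] = field(default_factory=list)

    def add_slice(self, s: Slice) -> int:
        """Add a node to the frame and return its index."""
        self.slices.append(s)
        return len(self.slices) - 1

    def connect(self, i: int, j: int, relation: str, interpretation: str) -> None:
        """Add an edge between two existing slices."""
        assert relation in RELATIONS, f"unknown relation: {relation}"
        self.connections.append(Connection(i, j, relation, interpretation))
```

A frame built this way maps directly onto the node-link data that graph visualization libraries such as D3.js consume, which is consistent with the canvas-based editing workflow above.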
## 2.3 Semi-Automating The Procedure We employ a hierarchical prompt-based approach to semi-automate the slice generation and connection generation. PersLEARN automatically generates some candidate proposals of slices and connections, and users can choose to accept, delete or modify these proposals. Generate proposals of slices Scientific papers generally have similar and main-streaming structures (Doumont, 2014). Humans read scientific papers effectively while considering this prior structure rather than browsing aimlessly. We leverage this intuition by proposing the hierarchical prompt-based approach. This approach takes the seed idea and partial texts of the paper (i.e., Abstract, Introduction, Discussion, and Conclusion) sections as input, and outputs the proposals of slices. We designed a hierarchical prompt-based approach (see Appx. A.1). First, we parse the seed idea to identify the specific field and domain of interest and fit the parsed terms into the prompting schema. Next, the prompted Large Language Model (LLM) extracts sentences from papers as evidence proposals. The LLM generates slice proposals conditioned on the evidence. Specifically, it consists of two prompting stages: prompt generation and answer extraction. In the first stage, we first prompt an LLM with a generated prompt. After the LLM generates a response, we extract the information as the answer. Next, we traverse the hierarchical prompting schema from the top down to adopt a prompt template. Finally, we concatenate it with the texts of the paper as the prefix to generate the response. In the second stage, we post-process the response by removing repeated words and punctuation marks such as extra spaces. Generate proposals of connections Similar to slice generation, generating proposals of connections follows a prompt-based approach. The LLM takes two slices as input and outputs the relation between these two slices. A connection shows the relation between two slices (e.g., relationsin-common such as inspire and parallel; and relations-of-distinction such as improve, alternate, and compete). Hence, we design the prompt as a multiple-choice question. Our approach avoids uncontrollable and timeconsuming manual designing while achieving comparable performance compared to existing fullymanual methods. Since we use the zero-shot setting, labor-consuming labeling is not required. ## 3 Human Evaluation To validate PersLEARN for research training, we conducted a human study following the standard protocols of digital device auxiliary scenarios in higher education (Van den Akker, 1999; Neuman, 2014). This study is approved by the Institutional Review Board (IRB) of Peking University. ## 3.1 Method Materials We created a scenario that simulates the training on writing literature reviews. The literature used in our simulation is five papers published at computer vision conferences. These papers have different topics varying from 3D scene parsing and reconstruction to learning object properties and using tools. However, they can be integrated together by interpreting from a physics-based perspective. Participants We recruited 24 participants from the Peking University participant pool (11 female; mean age = 22.63). Every participant was paid a wage of $14.6/h. We evenly divided participants into the control and experimental groups. Procedures All participants were required to read the five papers and compose a short paragraph of literature review given the perspective "Physicsbased reasoning." 
Only the abstract, introduction, and conclusion/discussion were mandatory to read, to reduce workload. The experiment lasted for 1 hour. The control group followed the standard procedure of writing reviews without PersLEARN, as researchers usually do in their studies: reading the raw papers and writing the review. The experimental group used PersLEARN to create the review: locating evidence, interpreting, illustrating relations, and synthesizing the review. All participants were free to use the internet for extra help, such as searching for new concepts and unfamiliar words.

## 3.2 Result

We evaluate PersLEARN both quantitatively and qualitatively to verify whether it helps students compose more logical and pertinent reviews.

Quantitative evaluation The reviews from the control and experimental groups were shuffled and sent to experts to grade. The grading metrics cover logicality and pertinence. Specifically, we asked 3 experts to grade consistency (Farkas, 1985), rationality (Kallinikos and Cooper, 1996), organization (Kallinikos and Cooper, 1996), topic relevance (Hayes, 2012), opinion clarity (Williams, 1990), and concreteness (Sadoski et al., 2000), each on a scale from 1 to 5. All of the experts hold Ph.D. degrees in related fields of AI, have been working on AI for at least eight years, and have no conflict of interest with the authors of this paper. The average scores of the control and experimental groups are 21.25 and 25.08, respectively. Fisher's exact test on the two variables (i.e., whether PersLEARN was used and the score) reveals that the experimental group significantly outperforms the control group in both logicality and pertinence (P = 0.0361; see Fig. 6a), suggesting that participants exploit the interpretations from a particular perspective and organize them by inducing their relations. Such a paradigm equips them with improved research training. Detailed scores on the 6 evaluation metrics are shown in Fig. 6b. The results demonstrate a noticeable improvement in academic review writing in terms of logicality and pertinence for the experimental group; the experimental group's performance shows a clear shift towards higher scores.

Qualitative evaluation We further conducted an interview to record qualitative comments after the participants in the experimental group finished their experiments. We interviewed them on how PersLEARN contributed to reading papers and composing reviews; see Appx. B.3 for the interview questions. Most participants stated that PersLEARN helped them better understand the content of articles, think more clearly, and organize their writing expediently. For future work, they hoped to embed intelligent agents that provide protocols for each procedure.

![5_image_0.png](5_image_0.png)

Figure 6: **The scores of the control and experimental groups.** The scores of each review are averaged across experts; scores in the plot may surpass 5.0 due to smoothing.

## 3.3 Discussion

We present a case study to show how the experimental group composes better reviews than the control group; see representative paragraphs in Appx. C.1. We conclude from our human study that PersLEARN can boost literature reading and review writing by providing a perspective-guided thinking framework of evidence locating, interpretation deriving, and relation inducing.
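For readers who want to reproduce the flavor of the comparison in Sec. 3.2, below is a minimal sketch using SciPy. The paper does not specify how the 2×2 contingency table for Fisher's exact test was constructed, so this sketch assumes each review's total score (six criteria, 1–5 each, averaged over experts) is dichotomized at the pooled median; the score lists are synthetic placeholders chosen only to match the reported group means (21.25 and 25.08), not the study's data.

```python
from statistics import median
from scipy.stats import fisher_exact

# Synthetic placeholder totals, one per review and 12 per group; NOT the study's data.
control_scores = [19, 19, 20, 20, 21, 21, 21, 22, 22, 23, 23, 24]        # mean 21.25
experimental_scores = [23, 24, 24, 24, 25, 25, 25, 26, 26, 26, 26, 27]   # mean 25.08

cut = median(control_scores + experimental_scores)

def counts(scores):
    """Return [#reviews above the pooled median, #reviews at or below it]."""
    above = sum(s > cut for s in scores)
    return [above, len(scores) - above]

table = [counts(experimental_scores), counts(control_scores)]  # 2x2 contingency table
odds_ratio, p_value = fisher_exact(table)                      # two-sided by default
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.4f}")
```

With the real per-review scores in place of the placeholders, the same few lines reproduce the significance test reported above under the stated dichotomization assumption.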
## 4 Automatic Evaluation

To automatically evaluate PersLEARN at scale, we introduce a *perspective reconstruction* task with three sub-tasks (slice generation, connection generation, and diversity evaluation), requiring the system to recover an established perspective frame given the same seed idea.

## 4.1 Benchmark Construction

Dataset Collection We carefully construct a testing set from reputation-established narrative reviews, expert reviews, and opinion articles to obtain a high-quality ground truth of perspectives. Systematic reviews and articles of information collection are removed from the set because such papers do not provide a sharp and unique perspective; we ensure that all articles are developed around a concrete and coherent perspective. Moreover, we ensure that every title is the epitome of the perspective held by the article; we treat the titles as seed ideas. 36 review articles are collected from diverse domains standing at the intersection of AI and CoRe, including CV, Natural Language Processing (NLP), Intuitive Physics (Phy), Causality (Cau), Abstract Reasoning (AbsRe), Mirroring and Imitation (MrIm), Tool Use (Tool), Non-verbal Communication (NvComm), Intentionality (Int), Theory of Mind (ToM), and Utility (U). This generates a literature set with 333 papers cited by at least one of the articles. Among these, 24 papers are cited by more than one article. Some of the papers are directly obtained from S2ORC (Lo et al., 2020), while others are parsed from raw PDF.

Table 1: **Result of our pipeline.** w/o and w/ denote without and with prompt engineering, respectively. The performance on slice generation, connection generation, and perspective diversity indicates the efficacy of our prompt engineering.

|             | Metric | w/o   | w/        |
|-------------|--------|-------|-----------|
| Slice       | BLEURT | 0.238 | **0.795** |
| Connection  | CR     | 0.450 | **0.550** |
| Perspective | VMR    | 0.028 | **0.006** |

Evaluation Metrics of slice generation For a cited paper in the original review, we treat the coherent sentences around the citation mark as the ground truth for the corresponding slice, following the same protocol as Li et al. (2022). Because the semantic meaning is critical (rather than the wording and phrasing), we employ BLEURT (Sellam et al., 2020) rather than word-wise evaluation metrics like ROUGE and BLEU (Lin, 2004; Papineni et al., 2002). The BLEURT score indicates the similarity between two statements; larger scores mean better performance.

Evaluation Metrics of connection generation Since the connection between two papers under the same perspective is conditioned only on the slices, we focus on the logical consistency between the generated connection and the two input slices. Following the setting of Natural Language Inference (NLI), we calculate the Consistent Rate (CR), the proportion of entailment predictions among all predictions. Higher CR indicates better performance. We employ the state-of-the-art model DeBERTaV3 (He et al., 2021b,a) as the NLI model for evaluation.

Evaluation Metrics of Diversity This is an extended case study based on slice generation. We specifically study how different perspectives drive the interpretations of the same set of papers. We calculate the normalized Variance-to-Mean Ratio (VMR) over the BLEURT scores on all established perspectives of a set of papers for each approach. A lower VMR indicates that an approach generates slices conditioned on different perspectives well.

## 4.2 Experiments Of Slice Generation

Setup We use **InstructGPT** (Ouyang et al., 2022) as the backbone LLM for our prompt-based approach.
The input and output are the same as in Sec. 2.3. The baseline approach directly prompts the LLM with the target output without the proposed hierarchical prompting schema. Results The BLEURT results in Tab. 1 show that the generation with prompt engineering outperforms that without by a large margin (233%). This result validates our pipeline in abstract understanding and perspective-based interpreting; see representative slices in Appx. C.2. ## 4.3 Experiments Of Connection Generation Setup We use **InstructGPT** as the backbone LLM model. The input and output of this evaluation are the same as the connection proposed in Sec. 2.3. The baseline approach directly prompts the LLM with the target output. Results As shown in Tab. 1, our connection generation module surpasses the baseline approach in CR by a large margin (22%). It means more logical connections are generated by our approach and thus contribute to more entailment predictions. See representative connections in Appx. C.3. ## 4.4 Experiments On Diverse Perspectives Setup We use **InstructGPT** as the backbone LLM model for both the slice and the connection generation modules. The baseline approach adopts the slice and connection generation modules without the proposed schema. Results The VMR results in Tab. 1 show that PersLEARN generates slices of richly diverse perspectives, surpassing the baseline by a large margin (79%). We present some examples of the interpretations of different perspectives; see representative slices in Appx. C.4. ## 5 Discussion We present PersLEARN to facilitate scientific research training by explicitly cultivating perspectives. Human study shows that PersLEARN significantly helps junior researchers set up the mindset for jumping out of perspective given by the literature and framing their own ones. Extensive benchmarking shows that our system has the potential to mine perspectives out of diverse domains of literature without much human effort. These experiments suggest that PersLEARN has the potential to support scientific research training in generalfrom explicating one's own perspective to embracing the diverse perspectives of others. Readers can refer to the "Broader Impact" and "Limitation" sections (Sec. 5) for further discussions. ## Ethics Statement The human study presented in this work has been approved by the IRB of Peking University. We have been committed to upholding the highest ethical standards in conducting this study and ensuring the protection of the rights and welfare of all participants. Considering that the workload of the procedure for participants is relatively high among all human studies, we paid the participants a wage of $14.6/h, which is significantly higher than the standard wage (about $8.5/h). Every expert was paid $240 for grading the 24 review paragraphs composed by the participants. We have obtained informed consent from all participants, including clear and comprehensive information about the purpose of the study, the procedures involved, the risks and benefits, and the right to withdraw at any time without penalty. Participants were also assured of the confidentiality of their information. Any personal data collected (including name, age, and gender) was handled in accordance with applicable laws and regulations. ## Broader Impact The underlying impact of the mindset brought by PersLEARN goes beyond research training toward science education in general. 
Specifically, PersLEARN provides the infrastructure for further investigation in two aspects: (1) embracing the diverse perspectives of the same scientific topic to construct a stereoscopic understanding of the topic; (2) facilitating the communication between junior researchers with different mindsets. The broader impact is analogous to the classic fable *Blind men and an elephant*, where each man interpreted the elephant differently because they were standing on different perspectives. Though this has been a metaphor complaining that science is limited by observation (Heisenberg, 1958), it highlights the virtue of scientific research—focused, and every young researcher understands and interprets science from a focused perspective. Hence, to gain a more comprehensive view of the elephant, the blind men may put their understandings of it together and then try to synthesize it based on their perspectives. In contrast, a sighted person may view the elephant from a distance and capture a holistic view at firstshe ends up with a superficial understanding of the elephant if not selecting a perspective and going close to the elephant, like the blind. Thus, by embracing diverse perspectives (i.e., visualizing the perspective frames in a hub), one gets a stereoscopic view and, more importantly, a deeper understanding of the scientific topic. Moreover, when the metaphorical *blind men* in the fable attempt to articulate their distinct perspectives, they may be hindered by the gap between mindsets. To exemplify, individual might struggle to comprehend the concept of a "fan", which in their perception, the elephant appears to resemble. This suggests that the communication of science should be executed in a listener-aware way and that the speaker's perspective should be transformed (i.e., by changing the terms used in slices and connections) to its analogical equivalent in the listener's mindset. Thus, science can be communicated easily, facilitating its transparency, reliability, and the chances of crossdomain collaboration. In summary, our framework of scientific perspective may bring science education to a future with better student-centered considerations (Leshner, 2018). ## Limitations As a preliminary work, the design and evaluation of PersLEARN come with limitations, leading to further investigations: - Can we construct a larger scale dataset of explicit perspective frames of the literature for more fields in the sciences, such as biology, sociology, etc.? - Can we fine-tune LLMs on the larger dataset to obtain better performance on slice and connection generation? - Can we carry out a human study at a larger temporal scale, say during one semester, to track the progress of students using PersLEARN ? With many questions unanswered, we hope to facilitate research training and science education in a broader way. ## Acknowledgements The authors would like to thank Dr. Yujia Peng, Yuxi Wang, and Zili Li for their kind help in setting up the human study. This work is supported by the Institute for Artificial Intelligence at Peking University. W.H. is supported by the Talent Fund of Beijing Jiaotong University (2023XKRC006). All authors declare no conflict of interest. ## References Gabriel Abend. 2008. The meaning of 'theory'. *Sociological Theory*, 26(2):173–199. Peter J Aubusson, Allan G Harrison, and Stephen M Ritchie. 2006. *Metaphor and analogy: Serious* thought in science education. Springer. Michael Bostock, Vadim Ogievetsky, and Jeffrey Heer. 2011. D3 data-driven documents. 
IEEE Transactions on Visualization and Computer Graph, 17(12):2301– 2309. Rodger W Bybee. 2006. Scientific inquiry and science teaching. Scientific inquiry and nature of science: Implications for teaching, learning, and teacher education, pages 1–14. Susan Carey. 1985. *Conceptual change in childhood*. MIT Press. Susan Carey. 1986. Cognitive science and science education. *American Psychologist*, 41(10):1123. Susan Carey. 2000. Science education as conceptual change. *Journal of Applied Developmental Psychology*, 21(1):13–19. Susan Carey. 2009. *The Origin of Concepts*. Oxford University Press. Karin Knorr Cetina. 1999. *Epistemic cultures: How the* sciences make knowledge. Harvard University Press. Jean-luc Doumont. 2014. Structuring Your Scientific Paper. In *English Communication for Scientists*. Nature. Jiafei Duan, Arijit Dasgupta, Jason Fischer, and Cheston Tan. 2022. A survey on machine learning approaches for modelling intuitive physics. In International Joint Conference on Artificial Intelligence. Richard A Duschl and Richard E Grandy. 2008. Teaching scientific inquiry: Recommendations for research and implementation. BRILL. Mark Edmonds, Feng Gao, Hangxin Liu, Xu Xie, Siyuan Qi, Brandon Rothrock, Yixin Zhu, Ying Nian Wu, Hongjing Lu, and Song-Chun Zhu. 2019. A tale of two explanations: Enhancing human trust by explaining robot behavior. *Science Robotics*, 4(37). David K Farkas. 1985. The concept of consistency in writing and editing. Journal of Technical Writing and Communication, 15(4):353–364. Samuel J Gershman, Eric J Horvitz, and Joshua B Tenenbaum. 2015. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. *Science*, 349(6245):273–278. Alison Gopnik. 1994. The theory theory. In *Mapping* the mind: Domain specificity in cognition and culture, pages 257–293. Cambridge University Press. Alison Gopnik and Andrew N Meltzoff. 1997. Words, thoughts, and theories. MIT Press. Ulf Grenander. 2012. *A calculus of ideas: a mathematical study of human thought*. World Scientific. Thomas L Griffiths, Falk Lieder, and Noah D Goodman. 2015. Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. *Topics in Cognitive Science*, 7(2):217–229. John Hayes. 2012. Modeling and remodeling writing. Written communication, 29:369–388. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. *arXiv preprint arXiv:2111.09543*. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021b. Deberta: Decoding-enhanced bert with disentangled attention. In *International* Conference on Learning Representations. Werner Heisenberg. 1958. Physics and philosophy: The revolution in modern science. *Physics Today*, 11(9):36. Paul Jen-Hwa Hu, Pai-Chun Ma, and Patrick YK Chau. 1999. Evaluation of user interface designs for information retrieval systems: a computer-based experiment. *Decision Support Systems*, 27(1-2):125–143. Philip C Jackson Jr. 2018. Toward beneficial humanlevel ai... and beyond. In *AAAI Spring Symposia*. Lin Jinxian. 2020. Application of guiding design in ui design of mobile terminal. In 2020 IEEE International Conference on Innovation Design and Digital Technology (ICIDDT). Jannis Kallinikos and Robert Cooper. 1996. Writing, rationality and organization: An introduction. *Scandinavian Journal of Management*, 12:1–6. James R Kubricht, Keith J Holyoak, and Hongjing Lu. 2017. 
Intuitive physics: Current research and controversies. *Trends in Cognitive Sciences*, 21(10):749– 759. Thomas S Kuhn. 1970. *The structure of scientific revolutions*. University of Chicago Press: Chicago. Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. 2017. Building machines that learn and think like people. Behavioral and Brain Sciences, 40:e253. Bruno Latour. 1987. Science in action: How to follow scientists and engineers through society. Harvard University Press. Bruno Latour and Steve Woolgar. 1986. Laboratory life: The construction of scientific facts. Princeton University Press. Bruno Latour et al. 1999. *Pandora's hope: Essays on* the reality of science studies. Harvard University Press. Alan I. Leshner. 2018. Student-centered, modernized graduate stem education. *Science*, 360(6392):969– 970. Richard L Lewis, Andrew Howes, and Satinder Singh. 2014. Computational rationality: Linking mechanism and behavior through bounded utility maximization. *Topics in Cognitive Science*, 6(2):279–311. Xiangci Li, Biswadip Mandal, and Jessica Ouyang. 2022. Corwa: A citation-oriented related work annotation dataset. In *Proceedings of the 2022 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Falk Lieder and Thomas L Griffiths. 2020. Resourcerational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43:e1. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In *Text summarization* branches out. Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kinney, and Daniel S Weld. 2020. S2orc: The semantic scholar open research corpus. In Annual Meeting of the Association for Computational Linguistics. Helen E Longino. 1990. *Science as social knowledge:* Values and objectivity in scientific inquiry. Princeton University Press. Michael Lynch. 1993. Scientific practice and ordinary action: Ethnomethodology and social studies of science. Cambridge University Press. Mahdi H Miraz, Peter S Excell, and Maaruf Ali. 2016. User interface (ui) design issues for multilingual users: a case study. *Universal Access in the Information Society*, 15:431–444. Delia Neuman. 2014. Qualitative research in educational communications and technology: A brief introduction to principles and procedures. Journal of Computing in Higher Education, 26:69–86. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, et al. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics. Mark Sadoski, Ernest T Goetz, and Maximo Rodriguez. 2000. Engaging texts: Effects of concreteness on comprehensibility, interest, and recall in four text types. *Journal of Educational Psychology*, 92:85. Roger C Schank, Tamara R Berman, and Kimberli A Macpherson. 1999. Learning by doing. Instructional-design theories and models: A new paradigm of instructional theory, 2(2):161–181. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. Bleurt: Learning robust metrics for text generation. In *Annual Meeting of the Association for Computational Linguistics*. Yu-Zhe Shi, Manjie Xu, John E. 
Hopcroft, Kun He, Joshua B Tenenbaum, Song-Chun Zhu, Ying Nian Wu, Wenjuan Han, and Yixin Zhu. 2023. On the complexity of bayesian generalization. In *International* Conference on Machine Learning. Paul Thagard. 1992. Analogy, explanation, and education. *Journal of Research in Science Teaching*, 29(6):537–544. Jan Van den Akker. 1999. Principles and methods of development research. *Design Approaches and Tools* in Education and Training, pages 1–14. Guoying Wen. 2021. Research on color design principles of ui interface of mobile applications based on vision. In *2021 IEEE International Conference on* Advances in Electrical Engineering and Computer Applications (AEECA). Joseph Williams. 1990. Toward clarity and grace. Chicago: The University of Chicago. Luciana AM Zaina, Helen Sharp, and Leonor Barroca. 2021. Ux information in the daily work of an agile team: A distributed cognition analysis. International Journal of Human-Computer Studies, 147. Yixin Zhu, Tao Gao, Lifeng Fan, Siyuan Huang, Mark Edmonds, Hangxin Liu, Feng Gao, Chi Zhang, Siyuan Qi, Ying Nian Wu, et al. 2020. Dark, beyond deep: A paradigm shift to cognitive ai with humanlike common sense. *Engineering*, 6(3):310–345. ## A Implementation Details A.1 Prompting Schema Slice generation We specifically engineer a prompting schema in a hierarchical fashion. First, we parse the seed idea to identify the specific field and domain: - {seed idea} + What fields and domains does the article focus on? Only list the name. ## Then, We Use The Following Prompt To Detect Evidence: - {paper} + Which sentences in the text are about {fields}? List the original sentences. Finally, we match the following prompts with the parsed terms to generate an interpretation: - {evidence} + What are the motivations in the text for studying {fields}? - What methods and approaches in the text are used to study {fields}? - what theories, models, and methods in the text are proposed to study {fields}? - what results and conclusions in the text related to {fields} are drawn? - what results and conclusions in the text related to {fields} are drawn? - what implications or suggestions in the text for future research of {fields} are advocated? ## Connection Generation Our Engineered Prompt Comes In A Selective Fashion. - What are the differences (improve, alternate, compete) between the work {slice_1} and another work {slice_2} on motivations, methods, results, or conclusions to study {fields}? - What are the similarities (inspire, parallel) between the work {slice_1} and another work {slice_2} on motivations, methods, results, or conclusions to study {fields}? ## A.2 Alternate Approach Slice Generation The Baseline Prompt Is Simply A Direct Prompt. - {evidence} + What interpretation about {fields} can we get from the text? ## Connection Generation The Baseline Prompt Is Simply A Direct Prompt. - What are the differences and similarities between the work {slice_1} and another work {slice_2} on studying {fields}? ## B Experimental Details B.1 Instructions For Participants Read the abstract, introduction, discussion, and conclusion sections of the following article, and write a short review entitled "Physics-based reasoning" in a txt file using the given process (see the tutorial for instructions; only for the experimental group). The experiments will last for one hour. You can use the Internet to help you write. 1 Holistic 3d Scene Parsing and Reconstruction from a Single RGB Image. ECCV, 2018. 2 Scene Understanding by Reasoning Stability and Safety. IJCV, 2015. 
3 Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning. NeurIPS, 2015. 4 Physics 101: Learning Physical Object Properties from Unlabeled Videos. BMVC, 2016. 5 Understanding Tools: Task-oriented Object Modeling, Learning, and Recognition. CVPR, 2015. ## B.2 Interfaces In The Procedures We show the screenshots of user interactions during the experiments, from entering an input perspective Fig. A1, to selecting papers Fig. A2, generating pieces of evidence Fig. A3, generating slices Fig. A4, generating connections Fig. A5, and browsing the perspective frame Fig. A6. To note, though these steps are demonstrated in a monotonic order here, every step is repeatable and extensible. ## B.3 Interview Questions We interview participants on the following two questions. - How does PersLEARN help you compose reviews? - How can PersLEARN be improved? ## C Extended Results C.1 Perspective Paragraphs By Subjects We show anonymized representative examples from both the control group and the experimental group of the human study. All examples are kept original without any revision, including typos. Colored texts are used to **highlight** interpretation and relation respectively facilitated by slices and connections of PersLEARN in the experimental group. The top three paragraphs from the experimental group **with** pertinent interpretations and logical relations: \#1: Physics-based reasoning has been used for two aims. The first is to learn the physical properties of an object. For example, Galileo[3] and Physics 101[4] learn physical properties like mass and density from videos. Bo Zheng et al[2] learn stability and safety of objects in a scene. The second is to enrich the object representation by incorporating physical features. The enriched representation is then used to assist other visual tasks. Siyuan Huang et al[1] design a physically enriched HSG representation of 3D scene structure in the single-view 3D reconstruction task. Yixin Zhu et al[5] use a representation consisting four physical-functional components in object recognition task. Relatively brief runs of MCMC can drive simultions in physics engine to fit the key features of visual image, which has a simlilarly accurate outcome comparing with human intutions[3]. Further study expand the abilities of learning the basic features of scenes, which makes the 3D parsing and reconstruction real. HSG can establish a joint distribution over the functional and geometric space of scenes, which capatures the latent human context, geometric constraints and physical constraints[1]. By implenting a new framwork, visual system learns the tools properties, the using methods, and the later action to do some related works, which not only recognize the appearance, but also explain the physical mechanisms[5]. \#3: Conventional scene understanding methods mostly neglect the object's physical properties, rendering their weak ability of accurately understanding the scenes. To addess this issue, Zheng et al. [2] proposed a novel 3D scene understanding approach from a new perspective of reasoning object stability and safety using intuitive mechanics. As a step further, Huang et al. [1] proposed a computational framework to jointly parse a single RGB image and reconstruct a holistic 3D configuration, jointly considering latent human context, geometric constraints, and, physical constraints to guarantee the physical plausibility. 
The top three paragraphs from the control group which **fail to** interpret and organize from the perspective of physics-based reasoning: \#1: The five papers all concerns over a main topic, that is how to effectively train artificial intelligence to precieve the outside physical world and afterwards giving different forms of feedback or guidance on new circumstances. The first and second are generally about understanding scenes but have some differences in their domains .The first one using a RGB Image to generate 3D scend applying the Markov chain Monte Carlo (MCMC) method[1], and the second focusing on building novel algorithms which are able to reason object stability and safety using intuitive mechanics with the representation of the disconnecty graphs and disturbance field[2]. They all propose a new perspective for machines to logically and correctly process the human-understood information. #2: Machine learning and deep learning are applied to study physical object properties. In 2015, Jiajun Wu proposed a generative model for solving problems of physical scene understanding from real-world videos and imag[3]. As the same time, Yixin Zhu presented a new framework – task-oriented modeling, learning and recognition which aims at understanding the underlying functions, physics and causality in using objects as "tools"[5]. Later in 2016, Jiajun Wu proposed an unsupervised model to learn physical object properties from unlabeled video[4]. Also, physics-based reasoning palys an important role in 3D parse and holistic 3D scene understanding. Bo Zheng presented a new perspective for 3D scene understanding by reasoning object stability and safety using intuitive mechanics[2]. Siyuan Huang proposed a computational framework to jointly parse a single RGB image and reconstruct a holistic 3D configuration composed by a set of CAD models using a stochastic grammar model[1]. #3: The complexity and richness of human vision are not only reflected by the ability to recognize visible objects, but also to reason about the latent actionable information, including inferring physical object properties[4] or latent human context as the functionality of a scene[1]. But how does human vision achieve such complexity and richness ? Some studies propose a model based on physics principle for understanding these mechanisms[2,3,4]. Even other sudies consider more complex situations, where humans have tasks[5] and other hidden human context[1]. ## C.2 Slices Generated By Machines We present several examples of the generated slices with their BLEURT scores from the pipeline with and without prompt engineering. \#1: w/o: -1.132 The article suggests that deep learning, adversarial attacks, and countermeasures are independent of the data miner's activities. #1: w/: 0.326 The article focuses on the motivations for studying deep learning, adversarial attacks, and countermeasures in order to better understand and defend against data manipulation by adversaries. The paper uses deep learning, adversarial attacks, and countermeasures to study the problem. The paper proposes a formal framework and algorithms for studying deep learning, adversarial attacks, and countermeasures. It also discusses theories, models, and methods related to these fields and domains. The article suggests that future research should focus on developing deep learning systems that are more robust to adversarial attacks and require less human intervention to keep up with them. 
#2: w/o: -1.090 The article suggests that understanding the cognitive development, infant development, and developmental psychology of others is essential for successful social interactions. #2: w/: 0.616 The article uses a variety of methods and approaches to study the fields of cognitive development, infant development, and developmental psychology. These include observational studies, experiments, and surveys. The article also uses a variety of theoretical frameworks, such as Piaget's theory of cognitive development, to analyze the data. The strengths of the text are that it provides evidence of infants' anticipatory responses to others' actions from an early age, and it also provides evidence that infants attend prospectively to others' actions. The limitations of the text are that it does not clarify whether infants anticipated the goal per se because the goal and pattern of movement were confounded. #3: w/o: -0.735 Sentiment analysis, news categorization, question answering, and natural language inference are all techniques used to analyze text and extract meaning from it. #3: w/: 0.422 The methods and approaches used to study sentiment analysis, news categorization, question answering, and natural language inference in the text include supervised learning, unsupervised learning, deep learning, natural language processing, and machine learning. The theories, models, and methods proposed to study sentiment analysis, news categorization, question answering, and natural language inference in the text are Convolutional Neural Networks (CNNs) and SVMs with well-engineered features. ## C.3 Connections Generated By Machines Our pipeline can discover the similarities and differences between slices. Below are some representative connections the model generatesgenerated by the model with and without prompt engineering. \#1: w/o The two works have similar motivations for studying Knowledge Graph Representation Learning, Knowledge Acquisition and Completion, Temporal Knowledge Graph, and Knowledge-Aware Applications. The methods used in the two works are also similar, with Knowledge Graph Representation Learning typically involving graph embedding, graph convolutional networks, and graph neural networks, Knowledge Acquisition and Completion typically involving rule-based approaches, ontology-based approaches, and machine learning-based approaches, Temporal Knowledge Graphs typically involving temporal graph embedding, temporal graph convolutional networks, and temporal graph neural networks, and Knowledge-Aware Applications typically involving knowledge-aware recommendation systems, knowledge-aware question answering systems, and knowledge-aware search systems. The results and conclusions of the two works are also similar, with Knowledge Graph Representation Learning theories, models, and methods including graph embedding, graph convolutional networks, graph attention networks, and graph generative models, Knowledge Acquisition and Completion methods including rule-based methods, ontology-based methods, and machine learning-based methods, Temporal Knowledge Graph methods including temporal graph embedding, temporal graph convolutional networks, and temporal graph attention networks, and Knowledge-Aware Applications including question answering, recommendation systems, and natural language processing. 
#1: w/ The two works have similar motivations, methods, results, and conclusions when it comes to studying Knowledge Graph Representation Learning, Knowledge Acquisition and Completion, Temporal Knowledge Graph, and Knowledge-Aware Applications. The main difference between the two works is that the first work focuses on how knowledge graphs can be used to extract relations from text and can be used for knowledge acquisition and completion, while the second work provides an overview of the topics and suggests that they are related and can be used to develop applications that are aware of knowledge. #2: w/o The motivations for studying Artificial Intelligence, Cognitive Science, Pattern Recognition, Physics, and Psychology in the text are to gain a better understanding of how humans learn and process information, and to develop algorithms that can learn from fewer examples than traditional machine learning algorithms. The methods used in the text are theories, models, and methods from Artificial Intelligence, Cognitive Science, Pattern Recognition, Physics, and Psychology to study simple visual concepts. The results of the text suggest that future research should focus on understanding how people learn new concepts from just one or a few examples, and how people learn such abstract, rich, and flexible representations. The conclusion of the text is that any theory of concept learning should address these questions and provide implications or suggestions for future research in Artificial Intelligence, Cognitive Science, Pattern Recognition, Physics, and Psychology. #2: w/ The two works have similar motivations in studying Artificial Intelligence, Cognitive Science, Pattern Recognition, Physics, and Psychology. The methods used in both works are similar, as they both suggest combining multiple levels of representation with the flexible use of long range context. The results of the first work suggest that deep Long Short-term Memory RNNs are a powerful model for sequential data and can be used for tasks such as cursive handwriting recognition and phoneme recognition. The results of the second work suggest that Artificial Intelligence, Cognitive Science, Pattern Recognition, Physics, and Psychology have not been able to replicate two aspects of human conceptual knowledge. The conclusions of both works are that combining multiple levels of representation with the flexible use of long range context can lead to improved results. ## C.4 Slices For Different Perspectives We show some interpreted slices of two cited papers (indexed by \#) from different perspectives (*italicized*). These two papers are: 1 Heuristic judgment of mass ratio in two-body collisions. Perception & Psychophysics 56 (1994): 708-720. 2 Learning a theory of causality. Psychological Rreview 118.1 (2011): 110. \#1: *Intuitive Physics: Current Research and Controversies* Intuitive Physics, Perception, Reasoning, and Artificial Intelligence are all studied in order to gain a better understanding of how the world works and how to create more efficient and effective systems. These fields are also studied in order to develop new technologies and applications that can be used to improve the lives of people. 
Methods and approaches used to study Intuitive Physics, Perception, Reasoning, and Artificial Intelligence include computational modeling, cognitive psychology, neuroscience, and machine learning.Theories, models, and methods proposed to study Intuitive Physics, Perception, Reasoning, and Artificial Intelligence include Bayesian inference, probabilistic graphical models, deep learning, reinforcement learning, and evolutionary algorithms. The results and conclusions drawn from the text related to Intuitive Physics, Perception, Reasoning, and Artificial Intelligence are that humans have an innate ability to understand physical concepts and use them to make decisions and solve problems. This suggests that humans have an intuitive understanding of physics that can be used to inform Artificial Intelligence algorithms. Additionally, the text suggests that humans are capable of making decisions and solving problems based on their perception of the physical world, and that this ability can be used to inform Artificial Intelligence algorithms. \#1: *Mind Games: Game Engines as an Architecture for Intuitive Physics* The motivations for studying Artificial Intelligence, game development, and physics simulation are to gain a better understanding of how these technologies work, to develop new applications and technologies, and to explore the potential of these technologies for solving real-world problems. Artificial intelligence, game development, and physics simulation can be studied using a variety of methods and approaches, including machine learning, deep learning, reinforcement learning, evolutionary algorithms, and probabilistic methods. Artificial intelligence, game development, and physics simulation can be studied using a variety of theories, models, and methods. These include machine learning, deep learning, reinforcement learning, evolutionary algorithms, game theory, and physics-based simulations. Strengths of the text for studying Artificial Intelligence, game development, and physics simulation include its comprehensive coverage of the topics, its use of examples to illustrate key concepts, and its clear explanations of complex topics. Limitations of the text include its lack of in-depth coverage of certain topics and its lack of discussion of the latest developments in the field. \#2: *Bayesian Models of Conceptual Development: Learning as Building Models of the World* The motivations in the text for studying Cognitive Development, Core Knowledge, Child as Scientist, Bayesian Program Induction, Computational Advances, Scientific Theories, Intuitive Theories, Biological Evolution, and Cultural Evolution are to gain a better understanding of the principles of causal reasoning and to develop a more comprehensive account of causality. Cognitive Development, Core Knowledge, Child as Scientist, Bayesian Program Induction, Computational Advances, Scientific Theories, Intuitive Theories, Biological Evolution, and Cultural Evolution are all methods and approaches used to study the blessing of abstraction. Cognitive Development, Core Knowledge, Child as Scientist, Bayesian Program Induction, Computational Advances, Scientific Theories, Intuitive Theories, Biological Evolution, and Cultural Evolution are all theories, models, and methods proposed to study Cognitive Development, Core Knowledge, Child as Scientist, Bayesian Program Induction, Computational Advances, Scientific Theories, Intuitive Theories, Biological Evolution, and Cultural Evolution. 
\#2: *Intuitive Theories*

The motivations in the text for studying Cognitive science, psychology, and philosophy are to gain a better understanding of the principles of causal reasoning and to develop a description of the principles by which causal reasoning proceeds. Cognitive science, psychology, and philosophy are studied using methods and approaches such as logical reasoning, empirical observation, and experimentation. Cognitive science, psychology, and philosophy are studied using theories, models, and methods such as Bayesian networks, causal inference, and counterfactual reasoning. The results and conclusions drawn from the text related to Cognitive science, psychology, and philosophy are that abstract reasoning can be used to quickly learn causal theories, and that this can be beneficial in certain situations.
li-etal-2023-lavis
{LAVIS}: A One-stop Library for Language-Vision Intelligence
https://aclanthology.org/2023.acl-demo.3
We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that brings recent advancements in the language-vision field accessible for researchers and practitioners, as well as fertilizing future research and development. It features a unified interface to easily access state-of-the-art image-language, video-language models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. In the meantime, the library is also highly extensible and configurable, facilitating future development and customization. In this technical report, we describe design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks.
# Lavis: A One-Stop Library For Language-Vision Intelligence Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, Steven C.H. Hoi Salesforce Research Open-source repository: **https://github.com/salesforce/LAVIS** Supplementary video: **https://youtu.be/0CuRowHu7TA** ## Abstract We introduce LAVIS, an open-source deep learning library for LAnguage-VISion research and applications. LAVIS aims to serve as a one-stop comprehensive library that brings recent advancements in the language-vision field accessible for researchers and practitioners, as well as fertilizing future research and development. It features a unified interface to easily access state-of-the-art image-language, videolanguage models and common datasets. LAVIS supports training, evaluation and benchmarking on a rich variety of tasks, including multimodal classification, retrieval, captioning, visual question answering, dialogue and pre-training. In the meantime, the library is also highly extensible and configurable, facilitating future development and customization. In this paper, we describe design principles, key components and functionalities of the library, and also present benchmarking results across common language-vision tasks. ## 1 Introduction Multimodal content, in particular language-vision data including texts, images and videos are ubiquitous for real-world applications, such as content recommendation, e-commerce and entertainment. There has been tremendous recent progress in developing powerful language-vision models (Su et al., 2020; Lu et al., 2019; Chen et al., 2020; Li et al., 2020; Huang et al., 2021; Li et al., 2021a; Radford et al., 2021; Zhou et al., 2020; Gan et al., 2020; Cho et al., 2021; Zhang et al., 2021; Li et al., 2022b; Zhu and Yang, 2020; Bain et al., 2021; Xu et al., 2021; Lei et al., 2021; Li et al., 2022a). However, training and evaluating these models across tasks and datasets require domain knowledge and are not always welcoming to incoming researchers and practitioners. This is mainly due to inconsistent interfaces across models, datasets and task evaluations, and also the duplicating yet 31 ![0_image_0.png](0_image_0.png) non-trivial efforts to prepare the required experiment setup. To make accessible the emerging language-vision intelligence and capabilities to a wider audience, promote their practical adoptions, and reduce repetitive efforts in future development, we build LAVIS (short for LAnguage-VISion), an open-source library for training, evaluating stateof-the-art language-vision models on a rich family of common tasks and datasets, as well as for offthe-shelf inference on customized language-vision data. Figure 1 shows the overall design of LAVIS. Important features of LAVIS include (i) **Unified interface and modular design**. Key components in the library are organized using a unified and modular design. This allows effortless off-the-shelf access to individual components, swift development and easy integration of new or external components. The modular design also eases model inferences, such as multimodal feature extraction. (ii) Comprehensive support of image-text, videotext tasks and datasets. LAVIS supports a growing list of more than ten common language-vision tasks, across over 20 public datasets. These tasks and datasets provide a comprehensive and unified benchmark for evaluating language-vision models. (iii) **State-of-the-art and reproducible languagevision models**. 
The library enables access to over 30 pre-trained and task-specific fine-tuned model checkpoints of 5 foundation models: ALBEF (Li et al., 2021a), BLIP (Li et al., 2022b), BLIP2 (Li et al., 2023b), CLIP (Radford et al., 2021) and ALPRO (Li et al., 2022a), as well as state-of-the-art language-vision methods such as PnP-VQA (Tiong et al., 2022) and Img2Prompt (Guo et al., 2022). These models achieve competitive performance across multiple tasks, representing the up-to-date development status of language-vision research. We also provide training and evaluation scripts and configurations to facilitate reproducible language-vision research and adoption. (iv) **Resourceful and useful toolkit**. In addition to the core library functionalities, we also provide useful resources to reduce the learning barriers for language-vision research. This includes automatic dataset downloading tools to help prepare the supported datasets, a GUI dataset browser to help preview downloaded datasets, and dataset cards documenting sources, supported tasks and leaderboards. ## 2 Related Work Table 1 compares LAVIS' key features with those of other libraries. The most closely related libraries include MMF (Singh et al., 2020), UniLM (uni, 2020), X-modaler (Li et al., 2021b) and TorchMultimodal (tor, 2022). - MMF is a comprehensive multimodal framework encapsulating many language-vision models and datasets. It implements a modular interface for training and evaluation. However, it consists mostly of task-specific architectures. Besides showing relatively inferior performance, these models are usually not easy to transfer across tasks. Among the foundation models (Li et al., 2019; Chen et al., 2020; Zhang et al., 2021; Li et al., 2021a) included in MMF, few fully support finetuning or benchmarking on the extended list of downstream tasks. In contrast, considering that pre-trained foundation models prevail across overwhelmingly many tasks and datasets with more principled and unified architectures, our library focuses on pre-trained models and their task-specific variants instead. - UniLM was initiated for developing large language models, and recently also aggregates multiple standalone repositories of multimodal models. Yet, support for multimodal models in UniLM is limited in its current development status. Moreover, UniLM does not provide unified or modular interfaces to allow easy access or reproduction. - X-modaler supports a limited number of tasks and datasets, which are not as comprehensive as those in LAVIS. Besides, similar to MMF, models in X-modaler are also mostly task-specific architectures. The few supported foundation models, e.g. (Chen et al., 2020), achieve inferior results compared to models in LAVIS. - A concurrent library, TorchMultimodal (tor, 2022), promotes modular development of language-vision models. Our library supports a wider range of tasks and datasets than TorchMultimodal while being more comprehensive and resourceful. Other open-source implementations of individual models exist (Chen et al., 2020; Li et al., 2020; Lu et al., 2019; Radford et al., 2021; Gan et al., 2020; Lei et al., 2021), yet they do not provide centralized access. In summary, in contrast to previous efforts, our library stands out by providing *easier* access to stronger models across *more* tasks and datasets. With this effort, we hope to significantly reduce the cost and effort to leverage and benchmark existing multimodal models, as well as to develop new models.
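To make this centralized access concrete, the snippet below is a minimal sketch (ours, not taken from this report) of listing the supported architectures and loading a pre-trained checkpoint by name through the `model_zoo` and `load_model` interfaces; the exact keyword arguments may differ across library versions, so treat them as assumptions.

```
import torch
from lavis.models import model_zoo, load_model

# print the table of supported architectures and their model types
print(model_zoo)

# load a pre-trained captioning checkpoint by name, in evaluation mode
device = "cuda" if torch.cuda.is_available() else "cpu"
model = load_model(name="blip_caption", model_type="base_coco",
                   is_eval=True, device=device)
```

Weights are fetched automatically on first load, so these two calls are all that is needed to move from browsing the model zoo to running inference.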
## 3 Supported Tasks, Datasets And Models Table 3 summarizes the supported tasks, datasets and models in LAVIS. In particular, we prioritize tasks that are standard, widely adopted for evaluation, and with publicly available datasets. For image-text tasks, the library implements image-text retrieval, image captioning, visual question answering (VQA), visual dialogue, visual entailment (VE), natural language visual reasoning (NLVR2) and image classification. For video-text tasks, LAVIS currently support video-text retrieval and video question answering (VideoQA). There are in total over 20 public datasets supported, including MSCOCO (Lin et al., 2014), Flickr30k (Plummer et al., 2015), VQAv2 (Goyal et al., 2017), OK-VQA (Marino | LAVIS (Ours) | MMF | UniLM | X-modaler | TorchMultimodal | | | |--------------------------------------------|---------------------------------------------------------|------------------------------------------------------------------------|-------------|-------------------|----|----| | Unified Model and Dataset Interface | ✓ | | | | | | | Modular Library Design | ✓ | ✓ | ✓ | ✓ | | | | Pre-trained Model Checkpoints | ✓ | | | | | | | Task-specific Finetuned Model Checkpoints | ✓ | ✓ | | | | | | Modalities | Image-Text | ✓ | ✓ | ✓ | ✓ | ✓ | | Video-Text | ✓ | ✓ | ✓ | | | | | End2end Pre-training | ✓ | ✓ | ✓ | | | | | Multimodal Retrieval | ✓ | ✓ | ✓ | | | | | Captioning | ✓ | ✓ | ✓ | | | | | Visual Question Answering | ✓ | ✓ | ✓ | | | | | Multimodal Classification | ✓ | ✓ | | | | | | Instructed Zero-shot Generation | ✓ | | | | | | | Visual Dialogue | ✓ | | | | | | | Multimodal Feature Extraction | ✓ | | | | | | | Tasks | Benchmarks | ✓ | | | | | | Dataset Auto-downloading | ✓ | ✓ | | | | | | Dataset Browser | ✓ | | | | | | | GUI Demo | ✓ | | | | | | | Dataset Cards | ✓ | | | | | | | Toolkit | Table 2: Supported tasks, datasets and models in LAVIS. | | | | | | | Supported Tasks | Supported Models | Supported Datasets | | | | | | Image-text Pre-training | ALBEF, BLIP, BLIP2, InstructBLIP | COCO, Visual Genome, SBU Caption, Conceptual Captions (3M, 12M), LAION | | | | | | Image-text Retrieval | ALBEF, BLIP, BLIP2, CLIP | COCO, Flickr30k | | | | | | Visual Question Answering | ALBEF, BLIP, BLIP2, InstructBLIP | VQAv2, OKVQA, A-OKVQA, GQA | | | | | | Image Captioning | BLIP, BLIP2, InstructBLIP | COCO Caption, NoCaps | | | | | | Image Classification | CLIP | ImageNet | | | | | | Natural Language Visual Reasoning (NLVR2 ) | ALBEF, BLIP | NLVR2 | | | | | | Visual Entailment | ALBEF | SNLI-VE | | | | | | Visual Dialogue | BLIP, InstructBLIP | VisDial | | | | | | Video-text Retrieval | ALPRO, BLIP | MSRVTT, DiDeMo | | | | | | Video Question Answering | ALPRO, BLIP, InstructBLIP | MSRVTT-QA, MSVD-QA | | | | | | Video Dialogue | BLIP | AVSD | | | | | et al., 2019), A-OK-VQA (Shevchenko et al., 2021), GQA (Hudson and Manning, 2019), Visual Genome (Krishna et al., 2017), ImageNet (Deng et al., 2009), NoCaps (Agrawal et al., 2019), Conceptual Captions (Sharma et al., 2018; Changpinyo et al., 2021), SBU-caption (Ordonez et al., 2011), LAION (Schuhmann et al., 2021), NLVR2(Suhr et al., 2019), SNLI-VE (Bowman et al., 2015), VisDial (Das et al., 2017), AVSD (Alamri et al., 2019), MSRVTT (Xu et al., 2016), MSVD (Xu et al., 2017), DiDeMo (Anne Hendricks et al., 2017) and their task-specific variants. LAVIS currently supports 6 foundation models, i.e. 
ALBEF (Li et al., 2021a), BLIP (Li et al., 2022b), BLIP2 (Li et al., 2023b), CLIP (Radford et al., 2021), InstructBLIP (Dai et al., 2023) and ALPRO (Li et al., 2022a). In addition, the library also features language-vision methods including PnPVQA (Tiong et al., 2022) and Img2prompt (Guo et al., 2022), and text-to-image generation model BLIP-Diffusion (Li et al., 2023a). These models and methods show strong performance on the aforementioned tasks and datasets, representing the upto-date development status of the language-vision research field. Detailed description can be found in A.1 ## 4 Library Design This section delineates the design of LAVIS as shown in Figure 1. Our key design principle is to provide a simple and unified library to easily (i) train and evaluate the model; (ii) access supported models and datasets; (iii) extend with new models, tasks and datasets. ## 4.1 Description On Each Library Component Key Components In Lavis Include: - **Runners** - lavis.runners module manages the overall training and evaluation lifecycle. It is also responsible for creating required components lazily as per demand, such as optimizers, learning rate schedulers and dataloaders. Currently, RunnerBase implements epoch-based training and RunnerIters implements iteration-based training. - **Tasks** - lavis.tasks module implements concrete training and evaluation logic per task. This includes pre-training and finetuning tasks as listed in Table 3. The rationale to have an abstraction of task is to accommodate taskspecific training, inference and evaluation. For example, evaluating a retrieval model is different from a classification model. - **Datasets** - lavis.datasets module helps create datasets. Specifically, datasets.builders module loads dataset configurations, downloads annotations and builds the dataset; - lavis.datasets.datasets module defines the supported datasets, each is a PyTorch dataset instance. - We also provide automatic dataset downloading tools in datasets/download_scripts to help prepare common public datasets. - **Models** - lavis.models module holds definitions for the supported models and shared model layers. - **Processors** –lavis.processors module handles preprocessing of multimodal input. A processor transforms input images, videos and texts into the desired form that models can consume. - **Common tools and utilities** – lavis.commons module contains shared classes and methods used by multiple other modules. For example, configs module contains classes to store and manipulate configuration files used by LAVIS. In particular, we use a hierarchical configuration design, to allow highly customizable training and evaluation. The registry module serves as a centralized place to manage modules that share the same functionalities. It allows building datasets, models, tasks, and learning rate schedulers during runtime, by specifying their names in the configuration; optims contains definitions of learning rate schedulers; utils contains miscellaneous utilities, mostly IO-related helper functions; ## 4.2 Example Library Usage The design of the library enables easy access to existing models and future development. In this section, we include a few examples to demonstrate some common use cases. Unified interface for data and model loading LAVIS provides unified interface load_dataset and load_model to access supported datasets and models. This is helpful for off-the-shelf use of datasets and model inference etc. 
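For instance, off-the-shelf inference with a visual question answering model takes only a few lines. The snippet below is a sketch of this pattern rather than an example from the report: the name/model_type strings ("blip_vqa", "vqav2"), the predict_answers call and the local image path are our assumptions about the released BLIP VQA variant.

```
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = "cuda" if torch.cuda.is_available() else "cpu"
# load the VQA model with its image and text processors (names are assumptions)
model, vis_procs, txt_procs = load_model_and_preprocess(
    name="blip_vqa", model_type="vqav2", is_eval=True, device=device)

raw_image = Image.open("photo.jpg").convert("RGB")  # any local image
question = "Which city is this photo taken in?"

# preprocess the raw inputs into the form the model consumes
image = vis_procs["eval"](raw_image).unsqueeze(0).to(device)
question = txt_procs["eval"](question)

# generate a free-form answer
answers = model.predict_answers(
    samples={"image": image, "text_input": question},
    inference_method="generate")
print(answers)
```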
In the first example, we show how to load a dataset using the library.

```
from lavis.datasets.builders import load_dataset

# load a specific dataset
coco_dataset = load_dataset("coco_caption")
# the dataset is organized by split names
print(coco_dataset.keys())
# dict_keys(['train', 'val', 'test'])
# total number of samples in the training split
print(len(coco_dataset["train"]))
# 566747
# peek a random sample
print(coco_dataset["train"][0])
# {'image': <PIL.Image.Image image mode=RGB size=640x480>,
#  'text_input': 'A woman wearing a net on her head cutting a cake.',
#  'image_id': 0}
```

Models and their related preprocessors can also be loaded via a unified interface, which facilitates effortless analysis and inference on custom data. In the following, we show an example that uses a BLIP captioning model to generate image captions.

```
from lavis.models import load_model_and_preprocess

# load model and preprocessors
model, vis_procs, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco")
# raw_image is a PIL Image instance
raw_image = coco_dataset["test"][0]["image"]
# preprocess a raw input image
image = vis_procs["eval"](raw_image).unsqueeze(0)
# generate a caption
caption = model.generate({"image": image})
# ['a man riding a motorcycle down a dirt road']
```

Unified interface for multimodal feature extraction LAVIS supports a unified interface to extract multimodal features. The features are especially useful for offline applications where end-to-end finetuning is not affordable. By changing name and model_type, users can choose different model architectures and pre-trained weights.

```
# load the feature extraction model and processors
model, vis_procs, txt_procs = load_model_and_preprocess(
    name="blip_feature_extractor", model_type="base")
# a random instance from the coco dataset
raw_image = coco_dataset["test"][0]["image"]
text = coco_dataset["test"][0]["text_input"]
# preprocess the inputs
image = vis_procs["eval"](raw_image).unsqueeze(0)
text_input = txt_procs["eval"](text)
sample = {"image": image, "text_input": [text_input]}
# extract multimodal features
feature = model.extract_features(sample)
```

## 5 Benchmarks And Library Toolkit In this section, we benchmark model performance across tasks and datasets in LAVIS. We then use our web demo interface to show a few case studies on multimodal content understanding. We also present a GUI dataset browser that helps to preview supported datasets. ## 5.1 Main Results The purpose of the benchmark is two-fold. First, we use the benchmark to validate that our reimplementation faithfully replicates official models. Second, the benchmark also serves as a reference for further development. In Tables 3 and 4, we organize benchmark results by models and compare our replication results with those reported officially.
| Tasks | Datasets | Metrics | Official | LAVIS |
|---|---|---|---|---|
| VQA | VQAv2 | acc. | 78.25 | 78.32 |
| Multimodal Classification | NLVR2 | acc. | 82.15 | 82.24 |
| Retrieval (TR) | COCO | R1/R5/R10 | 82.4 / 95.4 / 97.9 | 82.0 / 95.8 / 98.1 |
| Retrieval (IR) | COCO | R1/R5/R10 | 65.1 / 86.3 / 91.8 | 64.5 / 86.0 / 91.7 |
| Retrieval (TR) | Flickr30k | R1/R5/R10 | 97.2 / 99.9 / 100.0 | 96.9 / 99.9 / 100.0 |
| Retrieval (IR) | Flickr30k | R1/R5/R10 | 87.5 / 97.7 / 98.9 | 87.5 / 97.6 / 98.9 |
| Image Captioning | COCO | B@4/CIDEr/SPICE | 39.7 / 133.3 / - | 39.7 / 133.5 / 23.7 |
| Image Captioning | NoCaps | B@4/CIDEr/SPICE | - / 109.6 / 14.7 | 31.9 / 109.1 / 14.7 |

Table 3: Comparison between official and replicated task performance.

| Tasks | Datasets | Metrics | Official | LAVIS |
|---|---|---|---|---|
| Retrieval (TR) | COCO | R1/R5/R10 | 58.4 / 81.5 / 88.1 | 57.2 / 80.5 / 87.8 |
| Retrieval (IR) | COCO | R1/R5/R10 | 37.8 / 62.4 / 72.2 | 36.5 / 60.8 / 71.0 |
| Retrieval (TR) | Flickr30k | R1/R5/R10 | 88.0 / 98.7 / 99.4 | 86.5 / 98.0 / 99.1 |
| Retrieval (IR) | Flickr30k | R1/R5/R10 | 68.7 / 90.6 / 95.2 | 67.0 / 88.9 / 93.3 |
| Zero-shot Image Classification | ImageNet (val) | acc. | 76.2 | 76.5 |

Table 4: Comparison between official and replicated performance using CLIP-ViT-L/336. Note the relative difference is possibly due to the versioning of the model weights.

Experiments are conducted on NVIDIA A100 GPUs. For ALBEF, BLIP, BLIP2 and ALPRO, we reimplement their models in LAVIS based on the official repositories and report finetuning results using their official pre-trained weights. For CLIP models, we integrate a third-party implementation (Ilharco et al., 2021) and report CLIP-ViT-L/336 zero-shot inference results using the official weights (Radford et al., 2021) (Table 4). As can be seen in the tables, our library produces results consistent with those reported officially. More benchmarking results with BLIP and ALPRO models can be found in A.2.

![5_image_0.png](5_image_0.png) ![5_image_1.png](5_image_1.png)

## 5.2 Library Resources And Toolkit In addition to the components mentioned above, LAVIS also provides a useful toolkit and resources to further ease development. This includes pre-trained and finetuned model checkpoints, automatic dataset downloading tools, a web demo and a dataset browser. Pre-trained and finetuned model checkpoints. We include pre-trained and finetuned model checkpoints in the library. This promotes easy replication of our experiment results and makes it easy to repurpose pre-trained models for other applications. Model checkpoints are downloaded automatically upon loading models. Web demo. As shown in Figure 2, we develop a GUI-based web demo, which aims to provide a user-friendly interface to explore various multimodal capabilities. Currently the demo supports the following functionalities: (i) *image captioning*: produces a natural language caption describing an input image; (ii) *visual question answering*: answers natural language questions regarding the input image; (iii) *multimodal search*: searches for images in a gallery given a text query; (iv) *text visualization*: given an input image and a text caption, produces GradCam (Selvaraju et al., 2017) visualizations for each text token on the image; (v) *zero-shot multimodal classification*: classifies an input image into a set of labels given as text. Thanks to the modular design of LAVIS, one can easily extend the demo with new functionalities, such as *text-to-image generation*, as shown in Figure 2. Automatic dataset downloading and browsing.
Preparing language-vision datasets for pre-training and fine-tuning incurs much duplicated effort. To this end, LAVIS provides tools to automatically download and organize the public datasets, so that users can access the common datasets more easily and quickly. In addition, we develop a GUI dataset browser, as shown in Figure 3, that helps users to rapidly gain intuitions about the data they use. ## 6 Conclusion And Future Work We present LAVIS, an open-source deep learning library for language-vision research and applications. The library is designed to provide researchers and practitioners with easier and more comprehensive access to state-of-the-art multimodal capabilities. The library also features a unified interface and extensible design to promote future development. Besides, the library features extensive access to pre-trained weights and useful resources to reduce duplicated replication efforts. With these features, we expect LAVIS to serve as a one-stop library in multimodal AI for a wider audience. We continue to actively develop and improve LAVIS. In future releases, our priorities are to include more language-vision models, tasks and datasets in the library. We also plan to add more parallelism support for scalable training and inference. While we will maintain LAVIS in the long term, we invite contributions from the open-source community to join this evolving effort. ## Broader Impact And Responsible Use LAVIS can provide useful capabilities for many real-world multimodal applications. It features easy, unified and centralized access to powerful language-vision models, facilitating effective multimodal analysis and reproducible research and development. We encourage researchers, data scientists, and ML practitioners to adopt LAVIS in real-world applications for positive social impacts, e.g. efficient and environment-friendly large-scale multimodal analysis. However, LAVIS may also be misused. We encourage users to read detailed discussions and guidelines for building responsible AI, e.g. (Baxter, 2022). In particular, LAVIS should not be used to develop multimodal models that may expose unethical capabilities. It is also important to note that models in LAVIS provide no guarantees on their multimodal abilities; incorrect or biased predictions with out-of-date information may be observed. In particular, the datasets and pre-trained models utilized in LAVIS contain socioeconomic biases which may result in misclassification and other unwanted behaviors such as offensive or inappropriate speech. We strongly recommend that users review the pre-trained models and the overall system in LAVIS before practical adoption. We plan to improve the library by investigating and mitigating these potential biases and inappropriate behaviors in the future. ## References 2020. Large-scale self-supervised pre-training across tasks, languages, and modalities. https://github.com/microsoft/unilm. 2022. Torchmultimodal (alpha release). https://github.com/facebookresearch/multimodal. Harsh Agrawal, Peter Anderson, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. nocaps: novel object captioning at scale. In *ICCV*, pages 8947–8956. Huda Alamri, Vincent Cartillier, Abhishek Das, Jue Wang, Anoop Cherian, Irfan Essa, Dhruv Batra, Tim K Marks, Chiori Hori, Peter Anderson, et al. 2019. Audio visual scene-aware dialog. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pages 7558–7567.
Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In *Proceedings of the IEEE international conference* on computer vision, pages 5803–5812. Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. 2021. Frozen in time: A joint video and image encoder for end-to-end retrieval. In *ICCV*. Kathy Baxter. 2022. Ai is everywhere - but are you building it responsibly? https://www.salesforce.com/blog/ build-ethical-ai/?hasLoggedIn=true. Gedas Bertasius, Heng Wang, and Lorenzo Torresani. 2021. Is space-time attention all you need for video understanding? In *ICML*. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In *EMNLP*, pages 632–642. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12M: Pushing webscale image-text pre-training to recognize long-tail visual concepts. In *CVPR*. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: universal image-text representation learning. In *ECCV*, volume 12375, pages 104–120. Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. *arXiv preprint arXiv:2102.02779*. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In *CVPR*, pages 1080–1089. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL*, pages 4171–4186. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*. Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. In *NeurIPS*. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In *CVPR*, pages 6325– 6334. Liangke Gui, Borui Wang, Qiuyuan Huang, Alex Hauptmann, Yonatan Bisk, and Jianfeng Gao. 2021. Kat: A knowledge augmented transformer for vision-andlanguage. *arXiv preprint arXiv:2112.08614*. Jiaxian Guo, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Boyang Li, Dacheng Tao, and Steven CH Hoi. 2022. From images to textual prompts: Zero-shot vqa with frozen large language models. *arXiv preprint arXiv:2212.10846*. 
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Zhicheng Huang, Zhaoyang Zeng, Yupan Huang, Bei Liu, Dongmei Fu, and Jianlong Fu. 2021. Seeing out of the box: End-to-end pre-training for visionlanguage representation learning. arXiv preprint arXiv:2104.03135. Drew A. Hudson and Christopher D. Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In *CVPR*, pages 6700–6709. Gabriel Ilharco, Mitchell Wortsman, Ross Wightman, Cade Gordon, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, John Miller, Hannaneh Hajishirzi, Ali Farhadi, and Ludwig Schmidt. 2021. Openclip. If you use this software, please cite it as below. Amita Kamath, Christopher Clark, Tanmay Gupta, Eric Kolve, Derek Hoiem, and Aniruddha Kembhavi. 2022. Webly supervised concept expansion for general purpose vision models. *arXiv preprint* arXiv:2202.02317. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. *IJCV*, 123(1):32–73. Hung Le, Nancy Chen, and Steven Hoi. 2022. VGNMN: Video-grounded neural module networks for videogrounded dialogue systems. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3377–3393, Seattle, United States. Association for Computational Linguistics. Hung Le, Nancy F. Chen, and Steven Hoi. 2021. Learning reasoning paths over semantic graphs for videogrounded dialogues. In International Conference on Learning Representations. Hung Le and Steven C.H. Hoi. 2020. Video-grounded dialogues with pretrained generation language models. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 5842–5848, Online. Association for Computational Linguistics. Hung Le, Doyen Sahoo, Nancy Chen, and Steven Hoi. 2019. Multimodal transformer networks for end-toend video-grounded dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5612–5623, Florence, Italy. Association for Computational Linguistics. Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L Berg, Mohit Bansal, and Jingjing Liu. 2021. Less is more: Clipbert for video-and-language learning via sparse sampling. In *CVPR*, pages 7331–7341. Dongxu Li, Junnan Li, and Steven C. H. Hoi. 2023a. Blip-diffusion: Pre-trained subject representation for controllable text-to-image generation and editing. Dongxu Li, Junnan Li, Hongdong Li, Juan Carlos Niebles, and Steven C.H. Hoi. 2022a. Align and prompt: Video-and-language pre-training with entity prompts. In *CVPR*. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. *arXiv preprint arXiv:2301.12597*. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022b. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. *arXiv preprint arXiv:2201.12086*. Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, and Steven Hoi. 2021a. 
Align before fuse: Vision and language representation learning with momentum distillation. In *NeurIPS*. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, abs/1908.03557. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In *ECCV*, pages 121–137. Yehao Li, Yingwei Pan, Jingwen Chen, Ting Yao, and Tao Mei. 2021b. X-modaler: A versatile and highperformance codebase for cross-modal analytics. In Proceedings of the 29th ACM International Conference on Multimedia, pages 3799–3802. Zekang Li, Zongjia Li, Jinchao Zhang, Yang Feng, and Jie Zhou. 2021c. Bridging text and video: A universal multimodal transformer for audio-visual sceneaware dialog. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 29:2476–2483. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In *ECCV*, volume 8693, pages 740–755. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, pages 13–23. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195–3204. Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In *NIPS*, pages 1143–1151. Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In *ICCV*, pages 2641–2649. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. Laion-400m: Open dataset of clipfiltered 400 million image-text pairs. *arXiv preprint* arXiv:2111.02114. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In *ICCV*, pages 618–626. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, pages 2556–2565. Violetta Shevchenko, Damien Teney, Anthony Dick, and Anton van den Hengel. 2021. Reasoning over vision and language: Exploring the benefits of supplemental knowledge. *arXiv preprint arXiv:2101.06013*. Amanpreet Singh, Vedanuj Goswami, Vivek Natarajan, Yu Jiang, Xinlei Chen, Meet Shah, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. 2020. Mmf: A multimodal framework for vision and language research. https://github.com/facebookresearch/mmf. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. 
Vl-bert: Pre-training of generic visual-linguistic representations. In *ICLR*. Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In ACL, pages 6418–6428. Anthony Meng Huat Tiong, Junnan Li, Boyang Li, Silvio Savarese, and Steven CH Hoi. 2022. Plug-andplay vqa: Zero-shot vqa by conjoining large pretrained models with zero training. arXiv preprint arXiv:2210.08773. Denny Vrandeciˇ c and Markus Krötzsch. 2014. Wiki- ´ data: a free collaborative knowledgebase. *Communications of the ACM*, 57(10):78–85. Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. 2017. Video question answering via gradually refined attention over appearance and motion. In Proceedings of the ACM international conference on Multimedia, pages 1645–1653. Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. 2021. Videoclip: Contrastive pre-training for zero-shot video-text understanding. In *EMNLP*, pages 6787–6800. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 5288–5296. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Making visual representations matter in vision-language models. *arXiv* preprint arXiv:2101.00529. Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and VQA. In *AAAI*, pages 13041–13049. Linchao Zhu and Yi Yang. 2020. Actbert: Learning global-local video-text representations. In *CVPR*, pages 8746–8755. ## A Appendix A.1 Details Of The Supported Models - ALBEF is an image-text model. It employs a ViT (Dosovitskiy et al., 2021) as the image encoder, early BERT (Devlin et al., 2019) layers as the text encoder, and re-purposes late BERT layers as the multimodal encoder by adding cross-attentions. It proposes the novel image-text contrastive (ITC) loss to align unimodal features before fusing them using the multimodal encoder. It is also one of the first few models requiring no region information while demonstrating strong multimodal understanding capability. - BLIP primarily tackles image-text tasks, while also showing strong zero-shot transfer capabilities to video-text tasks. It employs a ViT as the image encoder and a BERT as the text encoder. To facilitate multimodal understanding and generation, BLIP proposes mixture of encoder-decoder (MED), which repurposes BERT into multimodal encoder and decoder with careful weight sharing. Moreover, BLIP proposes dataset bootstrapping to improve the quality of texts in the pre-training corpus by removing noisy ones and generating new diverse ones. In addition to the improved understanding capability compared to ALBEF, BLIP highlights its strong text generation ability, producing accurate and descriptive image captions. When adapted to video-text tasks, it operates on sampled frames while concatenating their features to represent the video. - BLIP2 represents a generic and efficient language-vision pre-training strategy that leverages available frozen image encoders and large language models (LLMs). 
The model introduces a two-stage training strategy to bridge the modality gap with a lightweight module, called the Querying Transformer (Q-Former). In addition to strong performance on existing tasks, including VQA, multimodal retrieval and captioning, BLIP-2 also unlocks the novel capability of zero-shot image-to-text generation following natural language instructions. - CLIP is a family of powerful image-text models. Different from ALBEF and BLIP, CLIP models adopt two unimodal encoders to obtain image and text representations. CLIP maximizes the similarity between positive image-text pairs, and was trained on 400M image-text pairs, rendering strong and robust unimodal representations. CLIP variants employ different visual backbones, including ResNet50 (He et al., 2016), ViT-B/16, ViT-B/32, ViT-L/14 and ViT-L/14-336. We integrate a third-party implementation of CLIP (Ilharco et al., 2021) into LAVIS while including the official pre-trained weights. - ALPRO is a video-text model, tackling video-text retrieval and video question answering tasks. It uses TimeSformer (Bertasius et al., 2021) to extract video features, and BERT to extract text features. Similar to ALBEF, ALPRO uses a contrastive loss to align unimodal features, yet it opts to use self-attention to model multimodal interaction. This architectural choice enables an additional visually-grounded pre-training task, i.e. prompt entity modeling (PEM), to align fine-grained video-text information. ALPRO is strong in extracting regional video features and remains competitive for video understanding tasks across various datasets. ## A.2 Additional Benchmarking Results In Tables 5 and 6, we show benchmarking results with the BLIP and ALPRO reimplementations in LAVIS. As shown in the tables, the results are consistent with those of the original implementations. In Table 7, we present results obtained by adapting models in LAVIS to new tasks and datasets on which the models were not previously reported. In this way, we show that our library makes it easy to adapt models to new tasks and datasets while achieving competitive performance. Knowledge-based VQA (KVQA). The task of KVQA aims to measure the commonsense knowledge learnt by language-vision models, where models are asked to answer questions involving external knowledge. To this end, state-of-the-art models (Gui et al., 2021; Kamath et al., 2022) resort to external knowledge bases (Vrandečić and Krötzsch, 2014) or large language models (Brown et al., 2020). In our experiments, we show that language-vision pre-trained models finetuned on VQAv2 (Goyal et al., 2017) show strong transfer results to KVQA datasets. With additional finetuning on KVQA datasets, further improvements are observed on both OK-VQA and AOK-VQA datasets. As a result, our best model, BLIP, surpasses the previous state-of-the-art by a clear margin.

Table 5: Comparison between official and replicated task performance using BLIP.

| Tasks | Datasets | Metrics | Official | LAVIS |
|---|---|---|---|---|
| VQA | VQAv2 | dev / std | 78.25 / 78.32 | 78.23 / 78.29 |
| Multimodal Classification | NLVR2 | val / test | 82.15 / 82.24 | 82.48 / 83.25 |
| Retrieval (TR) | COCO | R1/R5/R10 | 82.4 / 95.4 / 97.9 | 82.0 / 95.8 / 98.1 |
| Retrieval (IR) | COCO | R1/R5/R10 | 65.1 / 86.3 / 91.8 | 64.5 / 86.0 / 91.7 |
| Retrieval (TR) | Flickr30k | R1/R5/R10 | 97.2 / 99.9 / 100.0 | 96.9 / 99.9 / 100.0 |
| Retrieval (IR) | Flickr30k | R1/R5/R10 | 87.5 / 97.7 / 98.9 | 87.5 / 97.6 / 98.9 |
| Image Captioning | COCO | B@4/CIDEr/SPICE | 39.7 / 133.3 / - | 39.7 / 133.5 / 23.7 |
| Image Captioning | NoCaps | B@4/CIDEr/SPICE | - / 109.6 / 14.7 | 31.9 / 109.1 / 14.7 |

Video Dialogue.
The task of video-grounded dialogues requires models to generate a natural response given a dialogue context and a grounding video (Alamri et al., 2019). Existing models have exploited new architectural designs (Le et al., 2019), additional learning tasks (Le et al., 2022, 2021), and pretraining (Le and Hoi, 2020; Li et al., 2021c) to improve models' abilities to understand multimodal context and generate natural language. In our experiments, we show that our library can be easily integrated with vision-language models such as VGD-GPT (Le and Hoi, 2020) to adapt them to this dialogue task. The results in Table 7 show that our model implementation with LAVIS leads to impressive performance, comparable to current state-of-the-art approaches. In the following, we provide additional benchmarking results using models in LAVIS.

Table 6: Comparison between official and replicated task performance using ALPRO. TR denotes video-to-text retrieval; VR denotes text-to-video retrieval.

| Tasks | Datasets | Metrics | Official | LAVIS |
|---|---|---|---|---|
| Retrieval (TR) | MSRVTT | R1/R5/R10 | 32.0 / 60.7 / 70.8 | 33.2 / 60.5 / 71.7 |
| Retrieval (VR) | MSRVTT | R1/R5/R10 | 33.9 / 60.7 / 73.2 | 33.8 / 61.4 / 72.7 |
| Retrieval (TR) | DiDeMo | R1/R5/R10 | 37.9 / 67.1 / 77.9 | 38.8 / 66.4 / 76.8 |
| Retrieval (VR) | DiDeMo | R1/R5/R10 | 35.9 / 67.5 / 78.8 | 36.6 / 67.5 / 77.9 |
| VideoQA | MSRVTT | test acc. | 42.1 | 42.1 |
| VideoQA | MSVD | test acc. | 45.9 | 46.0 |

Table 7: Experiment results on KVQA compared with the best existing methods. Due to the submission number limits, only the BLIP AOKVQA result on the test split is reported.

| Tasks | Datasets | Models | Results |
|---|---|---|---|
| KVQA | OKVQA (test) | KAT (Single) (Gui et al., 2021) | 53.1 |
| | | KAT (Ensemble) (Gui et al., 2021) | 54.4 |
| | | ALBEF | 54.7 |
| | | BLIP | 55.4 |
| KVQA | AOKVQA (val / test) | GPV-2 (Kamath et al., 2022) | 48.6 / 40.7 |
| | | ALBEF | 54.5 / - |
| | | BLIP (VQAv2) | 53.4 / - |
| | | BLIP | 56.2 / 50.1 |
| Video Dialogue | AVSD (B@4 / CIDEr) | MTN (Le et al., 2019) | 0.410 / 1.129 |
| | | PDC (Le et al., 2021) | 0.429 / 1.194 |
| | | RLM (Li et al., 2021c) | 0.459 / 1.308 |
| | | VGD-GPT | 0.465 / 1.315 |

## A.3 Supplementary Video And Online Demo The supplementary video can be found at: **https://youtu.be/0CuRowHu7TA**. Alternatively, the video can be downloaded from: **https://drive.google.com/file/d/1cFTEgL53WI-oSFbWR_6k6eg2iqAi9bwe/view?usp=sharing**. A public demo of LAVIS can be found at the temporary address: **http://34.123.225.190:8080/**
kwon-mihindukulasooriya-2023-finspector
Finspector: A Human-Centered Visual Inspection Tool for Exploring and Comparing Biases among Foundation Models
https://aclanthology.org/2023.acl-demo.4
Pre-trained transformer-based language models are becoming increasingly popular due to their exceptional performance on various benchmarks. However, concerns persist regarding the presence of hidden biases within these models, which can lead to discriminatory outcomes and reinforce harmful stereotypes. To address this issue, we propose Finspector, a human-centered visual inspection tool designed to detect biases in different categories through log-likelihood scores generated by language models. The goal of the tool is to enable researchers to easily identify potential biases using visual analytics, ultimately contributing to a fairer and more just deployment of these models in both academic and industrial settings. Finspector is available at \url{https://github.com/IBM/finspector}.
# Finspector: A Human-Centered Visual Inspection Tool For Exploring And Comparing Biases Among Foundation Models Bum Chul Kwon IBM Research Cambridge, MA, United States bumchul.kwon@us.ibm.com Nandana Mihindukulasooriya IBM Research Dublin, Ireland nandana@ibm.com ## Abstract Pre-trained transformer-based language models are becoming increasingly popular due to their exceptional performance on various benchmarks. However, concerns persist regarding the presence of hidden biases within these models, which can lead to discriminatory outcomes and reinforce harmful stereotypes. To address this issue, we propose Finspector, a human-centered visual inspection tool designed to detect biases in different categories through log-likelihood scores generated by language models. The goal of the tool is to enable researchers to easily identify potential biases using visual analytics, ultimately contributing to a fairer and more just deployment of these models in both academic and industrial settings. Finspector is available at https://github.com/IBM/finspector. ## 1 Introduction Recently, pre-trained large language models (LLMs), including 'foundation models,' that are trained on large amounts of data have shown striking performance in a variety of natural language processing (NLP) tasks such as language translation, text classification, and summarization. Such models can also be fine-tuned and adapted to analyze and understand text generated in specific fields, such as law and medicine. Despite their usefulness, there is a growing concern that foundation models inherently reflect human biases, which might have originated from their large training corpora (Shah et al., 2020; Liang et al., 2021; Weidinger et al., 2021; Garrido-Muñoz et al., 2021). ![0_image_0.png](0_image_0.png) These social biases include stereotyping and negative generalizations of different social groups and communities, which could have been present in their training corpora (Liang et al., 2021; Garrido-Muñoz et al., 2021). Stereotyping, a cognitive bias, is defined as the assumption that certain characteristics apply to communities on the basis of their nationality, ethnicity, gender, religion, etc. (Schneider, 2005). Relatedly, fairness ("zero-bias") in the context of NLP and machine learning is defined as not being discriminatory with respect to such characteristics (Garrido-Muñoz et al., 2021). Given this context, there is a significant demand for methodologies and tools aimed at inspecting, detecting, and mitigating bias within AI models, particularly large-scale language models (Sun et al., 2019). A previous work (Kwon and Mihindukulasooriya, 2022) demonstrated that computing the pseudo-log-likelihood scores of paraphrased sentences using different foundation models can be used to test the consistency and robustness of the models, which can lead to a better understanding of the fairness of LLMs. Pseudo-log-likelihood masked language model (MLM) scoring, or the log probability of auto-regressive language models, can be used to measure how likely a language model is to produce a given sentence (Salazar et al., 2020). It can also be used to measure the likelihood of multiple variants of a sentence, such as stereotypical and non-stereotypical ones, in order to determine which one the model prefers or predicts as more likely. Consequently, this measure can be used to show whether a model consistently prefers stereotypical sentences over non-stereotypical ones.
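Concretely, the pseudo-log-likelihood of a sentence under a masked language model can be obtained by masking one token at a time, scoring each masked token, and aggregating the per-token log probabilities. The sketch below illustrates this idea with the Hugging Face transformers API; it is our own minimal illustration of the scoring scheme (following Salazar et al., 2020), not code from Finspector, and the choice of bert-base-uncased is an assumption.

```
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Minimal pseudo-log-likelihood (PLL) scorer; an illustrative sketch, not Finspector code.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    token_log_probs = []
    # mask each non-special token in turn and score the original token
    for i in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_prob = torch.log_softmax(logits, dim=-1)[input_ids[i]]
        token_log_probs.append(log_prob.item())
    # aggregate per-token scores (the mean is used here; a sum is another common choice)
    return sum(token_log_probs) / len(token_log_probs)

print(pseudo_log_likelihood("A person laughed at a bird."))
```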
We believe that experts in respective fields need to inspect the fairness and biases through a systematic, human-in-the-loop approach, including the lens of log-likelihood scores, before adapting them for any downstream tasks. Such humancentered data analysis approaches can help users to assess foundation models' inner workings. Furthermore, interactive data visualization techniques can help users to form and test their hypotheses about underlying models and effectively communicate the results of these models to a wider audience, enabling better collaboration and understanding among stakeholders. Many techniques were developed and applied to inspect the fairness of different machine learning models, as discussed in Section 2. In this work, we propose a visual analytics application called Finspector, a short name for foundation model inspector. Finspector is designed to help users to test the robustness of foundation models and identify biases of various foundation models using interactive visualizations. The system is built as a Python package so that it can be used in the Jupyter environment, which is familiar to our target users–data scientists. The tool consists of multiple, coordinated visualizations, each of which supports a variety of analytic tasks. With foundation models available from repositories such as Hugging Face, users can use Finspector to generate and visually compare the log probability scores on user-provided sentences. In this paper, we introduce the design of Finspector and present a case study of how the tool can be used to inspect the fairness of large language models. ## 2 Background Bias in NLP including large language models has been studied extensively. Garrido-Muñoz et al. provide a survey (Garrido-Muñoz et al., 2021) of existing work on the topic. Benchmarks for detecting bias in models is a key element of this research; StereoSet (Nadeem et al., 2021), CrowSPairs (Nangia et al., 2020), WinoGender (Rudinger et al., 2018), WinoBias (Zhao et al., 2018) are examples of such benchmarks. Tenny et al. presented Language Interpretability Tool (LIT) (Tenney et al., 2020) as a visualization tool for understanding NLP models which includes analyzing gender bias among others. There are several other visualization tools that are focused on analyzing different aspects of transformer-based LLMs such as attention or hidden states such as T 3-Vis (Li et al., 2021), InterperT (Lal et al., 2021), exBERT (Hoover et al., 2020), AllenNLP Interpret (Wallace et al., 2019), SANVis (Park et al., 2019), and BertViz (Vig, 2019). Similarly, BiaScope (Rissaki et al., 2022), is a visualization tool for unfairness diagnosis in graph embeddings by comparing models. The visualizations in these tools are mainly focused on understanding how the attention mechanism works and the impact of different tokens in the input to the model output. There are several other visualization tools that help users investigate the fairness of machine learning models, primarily focusing on aspects such as prediction discrepancy among different subgroups, group fairness, individual fairness, and counterfactual fairness. These include tools such as WhatIf Tool (Wexler et al., 2019), FairVis (Cabrera et al., 2019), Fairsight (Ahn and Lin, 2019), RMExplorer (Kwon et al., 2022a), DASH (Kwon et al., 2022b), ConceptExplainer (Huang et al., 2023) and Silva (Yan et al., 2020). 
Despite their usefulness, they are mainly designed to explore the fairness of predictive models (e.g., image classification), not for pre-trained foundation models. In contrast to these tools above, Finspector aims to inspect the fairness and bias of foundational models by exploring the log-likelihood scores generated by the models. Such scores and their difference are presented with interactive visualizations. ## 3 Design Of Finspector In this section, we describe the design of Finspector. There are three main views of Finspector, 1) Distribution of Log Likelihoods, 2) Table of Sentences, and 3) Sentence Embeddings, and a customization panel on top to set highlights or split distributions by selected categorical variables. Readers can access the code of Finspector at https: //github.com/IBM/finspector. The system requires users to provide three items: 1) text data with paired samples and bias category labels; 2) pre-trained foundation models; 3) 2d sentence embeddings. By default, the system expects text data with labels indicating paired samples (e.g., sample id) and bias categories, similar to the CrowS-Pairs dataset (Nangia et al., 2020). Without bias categories provided, users can still use Finspector but without options to color-code or sliceand-dice the samples by the variables. Any other metadata associated with each sentence can be viewed in the table view. In the current implementation, the system accepts any models trained in selfsupervised, masked language modeling approaches using Pytorch. For instance, users can download models like BERT, ALBERT, and RoBERTa from Hugging Face and use them to run Finspector. Users can optionally provide the 2d representation vectors of sentences. Users can freely choose any dimensionality reduction method to derive meaningful representations that can be visualized for explorative analysis. ## 3.1 Distribution Of Log-Likelihoods This view shows the distribution of aggregated conditional pseudo-log-likelihood scores of the set of input sentences as shown in Figure 1 (B). Following the same approach as previous studies (Kwon and Mihindukulasooriya, 2022; Nangia et al., 2020; Salazar et al., 2020), for each sentence, we calculated the score by iteratively masking one token at a time and taking their mean value. As Figure 1 (B) shows, the view initially shows parallel horizontal axes of foundation models and provides a density chart over each axis, which represents the distribution of log-likelihoods computed by the corresponding model on given text data. It also shows a median and interquartile plot below each density plot. Since log-likelihood scores of the same sentences were computed by different models, the view can turn into parallel coordinates to show the differences in scores. Once users specify a range of log-likelihood scores by setting a filter on an axis of a foundation model, the view shows only the sentences that satisfy the condition, as Figure 2 (B) shows. Furthermore, it shows lines across axes, where each line representing a sentence is displayed as a series of connected points along the axes, representing foundation models. Users can use the view to explore the distributions of subgroups defined by users. First, users can set multiple filters along the corresponding axes to only show sentences that meet the user-defined requirements. Figure 2 (B) shows that a few sentences that fall within the narrow score ranges set on the two axes of BERT and RoBERTa exhibit a significantly wider distribution on the other axis, ALBERT. 
Second, users can summarize the distribution of sentences by categories. Once users select a bias category in the predefine checkboxes of bias categories, as Figure 2 (A) shows, the view shows parallel bands (Kwon et al., 2018) that summarize parallel coordinates using median and interquartile plots along each axis for selected points. Finally, users can also type a new sentence in the text box, thereby creating a new data point for test data, the system feeds it to given foundation models, and then the view shows the distribution of the pseudolog-likelihood scores of the new sentence as a red polyline across the axes, as Figure 2 (D) shows. ## 3.2 Table Of Sentences The table view shows the details of the input sentence data as Figure 1 (C) shows. Users can decide which columns to show by including the field names as a list when calling the Finspector function. As mentioned earlier, the view is tightly connected Figure 2: Description of how Finspector can be used to explore models: (A) the highlight feature highlights ![3_image_0.png](3_image_0.png) corresponding sentences in colors corresponding to bias categories, and the split feature shows box plots of stereotype and non-stereotype sentences; (B) users can set filters on foundation model axes on the distribution view; (C) users select sentences in the sentence embedding view with a lasso selection; (D) users type their own sentences to see their log probabilities inferred by the given foundation models. to other views via interactivity. For one, when users hover their mouse cursor over a single row, the corresponding line appears in the distribution of log-likelihoods and the sentence embedding view. In a reverse manner, when filters are set or removed in the distribution view or the sentence embedding view, the table view also shows only the sentences that satisfy the conditions. When users select a category in the panel, the table highlights the corresponding rows in a respective color as Figure 2 (A) shows. Using the table view, users can read the sentence selected from other views and check the log probability scores of it. ## 3.3 Sentence Embeddings Sentence embeddings show similarities and differences among the input sentences using a scatterplot as shown in Figure 1 (D). Users can choose any dimensionality reduction algorithm (e.g., t-SNE, UMAP, PCA) and any features of sentences to generate embeddings for sentences. Once they plug in the 2-dimensional vectors of sentences, the view can generate a scatter plot. They can interpret the groups of sentences in proximity based on the input data and the algorithm used. Users can also hover over individual sentences which makes the two other views highlight the hovered item. The sentence embeddings view also shows selected sentences when filters are set in the distribution view or a row was hovered in the table view. Users can also lasso-select multiple sentences in proximity so that they are filtered and highlighted in the distribution view and the table view, respectively, as Figure 2 (C) shows. ## 4 Use Cases: Inspecting Foundation Models Using The Paraphrased Crows-Pairs Dataset In this section, we demonstrate how Finspector can be used to discover informative insights about foundation models and datasets. Note that the insights reported here are preliminary hypotheses so should not be taken as proven facts. This section aims to describe how interactive visualizations of Finspector help users to explore the fairness of large language models. 
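Before walking through the use case, note that the 2d coordinates consumed by the sentence embedding view can be produced with any off-the-shelf dimensionality reduction routine. The snippet below is a minimal sketch (not part of Finspector) that projects per-sentence log-likelihood vectors, one score per foundation model, into two dimensions with t-SNE; the array contents and the perplexity value are illustrative assumptions.

```
import numpy as np
from sklearn.manifold import TSNE

# scores[i] = [score under BERT, score under RoBERTa, score under ALBERT] for sentence i
# (illustrative values; in practice these come from the pseudo-log-likelihood scoring step)
scores = np.array([
    [-2.4, -2.2, -3.8],
    [-2.7, -2.3, -2.4],
    [-3.3, -3.4, -6.6],
    [-1.5, -1.8, -2.1],
    [-2.0, -2.2, -2.6],
])

# project the per-model score vectors into 2d for the scatterplot view
tsne = TSNE(n_components=2, perplexity=3, random_state=0)
embedding_2d = tsne.fit_transform(scores)
print(embedding_2d.shape)  # (5, 2)
```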
In this use case, we used the CrowS-Pairs dataset for the analysis (Nangia et al., 2020). To increase the sample size, we generated 10 paraphrased sentences for each given sentence using Pegasus (Zhang et al., 2019) and ensured that there are no duplicates. We then generated the pseudo-log-likelihood measure for each sentence using each of the three foundation models: BERT, ALBERT, and RoBERTa. For the sentence embedding view, we used a dimensionality reduction algorithm, t-SNE (Van der Maaten and Hinton, 2008), to project the log-likelihood score vectors into two dimensions. As Figure 1 shows, we plugged in the model checkpoints downloaded from Hugging Face, the paraphrased CrowS-Pairs dataset, and the 2d representation vectors generated from t-SNE to launch Finspector.

![4_image_0.png](4_image_0.png) Figure 3: The use case shows insights that can be discovered using Finspector: (A) when a filter is applied to the range of -5 to -4 on ALBERT, the parallel coordinates show differences in the distribution of log probabilities; (B) the box plot shows the differences in score between stereotype and non-stereotype sentences by models; (C) & (D) with small changes to the gender, the user-generated sentence results in different log probabilities.

Initially, the three foundation models show similar distributions of log probabilities, but the score of ALBERT is slightly lower than those of the two other models, as shown in Figure 1 (B). In particular, a small bump between -4 and -5 indicates that a considerable number of sentences are deemed unlikely by ALBERT. Overall, the sentence embedding view shows several groups of sentences that share similar score distributions across the three models. We set a filter by drawing a rectangular box between the values of -5 and -4 on ALBERT. Figure 3 (A) shows parallel coordinates of the sentences satisfying this requirement. Notably, the sentences with log probabilities lower than -4 on ALBERT tended to have higher values, around -3.5 to -2.5, on BERT or RoBERTa. The sentence embedding view shows that the selected sentences are clustered at the bottom. We decided to select the bottom region by hovering over sentences near the cluster and found sentences that show similar patterns with wider differences in scores. In the case of one sentence, "I know the whites are correct, I said warmly.", the scores were -3.3, -3.4, and -6.6 for BERT, RoBERTa, and ALBERT, respectively. We decided to explore differences in the log probabilities among bias categories using the split and highlight features of Finspector. The distribution view shows differences in the score of ALBERT between a pair of base and stereotype sentences of the disability category, as Figure 3 (B) shows. The stereotypical sentences appear more likely than the base sentences. For instance, the log probability of "A person with mental illness laughed at a bird" by ALBERT is -2.4. On the other hand, "A person laughed at a bird" by ALBERT is -3.8. However, the other models did not show similar trends: -2.7 (BERT), -2.3 (RoBERTa) for the stereotypical sentence, and -2.4 (BERT), -2.2 (RoBERTa) for the base sentence. We then switched gears to look at general differences among categories. The parallel band view shows that the log probabilities of sentences in the age category tend to be higher than those in the race-color category. Lastly, we decided to generate some customized sentences.
We wrote a gender-stereotype sentence, "Housekeeping and childcare are women's responsibility.", and its counterpart, "Housekeeping and childcare are men's responsibility." The log probability scores by RoBERTa show some differences, as the score of the stereotypical sentence, -1.5, is higher than that of the counterpart, -2.0. We also wrote another gender-stereotype sentence, "Jane is the most probable candidate for the CEO position because of her leadership, credibility, and humility.", and compared it to its counterpart, "John is the most probable candidate for the CEO position because of his leadership, credibility, and humility." As Figure 3 (C) & (D) show, the sentence including the gender stereotype appears to receive higher log probabilities than its counterpart across all three foundation models.

## 5 Discussion

In this paper, we introduce an interactive visualization system called Finspector, which helps users explore pre-trained large language models by evaluating the pseudo-log-likelihood measures computed on text data. Adopting the workflow from our previous approach (Kwon and Mihindukulasooriya, 2022; Nangia et al., 2020), the system allows users to inspect the bias and fairness of given models applied to sentences that manifest signs of stereotypes. Finspector is developed for interactive computing environments like Jupyter to help users continually evaluate models while improving their effectiveness and fairness before deploying them in practice. This paper describes the views and features of Finspector designed to accomplish these goals, which can be useful for researchers and designers developing similar systems in the future.

Our human-centered approach to the fairness inspection of LLMs opens new avenues for interdisciplinary research between AI, Visualization, and other fields. One future research area is to build interactive visualization systems that help users evaluate the impact of biases in foundation models on various downstream tasks. Numerous large language models undergo fine-tuning or prompt-tuning for downstream tasks such as text classification, entity recognition, and language translation. Latent fairness and bias issues in language models can propagate through the pipeline, so fine-tuning or prompt-tuning the foundation models may produce undesirable outcomes. Therefore, researchers need to examine the relationship between bias and fairness in base models and the performance outcomes of fine-tuning or prompt-tuning these models on specific tasks. Interactive visualizations can be developed for researchers to conduct systematic evaluations of the associations between bias and performance. Another line of future work can investigate the robustness of pseudo-log-likelihood scoring as a bias measure for foundation models adapted to various tasks. We consistently discover cases where foundation models behave problematically on sentences that contain stereotypical characteristics of one category (e.g., black) versus another (e.g., white). One key direction is to identify new ways to improve the robustness of log-likelihood scoring as a bias measure for foundation models. It is also important to collect benchmark datasets containing stereotype sentence pairs in a systematic manner. Ultimately, such investigations will help us develop an evaluation metric that can be widely used before fine-tuning models and deploying them for downstream tasks.
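To make the measure under discussion concrete, the following is a minimal sketch of pseudo-log-likelihood scoring in the spirit of Salazar et al. (2020), assuming the Hugging Face transformers library; the checkpoint name, per-token averaging, and the lack of batching are illustrative assumptions and not necessarily the exact scoring used in Finspector.

```python
# Minimal pseudo-log-likelihood (PLL) sketch for a masked language model:
# mask each token in turn and accumulate the log probability the model
# assigns to the original token at the masked position.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    positions = range(1, input_ids.size(0) - 1)  # skip [CLS] and [SEP]
    total = 0.0
    for i in positions:
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[input_ids[i]].item()
    # Averaged per token here; summing over tokens, as in some prior work,
    # is a common alternative normalization choice.
    return total / len(positions)

print(pseudo_log_likelihood("A person laughed at a bird."))
```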
In this work, we focused on language models pre-trained using masked-language modeling objectives, *i.e.,* mainly encoder-only models such as BERT, RoBERTa, and ALBERT, which can be used to generate conditional pseudo-log-likelihood measures. There are two other families of language models. First, decoder-only autoregressive models, such as GPT, are pre-trained by predicting the subsequent word in a sequence based on the preceding words or employing the next-sentence-prediction approach (Radford et al., 2018). Second, there are encoder-decoder or sequence-to-sequence models such as BART (Lewis et al., 2020) or T5 (Raffel et al., 2020). Finspector is generalizable to these different types of architectures given that a metric can be formulated to measure the likelihood of a given sentence in the language model. For instance, for GPT-like language models, (Salazar et al., 2020) use log probability score. In the future, we plan to incorporate foundation models from other families, including decoder-only and encoder-decoder models, into Finspector. To inspect such models in the current Finspector framework, users need to develop ways to generate a log-likelihood-equivalent measure per sentence or we can adapt the visualization framework to fit the next-sentence-prediction models and evaluate their biases in different ways. As part of our future research, we plan to investigate various visual analytics approaches for inspecting the fairness and biases in models pre-trained using various modeling objectives and architecture. ## 6 Impact Statement Our tool is designed to help users evaluate the fairness and biases of foundation models or large language models. Such a tool can help researchers and practitioners visually investigate biases in large language models for further discussion and remedy. Presentation of Finspector can facilitate discussion of human-centered approaches to detecting and resolving fairness issues in various large language models. However, readers should also note that there is no guarantee to discover all biases or fairness issues by using the tool. We hope that the design of the tool described in the paper can inspire future technologies that can help evaluate the bias and fairness of foundation models. ## Acknowledgements We express our gratitude to Rebekah and Miriam for initiating the discussions among the co-authors, which ultimately paved the way for this collaborative project. ## References Yongsu Ahn and Yu-Ru Lin. 2019. Fairsight: Visual analytics for fairness in decision making. *IEEE Transactions on Visualization and Computer Graphics*, 26(1):1086–1095. Ángel Alexander Cabrera, Will Epperson, Fred Hohman, Minsuk Kahng, Jamie Morgenstern, and Duen Horng Chau. 2019. FairVis: Visual analytics for discovering intersectional bias in machine learning. In *IEEE Conference on Visual Analytics Science* and Technology (VAST), pages 46–56. IEEE. Ismael Garrido-Muñoz, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L Alfonso UreñaLópez. 2021. A survey on bias in deep nlp. *Applied* Sciences, 11(7):3184. Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2020. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 187–196, Online. Association for Computational Linguistics. Jinbin Huang, Aditi Mishra, Bum Chul Kwon, and Chris Bryan. 2023. 
ConceptExplainer: Interactive explanation for deep neural networks from a concept perspective. *IEEE Transactions on Visualization and* Computer Graphics, 29(1):831–841. Bum Chul Kwon, Ben Eysenbach, Janu Verma, Kenney Ng, Christopher De Filippi, Walter F Stewart, and Adam Perer. 2018. Clustervision: Visual supervision of unsupervised clustering. *IEEE Transactions on Visualization and Computer Graphics*, 24(1):142–151. Bum Chul Kwon, Uri Kartoun, Shaan Khurshid, Mikhail Yurochkin, Subha Maity, Deanna G Brockman, Amit V Khera, Patrick T Ellinor, Steven A Lubitz, and Kenney Ng. 2022a. RMExplorer: A visual analytics approach to explore the performance and the fairness of disease risk models on population subgroups. In *2022 IEEE Visualization and Visual* Analytics (VIS), pages 50–54. Bum Chul Kwon, Jungsoo Lee, Chaeyeon Chung, Nyoungwoo Lee, Ho-Jin Choi, and Jaegul Choo. 2022b. DASH: Visual Analytics for Debiasing Image Classification via User-Driven Synthetic Data Augmentation. In *EuroVis 2022 - Short Papers*. The Eurographics Association. Bum Chul Kwon and Nandana Mihindukulasooriya. 2022. An empirical study on pseudo-log-likelihood bias measures for masked language models using paraphrased sentences. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 74–79. Vasudev Lal, Arden Ma, Estelle Aflalo, Phillip Howard, Ana Simoes, Daniel Korat, Oren Pereg, Gadi Singer, and Moshe Wasserblat. 2021. InterpreT: An interactive visualization tool for interpreting transformers. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 135–142, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Raymond Li, Wen Xiao, Lanjun Wang, Hyeju Jang, and Giuseppe Carenini. 2021. T3-vis: visual analytic for training and fine-tuning transformers in NLP. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 220–230, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2021. Towards understanding and mitigating social biases in language models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 6565–6576. PMLR. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pages 1953–1967, Online. 
Association for Computational Linguistics. Cheonbok Park, Inyoup Na, Yongjang Jo, Sungbok Shin, Jaehyo Yoo, Bum Chul Kwon, Jian Zhao, Hyungjong Noh, Yeonsoo Lee, and Jaegul Choo. 2019. SANVis: Visual analytics for understanding self-attention networks. In *IEEE Visualization Conference (VIS)*, pages 146–150. IEEE. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21(1):5485–5551. Agapi Rissaki, Bruno Scarone, David Liu, Aditeya Pandey, Brennan Klein, Tina Eliassi-Rad, and Michelle A Borkin. 2022. BiaScope: Visual unfairness diagnosis for graph embeddings. In *IEEE* Visualization in Data Science (VDS), pages 27–36. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In *Proceedings of the 2018* Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8–14, New Orleans, Louisiana. Association for Computational Linguistics. Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pages 2699–2712, Online. Association for Computational Linguistics. David J Schneider. 2005. *The psychology of stereotyping*. Guilford Press. Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 5248–5264, Online. Association for Computational Linguistics. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In *Proceedings of the* 57th Annual Meeting of the Association for Computational Linguistics, pages 1630–1640, Florence, Italy. Association for Computational Linguistics. Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. 2020. The language interpretability tool: Extensible, interactive visualizations and analysis for NLP models. *EMNLP 2020 Demos*, 15. Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11). Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 37–42, Florence, Italy. Association for Computational Linguistics. Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, and Sameer Singh. 2019. AllenNLP interpret: A framework for explaining predictions of NLP models. In *Proceedings of the* 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 7–12, Hong Kong, China. Association for Computational Linguistics. 
Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and social risks of harm from language models. *arXiv preprint arXiv:2112.04359*. James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, and Jimbo Wilson. 2019. The what-if tool: Interactive probing of machine learning models. IEEE Transactions on Visualization and Computer Graphics, 26(1):56–65. Jing Nathan Yan, Ziwei Gu, Hubert Lin, and Jeffrey M Rzeszotarski. 2020. Silva: Interactively assessing machine learning fairness using causality. In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, pages 1–13. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In *Proceedings of the 2018 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics.
sil-etal-2023-primeqa
{P}rime{QA}: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development
https://aclanthology.org/2023.acl-demo.5
The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers. In this paper, we introduce PrimeQA: a one-stop and open-source QA repository with an aim to democratize QA research and facilitate easy replication of state-of-the-art (SOTA) QA methods. PrimeQA supports core QA functionalities like retrieval and reading comprehension as well as auxiliary capabilities such as question generation. It has been designed as an end-to-end toolkit for various use cases: building front-end applications, replicating SOTA methods on public benchmarks, and expanding pre-existing methods. PrimeQA is available at: \url{https://github.com/primeqa}.
# Prime**Qa: The Prime Repository For State-Of-The-Art Multilingual** Question Answering Research And Development Avirup Sil∗ , Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Juergen Bross, Radu Florian, Salim Roukos IBM Research AI ## Abstract The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers. In this paper, we introduce PRIMEQA: a one-stop and open-source QA repository with an aim to democratize QA research and facilitate easy replication of stateof-the-art (SOTA) QA methods. PRIMEQA supports core QA functionalities like retrieval and reading comprehension as well as auxiliary capabilities such as question generation. It has been designed as an end-to-end toolkit for various use cases: building front-end applications, replicating SOTA methods on public benchmarks, and expanding pre-existing methods. PRIMEQA is available at: https: //github.com/primeqa. ## 1 Introduction Question Answering (QA) is a major area of investigation in Natural Language Processing (NLP), consisting primarily of two subtasks: information retrieval (IR) (Manning, 2008; Schütze et al., 2008) and machine reading comprehension (MRC) (Rajpurkar et al., 2016, 2018; Kwiatkowski et al., 2019a; Chakravarti et al., 2020). IR and MRC systems, also referred to as *retrievers* and *readers*, respectively, are commonly assembled in an end-to-end open-retrieval QA pipeline (OpenQA henceforth), which accepts a query and a large document collection as its input and provides an answer as output (Chen et al., 2017; Lee et al., 2019; Karpukhin et al., 2020; Santhanam et al., 2022b). The retriever first identifies documents or passages (i.e., contexts) that contain information relevant to the query, from which the reader then extracts a precise answer. Alternatively, the reader can also ∗Corresponding author: avi@us.ibm.com be generative and leverage large language models (LLMs) (Ouyang et al., 2022; Chung et al., 2022). Despite rapid progress in QA research, software to perform and replicate QA experiments have mostly been written in silos. At the time of this writing, there is no central repository that facilitates the training, analysis and augmentation of state-of-the-art (SOTA) models for different QA tasks at scale. In view of the above, and with an aim to democratize QA research by providing easy replicability, here we present PRIMEQA: an open-source repository1 designed as an end-to-end toolkit. It offers all the necessary tools to easily and quickly create a custom QA application. We provide a main repository that contains easy-to-use scripts for retrieval, machine reading comprehension, and question generation with the ability to perform training, inference, and performance evaluation. Additionally, several sibling repositories offer features for easily connecting various retrievers and readers, as well as for creating a front-end user interface (UI). PRIMEQA has been designed as a platform for QA development and research, and encourages collaboration from all members of the QA community—from beginners to experts. PRIMEQA has a growing developer base with contributions from major academic institutions. 
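As a concrete illustration of the retriever-reader pattern described above, the following minimal sketch chains a trivial lexical retriever with an extractive reader built on the Transformers library. The toy corpus, the scoring function, and the checkpoint name are illustrative assumptions and this is not PrimeQA's own API.

```python
# A deliberately simplified retriever-reader sketch (not PrimeQA's API):
# a toy lexical retriever picks the most relevant passage, and an
# extractive reader built on Transformers pulls out the answer span.
from transformers import pipeline

corpus = [
    "PrimeQA is an open-source repository for question answering research.",
    "The Eiffel Tower is located in Paris and was completed in 1889.",
]

def retrieve(question: str, passages: list) -> str:
    # Toy scoring: count overlapping lowercase words between question and passage.
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

# Any extractive QA checkpoint from the Hugging Face Hub can serve as the reader.
reader = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

question = "Where is the Eiffel Tower located?"
context = retrieve(question, corpus)
print(reader(question=question, context=context))  # e.g., {'answer': 'Paris', ...}
```

In an OpenQA system such as PrimeQA, the same pattern is realized with far stronger components, for example a BM25, DPR, or ColBERT retriever in place of the toy lexical scorer, and trained extractive or generative readers in place of the off-the-shelf pipeline.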
The following is a summary of our contributions: - We present PRIMEQA, a first-of-its-kind repository for comprehensive QA research. It is free to use, well-documented, easy to contribute to, and license-friendly (Apache 2.0) for both academic and commercial usage. - PRIMEQA contains *easy-to-use* implementations of SOTA retrievers and readers that are at the top of major QA leaderboards, with capabilities to perform training, inference and performance evaluation of these models. - PRIMEQA provides a mechanism via accompanying repositories to create custom 1https://github.com/primeqa OpenQA applications for industrial deployment, including a front-end UI. - PRIMEQA models are built on top of Transformers (Wolf et al., 2020) and are available on the Hugging Face Model Hub.2 - PRIMEQA has readers that can leverage SOTA LLMs such as InstructGPT (Ouyang et al., 2022) via external APIs. ## 2 Related Work One of the largest community efforts for NLP software is Papers with Code (Robert and Thomas, 2022). Their mission is to create a free and open resource for NLP papers, code, datasets, methods and evaluation tables catering to the wider NLP and Machine Learning community and not just QA. Even though the QA section includes over 1800 papers with their code, the underlying software components are written in various versions of both PyTorch and TensorFlow with no central control whatsoever and they do not communicate with each other. These disjoint QA resources hinder replicability and effective collaboration, and ultimately lead to quick "sunsetting" of new capabilities. Recently, Transformers (Wolf et al., 2020) has become one of the most popular repositories among NLP users. However, while being widely adopted by the community, it lacks a distinct focus on QA. Unlike PRIMEQA, it only supports one general script for extractive QA and several stand-alone Python scripts for retrievers. Similarly FairSeq (Ott et al., 2019) and AllenNLP (Gardner et al., 2018) also focus on a wide array of generic NLP tasks and hence do not solely present a QA repository. They do not support plug-and-play components for users custom search applications. Several toolkits exist that cater to building customer-specific search applications (NVDIA, 2022; Deepset, 2021) or search-based virtual assistants (IBM, 2020). However, while they have a good foundation for software deployment, unlike PRIMEQA, they lack the focus on replicating (and extending) the latest SOTA in QA research on public benchmarks which is an essential component needed to make rapid progress in the field. ## 3 P**Rime**Qa PRIMEQA is a comprehensive open-source resource for cutting-edge QA research and development, governed by the following design principles: 2https://huggingface.co/PrimeQA Core Models Extensions Retriever BM25 (Robertson and Zaragoza, 2009) Dr.DECR * (Li et al., 2022) DPR (Karpukhin et al., 2020) ColBERT (Santhanam et al., 2022b) Reader General MRC* (Alberti et al., 2019b) ReasonBERT (2021) FiD (Izacard and Grave, 2020) OmniTab (Jiang et al., 2022a) Boolean* (McCarley et al., 2023) MITQA* (Kumar et al., 2021) Lists Tapas(Herzig et al., 2020a) Tapex (Liu et al., 2021) Question Generation Table QG (Chemmengath et al., 2021) Passage QG Table+Passage QG Table 1: A non-exhaustive list of core PRIMEQA models for the three main supported tasks (left) and their various extensions (right) available on our Hugging Face model hub: https://huggingface.co/PrimeQA. * SOTA leaderboard systems. 
- **Reproducible:** Users can reproduce results reported in publications and extend those approaches with PRIMEQA reader or retriever components to perform an end-to-end QA task. The PRIMEQA components are listed in Table 1. - **Customizable:** We allow users to customize and extend SOTA models for their own applications. This often entails fine-tuning on users custom data. - **Reusable:** We aim to make it straightforward for developers to quickly deploy pre-trained off-theshelf PRIMEQA models for their QA applications, requiring minimal code change. - **Accessible:** We provide easy integration with Hugging Face Datasets and the Model Hub, allowing users to quickly plug in a range of datasets and models as shown in Table 1. PRIMEQA in its entirety is a collection of four different repositories: a primary *research and replicability*3repository and three accompanying repositories4,5,6for industrial deployment. Figure 1 shows a diagram of the PrimeQA repository. It provides several entry points, supporting the needs of different users, as shown at the top of the figure. The repository is centered around three core components: a **retriever**, a **reader**, and a **question generator** for data augmentation. These components can be used as individual modules or assembled 3https://github.com/primeqa/primeqa 4https://github.com/primeqa/create-primeqa-app 5https://github.com/primeqa/primeqa-orchestrator 6https://github.com/primeqa/primeqa-ui ![2_image_0.png](2_image_0.png) into an end-to-end QA pipeline. All components are implemented on top of existing AI libraries. ## 3.1 The Core Components Each of the three core PRIMEQA components supports different flavors of its corresponding task, as we detail in this section. ## 3.1.1 Retriever: Run_Ir.Py Retrievers predict documents (or passages) from a document collection that are relevant to an input question. PRIMEQA has both sparse and SOTA dense retrievers along with their extensions, as shown in Table 1. We provide a single Python script run_ir.py that can be passed arguments to switch between different retriever algorithms. Sparse: BM25 (Robertson and Zaragoza, 2009) is one of the most popular sparse retrieval methods, thanks to its simplicity, efficiency and robustness. Our Python-based implementation of BM25 is powered by the open-source library PySerini. Dense: Modern neural retrievers have utilized dense question and passage representations to achieve SOTA performance on various benchmarks, while needing GPUs for efficiency. We currently support ColBERT (Santhanam et al., 2022b) and DPR (Karpukhin et al., 2020): both fine-tune pretrained language models to train question and passage encoders (Devlin et al., 2019; Conneau et al., 2020). They utilize FAISS (Johnson et al., 2017) for K-nearest neighbor clustering and compressed index representations, respectively. They support multilingual retrieval with the question and the documents being in the same (Lee et al., 2019; Longpre et al., 2021) or different languages (cross-lingual) (Asai et al., 2021). ## 3.1.2 Reader: Run_Mrc.Py Given a question and a retrieved passage—also called the *context*—a reader predicts an answer that is either extracted directly from the context or is generated based on it. PRIMEQA supports training and inference of both extractive and generative readers through a single Python script: run_mrc.py. It works out-of-the-box with different QA models extended from the Transformers library (Wolf et al., 2020). 
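Since the readers build on Transformers-style QA models, the following minimal sketch shows the underlying span-prediction step that extractive readers of this kind rely on. The checkpoint name and the naive argmax decoding are illustrative assumptions rather than PrimeQA's actual implementation.

```python
# Sketch of the span-extraction mechanism used by extractive readers:
# the model scores every token as a potential answer start and end,
# and the answer is the text between the best start and end positions.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "distilbert-base-cased-distilled-squad"  # any extractive QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "What does the reader predict?"
context = "The reader predicts the start and end of the answer span from the input context."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Naive argmax decoding; production systems additionally restrict spans to the
# context, enforce start <= end, and limit the maximum answer length.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer_ids = inputs["input_ids"][0][start : end + 1]
print(tokenizer.decode(answer_ids))
```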
Extractive: PRIMEQA's general extractive reader is a pointer network that predicts the start and end of the answer span from the input context (Devlin et al., 2019; Alberti et al., 2019b). It can be initialized with most large pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020). In addition, our reader is extremely versatile as it can provide responses to questions with list answers (Khashabi et al., 2021), *yes/no* responses to Boolean questions (Clark et al., 2019, 2020a; Kwiatkowski et al., 2019b), answer spans found in tables (Herzig et al., 2020b) and in multimodal (text+image) documents (Mathew et al., 2021). Examples of several extractive readers along with their extensions are provided in Table 1. Generative: PRIMEQA provides generative readers based on the popular Fusion-in-Decoder (FiD) (Izacard and Grave, 2020) algorithm. Currently, it supports easy initialization with large pre-trained sequence-to-sequence models (Lewis et al., 2019; Raffel et al., 2022). With FiD, the question and the retrieved passages are used to generate relatively long and complex multi-sentence answers providing support for long form question answering tasks, e.g., ELI5 (Petroni et al., 2021; Fan et al., 2019). ## 3.1.3 Question Generation: Run_Qg.Py Data augmentation through synthetic question generation (QG) helps in generalization of QA models (Alberti et al., 2019a; Sultan et al., 2020), especially when labeled data is not available for the target domain. It can be applied in a variety of settings, including domain adaptation (Shakeri et al., 2021; Gangi Reddy et al., 2021, 2022), domain generalization (Sultan et al., 2022) and few-shot learning (Yue et al., 2022). PRIMEQA's QG component (Chemmengath et al., 2021) is based on SOTA sequence-to-sequence generation architectures (Raffel et al., 2022), and supports both unstructured and structured input text through a single Python script run_qg.py. Unstructured Input: Our first variant of QG is a multilingual text-to-text model capable of generating questions in the language of the input passage. It fine-tunes a pre-trained T5 language model (Raffel et al., 2022) on publicly available multilingual QA data (Clark et al., 2020b). Structured Input: Our second variant learns QG over tables by fine-tuning T5 (Raffel et al., 2022) to generate natural language queries using the Table QA dataset (Zhong et al., 2017a). Given a table, PRIMEQA uses a controllable SQL sampler to obtain SQL queries and then applies the trained table QG model to generate natural language questions. Semi-structured Input: PRIMEQA also supports QG over tables and text by fine-tuning T5 (Raffel et al., 2022) to generate natural language queries from table+text context. The training data is obtained using the publicly available HybridQA dataset (Chen et al., 2020). ## 3.2 Entry Points We cater to different user groups in the QA community by providing different entry points to PRIMEQA, as shown in Figure 1. - **Top-level Scripts:** Researchers can use the top level scripts, run_{ir/mrc/qg}.py, to reproduce published results and train, fine-tune and evaluate associated models on their own custom data. - **Jupyter Notebooks:** These demonstrate how to use built-in classes to run the different PRIMEQA components and perform the corresponding tasks. They are useful for developers and researchers who want to reuse and extend PRIMEQA functionalities. 
- **Inference APIs:** The Inference APIs are primarily meant for developers, allowing them to use PRIMEQA components on their own data with only a few lines of code. These APIs can be initialized with the pre-trained PRIMEQA models provided in the HuggingFace hub, or with a custom model that has been trained for a specific use case. - **Service Layer:** The service layer helps developers set up an end-to-end QA system quickly by providing a wrapper around the core components that exposes an endpoint and an API. - UI: The UI is for end-users, including the nontechnical layman who wants to use PRIMEQA services interactively to ask questions and get answers. ## 3.3 Pipelines For Openqa PRIMEQA users can build an OpenQA *pipeline* and configure it to use any of the PRIMEQA retrievers and readers in a plug-and-play fashion. This is facilitated through a lightweight wrapper built around each core component, which implements the inference API (one of the PRIMEQA entry points). An example of such a pipeline can be connecting a ColBERT retriever to a generative reader based on LLMs such as those in the GPT series (Brown et al., 2020; Ouyang et al., 2022) or FLAN-T5 (Chung et al., 2022), providing retrieval-augmented generative QA capabilities. The retriever in this setting can provide relevant passages that can constitute part of the prompt for the LLM; this encourages answer generation grounded in those retrieved passages, reducing hallucination. Other pipelines can also be instantiated to use different retrievers (e.g., DPR, BM25) and readers (e.g., extractive, FiD) that are available through our model hub. ## 4 Services And Deployment Industrial deployment often necessitates running complex models and processes at scale. We use Docker to package these components into microservices that interact with each other and can be ported to servers with different hardware capabilities (e.g. GPUs, CPUs, memory). The use of Docker makes the addition, replacement or deletion of services easy and scalable. All components in the PRIMEQA repository are available via REST and/or gRPC micro-services. Our Docker containers are available on the public DockerHub and can be deployed using technologies such as OpenShift ![4_image_0.png](4_image_0.png) and Kubernetes. In addition to the main PrimeQA repository, we provide three sibling repositories for application deployment: primeqa-ui is the front-end UI. Users can personalize this by adding custom organization logos or changing display fonts. primeqa-orchestrator is a REST server and is the central hub for the integration of PRIMEQA services and external components and the execution of a pipeline. For instance, the orchestrator can be configured to search a document collection with either a retriever from PrimeQA such as ColBERT, or an external search engine such as Watson Discovery.7 create-primeqa-app provides the scripts to launch the demo application by starting the orchestrator and UI services. Figure 2 illustrates how to deploy a QA application at scale using the core PrimeQA services (e.g. Reader and Retriever) and our three sibling repositories. We provide this end-to-end deployment for our demo, however users can also utilize PrimeQA as an application with their own orchestrator or UI. Figure 3 shows an OpenQA demo application built with the PRIMEQA components. In addition to providing answers with evidence, our demo application features a mechanism to collect user 7https://www.ibm.com/cloud/watson-discovery feedback. 
The *thumbs up / down* icons next to each result enables a user to record feedback which is then stored in a database. The user feedback data can be retrieved and used as additional training data to further improve a retriever and reader model. ## 5 Community Contributions While being relatively new, PRIMEQA has already garnered positive attention from the QA community and is receiving constant successful contributions from both international academia and industry via Github pull requests. We describe some instances here and encourage further contributions from all in the community. We provide support for those interested in contributing through a dedicated slack channel 8, Github issues and PR reviews. Neural Retrievers: ColBERT, one of our core neural retrievers, was contributed by Stanford NLP. Since PRIMEQA provides very easy entry points into its core library, they were able to integrate their software into the retriever script run_ir.py independently. Their contribution to PRIMEQA provides SOTA performance on OpenQA benchmarks by performing 'late interaction' search on a variety of datasets. They also contributed ColBERTv2 (Santhanam et al., 2022b) and its PLAID (Santhanam et al., 2022a) variant. The former reduces ColBERT index size by 10x while the latter makes search faster by almost 7x on GPUs. 8https://ibm.biz/pqa-slack ![5_image_0.png](5_image_0.png) Few-shot Learning: The SunLab from Ohio State University added few-shot learning capabilities within PRIMEQA . Their contribution, ReasonBERT (Deng et al., 2021), provides a pretrained methodology that augments language models with the ability to reason over long-range relations. Under the few-shot setting, ReasonBERT in PRIMEQA substantially outperforms RoBERTa (Liu et al., 2019)-based QA systems. PRIMEQA gives any researcher or developer the capability to easily integrate this component in their custom search application e.g. a DPR retriever and a ReasonBERT reader. Table Readers: Beihang University and Microsoft Research Asia contributed Tapex (Liu et al., 2021) as the first generative Table reader in PRIMEQA. Tapex proposes a novel table pre-training strategy based on a neural SQL executor and achieves SOTA on Wiki-SQL (Zhong et al., 2017a) and Wiki-TableQuestions (Pasupat and Liang, 2015a). They utilize the Transformers (Wolf et al., 2020) sequence-to-sequence trainer for seamless integration into PRIMEQA. LTI CMU's NeuLab contributed OmniTab (Jiang et al., 2022b), which employs an efficient pre-training strategy leveraging both real and synthetic data. This integration happened organically as OmniTab builds on top of Tapex in PRIMEQA. Currently, their model yields the best few-shot performance on WikiTableQuestions, making it also an appropriate candidate system under domain shift. Custom Search for Earth Science: NASA researchers created a custom search application for scientific abstracts and papers related to Earth Science which received global attention9. First, using the top level scripts in PRIMEQA, they trained an OpenQA system on over 100k abstracts by training a ColBERT retriever and an extractive reader. Then, they were able to quickly deploy the search application using the create-primeqa-app and make it available publicly10. ## 6 Conclusion PRIMEQA is an open-source Question Answering library designed by researchers and developers to easily facilitate reproduciblity and reusability of existing and future work in QA. 
This is an important contribution to the community, as it provides researchers and end users with easy access to state-ofthe-art algorithms in the rapidly progressing field of QA. PRIMEQA also provides off-the-shelf models that developers can directly deploy for their custom QA applications. PRIMEQA is built on top of the largest open-source NLP libraries and tools, can incorporate LLMs through external APIs, and provides simple Python scripts as entry points for easy reuse of its core components. This straightforward access and high reusability has already garnered significant traction in the community, enabling PRIMEQA to grow organically as an important resource for rapid progress in QA. ## Ethics And Broader Impact 6.1 Broader Impact PRIMEQA is developed as a one-stop and opensource QA repository with an aim to democratize QA research by facilitating easy replication and extension of state of the art methods in multilingual question answering and developments. QA is moving fast with the launch of state-of-the-art (SOTA) retrievers, readers and multi-modal QA models. However, there are two key hurdles which slow the adoption of the SOTA models by the community, which are (1) hard to reproduce for researchers and (2) involves a learning curve for developers to use in custom applications. PRIMEQA solves both the problems by providing multiple access points for different user groups for their easy adoption. PRIMEQA is licensed under Apache 2.0 and thus open to use in both academia and industry. Therefore, PRIMEQA can have an impact on the whole NLP community or more broadly any user working on NLP applications. ## 6.2 Ethical Considerations The models available in PRIMEQA might inherit bias based on available training data in public domain. Such bias, if any, is in general present in the models contributed to PRIMEQA and not specific to PrimeQA. Therefore, the usage of PRIMEQA should be approached with the same caution as with any NLP model. PRIMEQA supports easy access for researchers and developers to use state-of-the-art models and even customize them on their own data. However, PRIMEQA does not control the type of data the model will be exposed to in a custom environment. The general assumption is that these models will be used for rightful purposes in good faith. ## Acknowledgments We thank Lysandre Debut, Thomas Wolf and the Hugging Face team for their close collaboration with the PRIMEQA team to facilitate building PRIMEQA on top of the Hugging Face libraries and datasets. We also thank Chris Potts, Omar Khattab, Jon Saad-Falcon and Keshav Santhanam from Stanford University, Xiang Deng and Huan Sun from Ohio State University, Zhengbao Jiang from Carnegie Mellon University, Qian Liu from Beihang University, and Maximilian Schmidt from University of Stuttgart for contributing implementations of their state-of-the-art systems to PRIMEQA. ## References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019a. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168–6173, Florence, Italy. Association for Computational Linguistics. Chris Alberti, Kenton Lee, and Michael Collins. 2019b. A bert baseline for the natural questions. *arXiv* preprint arXiv:1901.08634. Akari Asai, Jungo Kasai, Jonathan Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021. XOR QA: Cross-lingual open-retrieval question answering. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 547–564, Online. Association for Computational Linguistics. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. *CoRR*, abs/2005.14165. Rishav Chakravarti, Anthony Ferritto, Bhavani Iyer, Lin Pan, Radu Florian, Salim Roukos, and Avirup Sil. 2020. Towards building a robust industry-scale question answering system. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 90–101. Saneem Chemmengath, Vishwajeet Kumar, Samarth Bharadwaj, Jaydeep Sen, Mustafa Canim, Soumen Chakrabarti, Alfio Gliozzo, and Karthik Sankaranarayanan. 2021. Topic transferable table question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4159–4172, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer opendomain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870– 1879, Vancouver, Canada. Association for Computational Linguistics. Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Wang. 2020. Hybridqa: A dataset of multi-hop question answering over tabular and textual data. *Findings of EMNLP* 2020. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instructionfinetuned language models. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics. Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020a. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. *Transactions of the* Association for Computational Linguistics, 8:454– 470. Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020b. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. 
*Transactions of the Association for Computational Linguistics*, 8:454–470. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. ## Deepset. 2021. Haystack. Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, and Huan Sun. 2021. ReasonBERT: Pre-trained to reason with distant supervision. In *Proceedings of* the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6112–6127, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference* of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In *Proceedings of* the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Revanth Gangi Reddy, Bhavani Iyer, Md Arafat Sultan, Rong Zhang, Avirup Sil, Vittorio Castelli, Radu Florian, and Salim Roukos. 2021. Synthetic target domain supervision for open retrieval qa. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1793–1797. Revanth Gangi Reddy, Vikas Yadav, Md Arafat Sultan, Martin Franz, Vittorio Castelli, Heng Ji, and Avirup Sil. 2022. Towards robust neural retrieval with source domain synthetic pre-finetuning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1065–1070, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. *arXiv preprint arXiv:1803.07640*. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Mueller, Francesco Piccinno, and Julian Eisenschlos. 2020a. Tapas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4320–4333. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020b. TaPas: Weakly supervised table parsing via pre-training. In *Proceedings of the 58th Annual* Meeting of the Association for Computational Linguistics, pages 4320–4333, Online. Association for Computational Linguistics. ## Ibm. 2020. Watson Assistant. Mohit Iyyer, Scott Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In *Proceedings of* the 55th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. 
In *Conference of the European Chapter of the Association for Computational* Linguistics. Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, and Weizhu Chen. 2022a. Omnitab: Pretraining with natural and synthetic data for few-shot tablebased question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 932–942. Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, and Weizhu Chen. 2022b. OmniTab: Pretraining with natural and synthetic data for few-shot table-based question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 932– 942, Seattle, United States. Association for Computational Linguistics. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with gpus. *CoRR*, abs/1702.08734. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769– 6781, Online. Association for Computational Linguistics. Daniel Khashabi, Amos Ng, Tushar Khot, Ashish Sabharwal, Hannaneh Hajishirzi, and Chris CallisonBurch. 2021. GooAQ: Open question answering with diverse answer types. In *Findings of the Association for Computational Linguistics: EMNLP 2021*, pages 421–433, Punta Cana, Dominican Republic. Association for Computational Linguistics. Vishwajeet Kumar, Saneem Chemmengath, Yash Gupta, Jaydeep Sen, Samarth Bharadwaj, and Soumen Chakrabarti. 2021. Multi-instance training for question answering across table and linked text. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019a. Natural questions: A benchmark for question answering research. *Transactions of the Association* for Computational Linguistics, 7:452–466. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019b. Natural questions: a benchmark for question answering research. *Transactions of the Association of Computational Linguistics*. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019c. Natural questions: a benchmark for question answering research. *Transactions of the Association of Computational Linguistics*. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. 
Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In *Annual Meeting of the Association for Computational Linguistics*. Yulong Li, Martin Franz, Md Arafat Sultan, Bhavani Iyer, Young-Suk Lee, and Avirup Sil. 2022. Learning cross-lingual IR from an English retriever. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4428–4436, Seattle, United States. Association for Computational Linguistics. Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2021. Tapex: Table pre-training via learning a neural sql executor. In International Conference on Learning Representations. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Shayne Longpre, Yi Lu, and Joachim Daiber. 2021. Mkqa: A linguistically diverse benchmark for multilingual open domain question answering. *Transactions of the Association for Computational Linguistics*, 9:1389–1406. Christopher D Manning. 2008. *Introduction to information retrieval*. Syngress Publishing,. Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. 2021. Docvqa: A dataset for vqa on document images. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pages 2200–2209. Scott McCarley, Mihaela Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, and Radu Florian. 2023. Gaama 2.0: An integrated system that answers boolean and extractive question. In AAAI 2023. NVDIA. 2022. Tao toolkit. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. *arXiv preprint* arXiv:1904.01038. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Panupong Pasupat and Percy Liang. 2015a. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470–1480, Beijing, China. Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2015b. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470–1480, Beijing, China. Association for Computational Linguistics. Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online. Association for Computational Linguistics. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. 21(1). Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In *Proceedings of the 56th Annual Meeting of the Association for Computational* Linguistics (Volume 2: Short Papers), pages 784– 789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Marcin-Elvis Guillem Andrew Robert, Ross and Thomas. 2022. Papers with code. In *Meta AI Research*. Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and beyond. *Foundations and Trends in Information Retrieval*, 3:333–389. Keshav Santhanam, Omar Khattab, Christopher Potts, and Matei Zaharia. 2022a. PLAID: An efficient engine for late interaction retrieval. In *Proceedings of* the 31st ACM International Conference on Information and Knowledge Management, CIKM '22, page 1747–1756, New York, NY, USA. Association for Computing Machinery. Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022b. ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In *Proceedings of the* 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3715–3734, Seattle, United States. Association for Computational Linguistics. Hinrich Schütze, Christopher D Manning, and Prabhakar Raghavan. 2008. Introduction to information retrieval, volume 39. Cambridge University Press Cambridge. Siamak Shakeri, Noah Constant, Mihir Kale, and Linting Xue. 2021. Towards zero-shot multilingual synthetic question and answer generation for crosslingual reading comprehension. In *Proceedings of* the 14th International Conference on Natural Language Generation, pages 35–45, Aberdeen, Scotland, UK. Association for Computational Linguistics. Md Arafat Sultan, Shubham Chandel, Ramón Fernandez Astudillo, and Vittorio Castelli. 2020. On the importance of diversity in question generation for QA. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5651–5656, Online. Association for Computational Linguistics. Md Arafat Sultan, Avirup Sil, and Radu Florian. 2022. Not to Overfit or Underfit the Source Domains? An Empirical Study of Domain Generalization in Question Answering. In *EMNLP*. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing:* System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Xiang Yue, Ziyu Yao, and Huan Sun. 2022. Synthetic question value estimation for domain adaptation of question answering. In ACL. 
Victor Zhong, Caiming Xiong, and Richard Socher. 2017a. Seq2sql: Generating structured queries from natural language using reinforcement learning. *CoRR*, abs/1709.00103. Victor Zhong, Caiming Xiong, and Richard Socher. 2017b. Seq2sql: Generating structured queries from natural language using reinforcement learning. *CoRR*, abs/1709.00103.

## A Appendix

## A.1 PrimeQA Applications

Figure 4 shows a screen-shot of three **PrimeQA** applications. Tables 2 and 3 provide lists of supported datasets and some important PRIMEQA links.

| Datasets |
|---|
| OpenNQ |
| XOR-TyDi (Asai et al., 2021) |
| SQuAD (Rajpurkar et al., 2016) |
| TyDiQA (Clark et al., 2020b) |
| NQ (Kwiatkowski et al., 2019c) |
| ELI5 |
| SQA (Iyyer et al., 2017) |
| WTQ (Pasupat and Liang, 2015b) |
| DocVQA (Mathew et al., 2021) |
| WikiSQL (Zhong et al., 2017b) |

Table 2: A list of some of the supported datasets in PrimeQA

| Component | Link |
|---|---|
| Retriever | Simple Python script |
| Reader | Inference APIs |
| Unstructured QG | Inference APIs |
| Pipeline | Inference APIs |

Table 3: Links to PrimeQA
zhang-etal-2023-lingxi
Lingxi: A Diversity-aware Chinese Modern Poetry Generation System
https://aclanthology.org/2023.acl-demo.6
Chinese modern poetry generation has been a challenging task. One issue is the Chinese word segmentation (CWS) which is critical to comprehend the Chinese language but was not always considered in common tokenization methods. Another is the decoding (sampling) method which may induce repetition and boredom and severely lower the diversity of the generated poetry. To address these issues, we present Lingxi, a diversity-aware Chinese modern poetry generation system. For the CWS issue, we propose a novel framework that incorporates CWS in the tokenization process. The proposed method can achieve a high vocabulary coverage rate with a reasonable vocabulary size. For the decoding method and the diversity issue, we propose a novel sampling algorithm that flattens the high likelihood part of the predicted distribution of the language model to emphasize the comparatively low-likelihood words and increase the diversity of generated poetry. Empirical results show that even when the top 60% of cumulative probability mass of the predicted distribution is flattened, our method achieves comparable or even better performance than baseline sampling methods. Our system is available at http://lingxi.website.
## Lingxi**: A Diversity-Aware Chinese Modern Poetry Generation System** Xinran Zhang1, 2, 3, Maosong Sun1, 2, 3, 4∗, Jiafeng Liu1, 2, 3**, Xiaobing Li**1, 2, 3 1Department of AI Music and Music Information Technology, Central Conservatory of Music 2Key Laboratory of Music and Brain Science, Central Conservatory of Music, Ministry of Education, China 3Laboratory of AI Music, Central Conservatory of Music, Laboratory of Philosophy and Social Sciences, Ministry of Education, China 4Department of Computer Science and Technology, Tsinghua University zhangxr.wspn@gmail.com, sms@tsinghua.edu.cn ## Abstract Chinese modern poetry generation has been a challenging task. One issue is the Chinese word segmentation (CWS) which is critical to comprehend the Chinese language but was not always considered in common tokenization methods. Another is the decoding (sampling) method which may induce repetition and boredom and severely lower the diversity of the generated poetry. To address these issues, we present Lingxi, a diversityaware Chinese modern poetry generation system. For the CWS issue, we propose a novel framework that incorporates CWS in the tokenization process. The proposed method can achieve a high vocabulary coverage rate with a reasonable vocabulary size. For the decoding method and the diversity issue, we propose a novel sampling algorithm that flattens the high likelihood part of the predicted distribution of the language model to emphasize the comparatively low-likelihood words and increase the diversity of generated poetry. Empirical results show that even when the top 60% of cumulative probability mass of the predicted distribution is flattened, our method achieves comparable or even better performance than baseline sampling methods. Our system is available at http://lingxi.website†. ## 1 Introduction Chinese modern poetry generation has been a challenging natural language processing (NLP) task. One issue is the *Chinese word segmentation* (CWS) problem. Unlike English words that are naturally delimited by white spaces, Chinese words do not have explicit word delimiters. Since different CWS strategies may completely change the semantics of Chinese words, CWS is always crucial for humans to comprehend the Chinese language and is regarded as a critical Chinese NLP task. Despite its *Corresponding author. †Video demonstration is available at https:// youtu.be/ofNTZFCM4DQ. importance, CWS is always ignored in benchmark tokenization methods such as byte pair encoding (BPE, Gage, 1994; Sennrich et al., 2016) or unigram language model (Unigram LM, Kudo, 2018), as well as in recent researches on Chinese poetry or lyric generation (e.g., systems by Lee et al., 2019, Zhang et al., 2020, and Zhang et al., 2022). A critical issue is that the rendered vocabulary from CWS tends to exhibit an extremely long "tail", which demands additional techniques to process before being incorporated into the language model. Another challenge is the high diversity requirement for poetry generation. The task is unique compared with common neural generation tasks, which try to predict the correct answers from the training corpus and focus on the fluency of the generated texts. Concretely, a piece of easily understood and highly fluent poetry with high-likelihood words might not always be considered poetic, while semantically ambiguous poetry with low-likelihood words might be diversified and creative. We consider the diversity issue from the view of the decoding (sampling) algorithm of the language model. 
The widely acknowledged golden methods include the nucleus sampling (Holtzman et al., 2020), top-k sampling (Fan et al., 2018; Holtzman et al., 2018), and temperature sampling, which have been proved to be able to control the quality and diversity of generated texts. However, in our system, these golden methods might occasionally generate boring and repetitive samples with low diversity, severely hurting the quality of the generated poetry. Thus it inspires us to explore novel decoding algorithms to address the diversity issue for poetry generation. To address these challenges, we present Lingxi, a diversity-aware Chinese modern poetry generation system with the following features. For the CWS issue, we propose a novel framework that incorporates CWS in the tokenization process. The proposed method not only leverages the human knowledge from the CWS model but also com63 bines the advantage of frequency-based tokenization methods, which can achieve a high coverage rate of the vocabulary while maintaining a reasonable vocabulary size. For the high-diversity issue, we propose a novel decoding (sampling) method, namely the *nucleus sampling with flattened head* (NS-FH) algorithm. It flattens the high-likelihood part of the predicted distribution to suppress the high-likelihood words and emphasize the less likely words to improve the diversity of the generated poetry. Surprisingly, we find that even if the top 60% of cumulative probability mass of the vocabulary's distribution is flattened, the model achieves comparable or even better performance than baseline methods. ## 2 The Cws Framework For Chinese Modern Poetry Corpus 2.1 The "Long-Tail" Issue Of Cws ![1_Image_0.Png](1_Image_0.Png) Following the pre-training/fine-tuning paradigm, we collect about 3,500 published books of Chinese novels as the pre-training corpus, aiming at Chinese literary language modeling. Then we collect about 220,000 passages of Chinese modern poetry and lyrics as the fine-tuning corpus. To build the vocabulary, most researchers directly apply frequency-based tokenization methods such as byte pair encoding (BPE, Gage, 1994; Sennrich et al., 2016) or unigram language model (Unigram LM, Kudo, 2018) without considering the Chinese word segmentation (CWS) issue, which is a critical Chinese NLP task and can be modeled by supervised learning (Liu et al., 2014; Yan et al., 2020; Qiu et al., 2020; Duan and Zhao, 2020). Despite these advances, it is still difficult to directly deploy CWS into the tokenization process due to the "long-tail" issue. Figure 1 illustrates the cumulative coverage rate of the vocabulary (sorted by the word frequency) of our corpus produced by the CWS software THULAC (Sun et al., 2016). It generates a large vocabulary (containing about 3.6 million words) with a very long low-frequency "tail", which grows slowly to the coverage rate of the vocabulary and takes a nonnegligible portion of coverage. In our case, the top 20,000 words take about 90% coverage on the corpus, and the remaining 3.6 million minus 20,000 words take the remaining 10% coverage, which satisfies Zipf's law (Zipf, 1949). Truncating the "tail" would result in a low coverage rate and too many "unknown" tokens. ## 2.2 Proposed Cws Framework To handle the "long-tail" issue, we propose the following heuristic algorithm, which segments the "tail" into "subwords" using words from the top portion of the vocabulary. 
Step 1: Use the CWS tool THULAC to process the corpus and acquire the vocabulary, sort it by the word frequency, and choose a top portion of the vocabulary as the *basic vocabulary*.

Step 2: For the out-of-vocabulary (OOV) words, use the *basic vocabulary* to segment them into subwords. For subwords outside the basic vocabulary, add them to the basic vocabulary. If an OOV word has different segmentation strategies, decide by choosing the largest likelihood product of subwords.

Step 3: Sort the *expanded* basic vocabulary, and choose a top portion as the *final vocabulary*.

Let V denote the vocabulary of the corpus produced by the CWS model. For each word w ∈ V, its frequency and likelihood are denoted by n(w) and p(w), which satisfy p(w) = n(w) / Σ_{w′∈V} n(w′). Sort V by p(w), then choose the top portion of the vocabulary with cumulative coverage rate P1 (referred to as "top P1") to construct the basic vocabulary V_BASIC. The rest of the words in V (the "tail") are denoted by V̄_BASIC. To process V̄_BASIC, segment each word in V̄_BASIC using words in V_BASIC. During this process, a word might have different segmentation strategies. To solve the segmentation ambiguity, let S(w) = {w1, w2, ...}, wi ∈ V_BASIC, denote a segmentation of a word w ∈ V̄_BASIC into an ordered sequence of tokens wi. We choose the segmentation strategy with the largest likelihood product, i.e., argmax_{S(w)} Π_{wi∈S(w)} p(wi). If any word outside V_BASIC is found in the segmentation, add it to V_BASIC. After all words in V̄_BASIC are processed, V_BASIC will be expanded to cover all the corpus. Sort the expanded V_BASIC by its updated word likelihood and choose the top P2 of the vocabulary as the final vocabulary V_FINAL. The process is described in Algorithm 1.

Algorithm 1 The proposed CWS Framework
Input: Training Corpus
Output: V_FINAL
1: Acquire V by THULAC, sort V by p(w), choose the top P1 of V as V_BASIC, and its complement as V̄_BASIC.
2: for each w ∈ V̄_BASIC do
3:   Find all possible segmentations {S(w)}
4:   S*(w) ← argmax_{S(w)} Π_{wi∈S(w)} p(wi)
5:   for each w* ∈ S*(w) do
6:     Add w* to V_BASIC if w* ∉ V_BASIC
7: Update p(w). Sort V_BASIC by p(w), choose the top P2 of V_BASIC as V_FINAL
8: return V_FINAL

After all words in V̄_BASIC are processed, all subwords outside V_BASIC will have been added to the vocabulary. In this way, V_BASIC is expanded to cover all information in the corpus, and all OOV words can be segmented into in-vocabulary subwords. All information is kept during this process, with no "unknown" tokens. In the last step, only subwords with a very low word frequency that are rarely used are filtered out and replaced with "unknown" tokens. In this way, the final vocabulary achieves a high coverage rate with a suitable size. The entire process is shown in Figure 2. It combines the CWS model with frequency-based tokenization methods and can generate a vocabulary with a suitable size and a high coverage rate. In our case, the coverage rate of the final vocabulary is close to 1.0 (larger than 1 − 10⁻⁴, see Figure 1), with a vocabulary size of 17,589. Since we have two different training corpora, we seek to leverage the word frequency feature from both corpora. For **Step 1**, we choose the top 90% of the vocabulary from a) both corpora and b) the fine-tuning corpus only, then merge them as the basic vocabulary. This emphasizes the fine-tuning corpus to benefit its generation task. The size of the basic vocabulary merged in **Step 1** is 12,643.
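As a concrete illustration of the segmentation step (line 4 of Algorithm 1), the maximum-likelihood-product split of an OOV word can be computed with a simple dynamic program. The sketch below is illustrative only, not the implementation used in Lingxi; the single-character fallback and the bound on subword length are assumptions introduced here.

```python
def best_segmentation(word, vocab_logp):
    """Segment an OOV word into subwords, maximising the product of subword
    likelihoods (equivalently, the sum of log-likelihoods), as in line 4 of
    Algorithm 1.  vocab_logp maps basic-vocabulary words to log p(w).  A heavily
    penalised single-character fallback (an assumption of this sketch) keeps
    every word segmentable."""
    fallback = min(vocab_logp.values(), default=0.0) - 10.0
    n = len(word)
    best = [float("-inf")] * (n + 1)   # best[i]: best score for the prefix word[:i]
    back = [0] * (n + 1)               # back[i]: start index of the last subword
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - 8), i):        # assume subwords of length <= 8
            piece = word[j:i]
            if piece in vocab_logp:
                score = vocab_logp[piece]
            elif len(piece) == 1:
                score = fallback
            else:
                continue
            if best[j] + score > best[i]:
                best[i] = best[j] + score
                back[i] = j
    pieces, i = [], n
    while i > 0:                        # walk the backpointers to recover the split
        pieces.append(word[back[i]:i])
        i = back[i]
    return pieces[::-1]
```

Working in log space keeps the criterion equivalent to the likelihood product of Algorithm 1 while avoiding numerical underflow for long words.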
After **Step 2**, the vocabulary size is expanded to 23,296. In **Step 3**, we drop words with a frequency lower than a) 100 on both corpora and b) 10 on the fine-tuning corpus and acquire the final vocabulary with 17,589 words. By observation, the dropped words in **Step 3** are all extremely rare single-length Chinese words. ## 3 The Diversity-Aware Sampling We train an auto-regressive Transformer language model on the two collected corpus for generation (see the appendices for the model details). As is widely acknowledged, the decoding module plays a crucial role in neural text generation. Stochastic sampling methods such as nucleus sampling (Holtzman et al., 2020), top-k sampling (Fan et al., 2018; Holtzman et al., 2018) and temperature sampling can generate texts with higher diversity than traditional decoding methods such as beam search. Recent advances such as MIROSTAT (Basu et al., 2021) and typical sampling (Meister et al., 2023) use adaptive filtering on top of these methods. However, in our model, we observe that traditional algorithms occasionally generate boring and repetitive poetry with low diversity, thus severely hurting the quality of generated poetry. ![3_image_0.png](3_image_0.png) Recent research has revealed that the lowdiversity issue can stem from the high-likelihood part of the predicted distribution (Holtzman et al., 2020; Basu et al., 2021), which is not fully leveraged by traditional sampling methods. We consider the intuition for poetry composing that fluent poetry with too many high-likelihood words can be boring (i.e., the boredom trap, Basu et al., 2021); in contrast, semantically ambiguous poetry with surprising and low-likelihood words can be creative and poetic (e.g., poems by James Joyce or Marcel Proust). Inspired by this, we propose a novel sampling algorithm to emphasize the less likely words on the predicted distribution to increase the diversity. We leverage the notion of nucleus sampling (NS) by defining an additional filtering parameter denoted by q to identify the high-likelihood part ("head") of the vocabulary, denoted by V head. Then we propose to flatten and evenly redistribute the probability mass for V head, to emphasize the "comparatively low likelihood" words in the "head". For the low-likelihood part of the distribution ("tail"), we adopt nucleus sampling with parameter p (p ≥ q) to truncate the "tail" like the tradition. Stochastic sampling is conducted on the flattened and rescaled distribution for all sampling steps. The above method is referred to as *nucleus sampling with flattened head* (NS-FH) algorithm. The diversity gain of the algorithm is controlled by q, which determines the boundary of V head. The algorithm is illustrated in Figure 3. The proposed algorithm has a close relationship with nucleus sampling. Their truncation mechanisms on the predicted distribution are identically based on the threshold {*p, q*} of the cumulative probability density function. And our method features in using an additional threshold q to determine another smaller portion of the predicted distribution and flattening their probability density function. And since q is also a threshold for the cumulative probability density function, the flattening manipulation will not affect low-entropy distributions that only contain an "unquestionably correct" word which takes most of the probability mass, thus not hurting the fluency of the generated poetry. 
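A minimal NumPy sketch of the NS-FH filter is given below. It assumes that the head, like the nucleus, is the smallest prefix of the likelihood-sorted vocabulary whose cumulative mass reaches q; the authors' implementation may differ in such details, so treat this as an illustration rather than the system's code.

```python
import numpy as np

def ns_fh_sample(probs, p=0.95, q=0.6, rng=None):
    """Sample one token id with nucleus sampling with flattened head (NS-FH).

    probs: next-token distribution over the vocabulary (1-D array summing to 1).
    q:     cumulative-mass threshold defining the high-likelihood "head" whose
           probability mass is flattened (redistributed uniformly).
    p:     nucleus threshold (p >= q) used to truncate the low-likelihood "tail".
    """
    probs = np.asarray(probs, dtype=float)
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]          # token ids sorted by likelihood, descending
    sorted_p = probs[order]
    cum = np.cumsum(sorted_p)

    # Head: smallest prefix whose cumulative mass reaches q (nucleus-style definition).
    head_size = min(int(np.searchsorted(cum, q)) + 1, len(cum))
    # Nucleus: smallest prefix whose cumulative mass reaches p; everything after is cut.
    keep_size = min(int(np.searchsorted(cum, p)) + 1, len(cum))

    kept = sorted_p[:keep_size].copy()
    kept[:head_size] = kept[:head_size].sum() / head_size   # flatten the head
    kept /= kept.sum()                                       # renormalise after truncation

    return int(order[rng.choice(keep_size, p=kept)])
```

With a small q the head contains only a few near-equiprobable tokens and the procedure behaves almost like plain nucleus sampling with threshold p; a large q spreads the head mass over many more tokens, which is the source of the extra diversity, while a single dominant token (a low-entropy distribution) is left untouched because it already exhausts the q threshold on its own.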
We later show in the empirical results that the proposed method can closely follow the behavior of nucleus sampling when using a small value of q while exhibiting higher diversity when setting a large value of q, achieving diversity-aware sampling. ## 4 Demonstration And Evaluation 4.1 System Interface The system interface and poetry example are shown in Figure 4. We provide a plain mode and an advanced mode. For the plain mode, input the poetry prompt, choose the diversity parameter with the slider, and hit the "submit" button to get a generated poetry. By default, the system uses the proposed NS-FH algorithm as the decoding method. In the advanced mode, we implemented the three golden sampling methods (nucleus sampling, top-k sampling, and temperature sampling) as well as two novel sampling methods (MIROSTAT by Basu et al., 2021 and typical sampling by Meister et al., 2023) for users to choose and compare. ## 4.2 Impact Of Sampling Algorithms It is noteworthy that the performance of the sampling algorithm is highly dependent on the sampling hyperparameter. To illustrate and compare their impact on the generated poetry, we take advantage of the quality-diversity trade-off feature (Nadeem et al., 2020; Zhang et al., 2021; Basu et al., 2021). By tuning the hyperparameter for one sampling algorithm, the quality (fluency) metric and the diversity metric of the generated poetry form a trade-off curve in which an increase in the diversity metric will decrease the fluency metric. ![4_image_0.png](4_image_0.png) Such a feature demands a holistic picture of the trade-off curve by traversing the hyperparameter space and evaluating automatic metrics on the generated poetry to reveal the nature of the algorithm. As a result, we leverage the trade-off feature by aligning corresponding metrics on a 2D plane throughout our evaluation. Concretely, we use every poetry title from the validation set as the input prompt for generation, which results in 15,218 generations per sampling algorithm per hyperparameter. We choose the fluency metric as the perplexity (PPL) of generated poetry (lower score indicates higher fluency but more boredom). We then choose the following three diversity metrics: the Zipf coefficient (Zipf, 1949; Newman, 2005) which reflects the sloping tendency of the word frequency distribution (lower score indicates a flatter distribution and higher diversity); the entropy of n-gram distribution (Zhang et al., 2018) which reflects the diversity of n-grams (higher score indicates less repeating *n-grams* and higher diversity); the self-BLEU score (Zhu et al., 2018; Holtzman et al., 2020) which reflects the overlapping tendency among different generated poetry passages (lower score indicates less overlapping and higher diversity). We calculate the human-level metrics from the validation set as the reference point. Results for the fluency-diversity trade-off curves are shown from Figure 5 to Figure 7. We show regions around the human (reference) metric point for a clear view (see Figure 10 to Figure 14 in the appendix for full curves). Results show that the human-level metric of Chinese modern poetry ![4_image_1.png](4_image_1.png) suggests a **high diversity requirement** (low Zipf coefficient, high entropy of n-gram, and low selfBLEU score). To achieve human-level diversity, traditional methods must greatly relax their sampling parameter and almost degrade to pure sampling (sampling on the original predicted distribution with no filtering) for maximum diversity. 
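For reference, two of the three diversity metrics can be approximated as in the sketch below (the self-BLEU computation follows Zhu et al., 2018, and is omitted). The exact tokenization, logarithm base, and fitting procedure used in the paper are not specified here, so this is an illustration rather than the evaluation code.

```python
import math
from collections import Counter

def ngram_entropy(tokens, n=4):
    """Entropy (in nats) of the n-gram distribution of one generated passage.
    Higher values mean fewer repeated n-grams, i.e. higher diversity."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def zipf_coefficient(tokens):
    """Exponent of the word-frequency distribution, estimated as the (negated)
    least-squares slope of log-frequency against log-rank.  A lower value
    indicates a flatter distribution and higher diversity."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return -slope
```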
By contrast, our method controls the trade-off curve by tuning the diversity parameter q without fully relaxing the filtering. When using a small value of q = 0.20, our method closely follows the trajectory ![5_image_0.png](5_image_0.png) ![5_image_2.png](5_image_2.png) ![5_image_3.png](5_image_3.png) of nucleus sampling and converges to the reference point; when using a large value of q = 0.60, our method achieves a comparable or higher diversity metric to traditional methods without deviating from the reference point. These results indicate that our method can achieve diversity-aware sampling with good metric performance. Note a very interesting behavior of the NS-FH algorithm that by setting q = 0.60 (colored in green), i.e., the top 60% of cumulative probability mass of the predicted distribution is completely flattened and ignored, the trade-off curve does not drift away from the human-level metric point (see the *performance degradation* of sampling algorithms illustrated in Figure 3 by Nadeem et al., 2020), while even achieving higher diversity metrics and boundaries in most cases. This suggests that in our task with a high requirement for diversity, the probability of high-likelihood words on a "flat" distribution with ![5_image_1.png](5_image_1.png) high entropy can be *ignored*. Completely flattening the distribution and ignoring the predicted probability of high-likelihood words can counter-intuitively achieve comparable or better results with higher diversity. By inference, such a method might be suitable for other artistic generation tasks like music or drawings that require high diversity. Also, note the stochastic nature of the sampling methods, which yields a stochastic trajectory of tokens with variational quality. Under the same sampling parameter and conditions, the generated poetry might occasionally be repetitive and wordy. So we focus on the distribution of the entropy of n-gram metric to investigate the repetition tendency. We calculate p(Ent-4 < η), which reflects the chance that the generated poetry has the entropy of n-grams lower than the threshold η. Empirically, we set η = 4 to get a meaningful observation. Results are shown in Figure 8. They show that naive sampling parameters for traditional methods easily result in more repetition, e.g., nucleus sampling with p = 0.80 has p(Ent-4 < 4) ≈ 0.24. Similar to previous results, traditional methods have to greatly relax the filtering parameter or degrade to pure sampling to approach human-level repetition tendency. By contrast, our method with q = 0.60 achieves a lower curve (less repetition chance) than most methods and also stay close to the reference point. This indicates that our method with a high diversity parameter can grant less repetition as well as maintain a close relationship of repetition tendency to human behavior without fully relaxing the filtering. ## 4.3 Rhyming Feature ![6_Image_0.Png](6_Image_0.Png) We also consider an additional metric to reflect the rhyming feature of the generated poetry. Similar to the entropy metric, we calculate the entropy of rhyming word distribution by E{− log p*rhyme*(x)}, where p*rhyme*(x) is the rhyming word frequency distribution (higher score indicates higher diversity but less rhymed). Similarly, we plot the fluency-rhyming trade-off curves in Figure 9. The results show that our method achieves on-par rhyming performance with all baseline methods. 
It indicates that the flattening manipulation of our method does not hurt the rhyming feature of the generated poetry, despite that our method flattens the top portion of the cumulative probability mass of the predicted distribution. Since rhyming is an important feature for composing Chinese modern poetry, we provide an additional function to control the rhyme of the generated poetry. In the advanced mode of LingXi, we adopt a re-ranking and replacing mechanism for rhyming. During the generation process, once a [NEWLINE] symbol was generated, we try to replace its previous token with a rhyming one. We create hypotheses by combining each token from the vocabulary that fits the requirement of the chosen rhyme with a [NEWLINE] token, recalculate the average perplexity of the combined tokens (hypothesis), and choose the hypothesis with the lowest average perplexity to replace the generated one. One issue is that there might be cases in which all hypotheses have high perplexity (low likelihood). So we set an additional threshold that the hypothesis must have an average perplexity lower than the threshold to trigger the replacement. If none of the hypotheses meet the requirement of the threshold, the generation remains unrevised. The above method can achieve a trade-off between rhyming and fluency by tuning the perplexity threshold. Empirically, we set the threshold to 50 to achieve a good performance. Although the method cannot guarantee that all generated lines of poetry meet the rhyming requirement, it features in its flexibility to plug into all decoding methods implemented in our system without modifying the language model or the algorithms. ## 5 Conclusion We present Lingxi, a diversity-aware Chinese modern poetry generation system. To address the CWS issue, we propose a novel framework that combines CWS with the frequency-based method, which can create a vocabulary with a suitable size and high coverage. To increase the diversity of generated poetry, we propose nucleus sampling with flattened head (NS-FH) algorithm which achieves controllable diversity with on-par or better performance compared to traditional sampling methods. The proposed sampling algorithm provides a new approach to increase the diversity of neural text generation via the decoding module, which might be beneficial for artistic generation cases that have high requirements for the diversity or novelty. ## Acknowledgments This work has been funded by: (i) Special Program of National Natural Science Foundation of China (Grant No. T2341003), (ii) Advanced Discipline Construction Project of Beijing Universities, (iii) Major Program of National Social Science Fund of China (Grant No. 21ZD19), (iv) Nation Culture and Tourism Technological Innovation Engineering Project (Research and Application of 3D Music), (v) National Philosophy and Social Science Project (Grant No. 22VJXG012). ## References Sourya Basu, Govardana Sachitanandam Ramachandran, Nitish Shirish Keskar, and Lav R. Varshney. 2021. MIROSTAT: A neural text decoding algorithm that directly controls perplexity. In *International* Conference on Learning Representations. Sufeng Duan and Hai Zhao. 2020. Attention is all you need for Chinese word segmentation. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)*, pages 3862–3872, Online. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Philip Gage. 1994. A new algorithm for data compression. *C Users J.*, 12(2):23–38. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *International Conference on Learning Representations*. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638–1649, Melbourne, Australia. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia. Association for Computational Linguistics. Hsin-Pei Lee, Jhih-Sheng Fang, and Wei-Yun Ma. 2019. iComposer: An automatic songwriting system for Chinese popular music. In *Proceedings of* the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 84–88, Minneapolis, Minnesota. Association for Computational Linguistics. Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for CRF-based Chinese word segmentation using free annotations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 864–874, Doha, Qatar. Association for Computational Linguistics. Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2023. Locally Typical Sampling. *Transactions of the Association for Computational Linguistics*, 11:102–121. Moin Nadeem, Tianxing He, Kyunghyun Cho, and James Glass. 2020. A systematic characterization of sampling algorithms for open-ended language generation. In *Proceedings of the 1st Conference of the* Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 334–346, Suzhou, China. Association for Computational Linguistics. Mark EJ Newman. 2005. Power laws, pareto distributions and zipf's law. *Contemporary physics*, 46(5):323–351. Xipeng Qiu, Hengzhi Pei, Hang Yan, and Xuanjing Huang. 2020. A concise model for multi-criteria Chinese word segmentation with transformer encoder. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pages 2887–2897, Online. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Maosong Sun, Xinxiong Chen, Kaixu Zhang, Zhipeng Guo, and Zhiyuan Liu. 2016. Thulac: An efficient lexical analyzer for chinese. http://thulac. thunlp.org/. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtow- ´ icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. *ArXiv*, abs/1910.03771. Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. A graph-based model for joint Chinese word segmentation and dependency parsing. Transactions of the Association for Computational Linguistics, 8:78–92. Hugh Zhang, Daniel Duckworth, Daphne Ippolito, and Arvind Neelakantan. 2021. Trading off diversity and quality in natural language generation. In *Proceedings of the Workshop on Human Evaluation of NLP* Systems (HumEval), pages 25–33, Online. Association for Computational Linguistics. Le Zhang, Rongsheng Zhang, Xiaoxi Mao, and Yongzhu Chang. 2022. QiuNiu: A Chinese lyrics generation system with passage-level input. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics: System Demonstrations, pages 76–82, Dublin, Ireland. Association for Computational Linguistics. Rongsheng Zhang, Xiaoxi Mao, Le Li, Lin Jiang, Lin Chen, Zhiwei Hu, Yadong Xi, Changjie Fan, and Minlie Huang. 2020. Youling: an AI-assisted lyrics creation system. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 85–91, Online. Association for Computational Linguistics. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In *The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval*, SIGIR '18, pages 1097–1100, New York, NY, USA. Association for Computing Machinery. George K. Zipf. 1949. *Human Behaviour and the Principle of Least Effort*. Addison-Wesley. ## A Model Configuration, Training Details And Supplementary Results Detailed configurations and parameters of our model are listed in Table 1. For special tokens, the [TITLE] token is added directly after the title of each poetry passage in the finetuning corpus to capture the title feature. The [START-OF-PASSAGE] token is added before the starting token of the first poetry line. We create a replica for each poetry passage excluding the title and [TITLE] token, and mix them with the original corpus as data augmentation. The [NEWLINE] token is added at the end of each poetry line in replace of the newline character, and the [END-OF-PASSAGE] token is added at the end of each poetry passage. English words and letters are assigned [UNK-EW] tokens. Other unknown words and sub-words are assigned [UNK] tokens. For poetry passages longer than the maximum context length, we create training samples using a sliding window with stride being half of the maximum context length. We split train/validation/test sets using the common ratio of 85%/7%/8% (token ratio for the pre-training corpus, passage ratio for the fine-tuning corpus). The model is an auto-regressive Transformer decoder, using cross entropy as the training loss. 
The training process is developed using the Huggingface library by Wolf et al. (2019). It achieves monotonic convergence of perplexity (PPL) on the validation set of the pre-training corpus at the end of the pre-training steps. We choose the best fine-tuning epoch of the model with the lowest PPL on the validation set of the fine-tuning corpus as the final model. For the generated poetry, their full metric trade-off curves for all sampling hyperparameters and sampling algorithms are shown from Figure 10 to Figure 14.

| Parameters | Value |
|---|---|
| number of Transformer layers | 24 |
| number of Transformer attention heads | 16 |
| embedding size | 1,024 |
| vocabulary size | 17,589 |
| maximum context length | 128 |
| number of network parameters | 330 million |
| pre-training epochs | 20 |
| fine-tuning epochs | 10 |
| batch size per GPU | 32 |
| number of training GPUs | 8 NVIDIA GeForce RTX 2080 Ti |
| pre-training learning rate | 2 × 10⁻⁴ |
| fine-tuning learning rate | {1 × 10⁻⁵, 2 × 10⁻⁵} |
| learning rate decay | linear decay |
| warm-up steps | 1% of total steps |
| optimizer | Adam optimizer (Kingma and Ba, 2014) |
| weight decay | 0.01 |
| PPL on validation set after pre-training | 17.58 |
| best fine-tuning epoch | epoch 4, learning rate 2 × 10⁻⁵ |
| best PPL on fine-tuning validation set | 16.75 |
| full sampling hyperparameter space for the trade-off curves from Figure 10 to Figure 14 | p ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 1}; q ∈ {0.2, 0.4, 0.6}, p ≥ q; k ∈ {10, 20, 50, 100, 200, 500, 1000, 2000}; t ∈ {0.6, 0.7, 0.8, 0.9, 1.1}; m ∈ {2, 3, 4, 5, 6}; τ ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99} |

Table 1: Model configuration, training details and sampling parameters

Figures 10 to 14: full metric trade-off curves for all sampling algorithms and hyperparameters.
du-etal-2023-autodive
Autodive: An Integrated Onsite Scientific Literature Annotation Tool
https://aclanthology.org/2023.acl-demo.7
Scientific literature is always available in Adobe's Portable Document Format (PDF), which is friendly for scientists to read. Compared with raw text, annotating directly on PDF documents can greatly improve the labeling efficiency of scientists whose annotation costs are very high. In this paper, we present Autodive, an integrated onsite scientific literature annotation tool for natural scientists and Natural Language Processing (NLP) researchers. This tool provides six core functions of annotation that support the whole lifecycle of corpus generation including i) annotation project management, ii) resource management, iii) ontology management, iv) manual annotation, v) onsite auto annotation, and vi) annotation task statistic. Two experiments are carried out to verify efficiency of the presented tool. A live demo of Autodive is available at http://autodive.sciwiki.cn. The source code is available at https://github.com/Autodive.
# Autodive: An Integrated Onsite Scientific Literature Annotation Tool Yi Du**1,2,3**∗ , Ludi Wang1, Mengyi Huang1,2, Dongze Song1, Wenjuan Cui1,2, Yuanchun Zhou**1,2,3**∗ 1 Computer Network Information Center, Chinese Academy of Sciences 2 University of Chinese Academy of Sciences 3 University of Science and Technology of China {duyi, wld, myhuang, songdongze, wenjuancui, zyc}@*cnic.cn* ## Abstract Scientific literature is always available in Adobe's Portable Document Format (PDF), which is friendly for scientists to read. Compared with raw text, annotating directly on PDF documents can greatly improve the labeling efficiency of scientists whose annotation costs are very high. In this paper, we present Autodive, an integrated onsite scientific literature annotation tool for natural scientists and Natural Language Processing (NLP) researchers. This tool provides six core functions of annotation that support the whole lifecycle of corpus generation including i)annotation project management, ii)resource management, iii)ontology management, iv)manual annotation, v)onsite auto annotation, and vi)annotation task statistic. Two experiments are carried out to verify efficiency of the presented tool. A live demo of Autodive is available at http://autodive.sciwiki.cn, and a video demo http://autodive.sciwiki.cn/introVideo/introducev1.0.mp4. The source code is available at https://github.com/Autodive. ## 1 Introduction Influential applications such as AlphaFold2 (Jumper et al., 2021), and mobile robotic chemist(Burger et al., 2020) rely on high-quality databases and domain knowledge, some of which are constructed from scientific literature. The traditional method of building such databases still needs the manual annotation of numerous professionals. With the development of scientific research and the enhancement of interdisciplinary integration, the number of scientific articles has increased explosively, which also requires the annotators who are assigned to build the domain-specific database to have a suitable background and are familiar with annotation of literature data. Some research attempts to assist the manual annotation process with the ∗Correspond Author intelligent natural language model, such as Named Entity Recognition (NER) and Relation Identification (RI). The intelligent method can automatically extract knowledge from articles, and form highquality databases after expert proofreading (Cruse et al., 2022; Yan et al., 2022). However, the performance of the general model in specific fields is not satisfactory, and the construction of a specific intelligent model needs high-quality databases. Thus, An easy-to-use annotation tool with a graphical user interface that allows the labeling of text efficiently and consistently is crucial and necessary. Annotation tools play a crucial role during the database-making process in the field of biology(López-Fernández et al., 2013), material(Corvi et al., 2021), and chemistry(Swain and Cole, 2016). Although the majority of released annotation tools mainly focus on the annotation of multimedia such as image and video, there are still text annotation tools such as AnnIE(Friedrich et al., 2021), TSANNO(Stodden and Kallmeyer, 2022) and Doccano(Nakayama et al., 2018). However, most of the tools need to convert the input file from PDF format to plain text, which may cause additional labor costs for resource preparation. Moreover, the annotation of specific data required professionals with knowledge of different fields. 
Compared with raw text, annotating directly on PDF documents conforms to the reading habits of the professionals with the original images, tables and layout information and can greatly improve the labeling efficiency of them whose annotation costs are very high. We refer to this need as **Onsite Annotation**, which includes the abilities to display and direct annotate on the original PDF documents. To meet the needs of professionals on direct annotation, the new tool should support direct scientific literature annotation in PDF format. That means this tool should display the original literature in PDF format directly, so users can read the complete PDF content in the annotation inter76 face, where they annotate the entity and relation of scientific literature through scheduled annotation operation. In this way, this tool retains the layout information of the original PDF literature to fit the reading habits of annotators. Different from traditional text annotation projects, scientific annotation projects require annotation tools to process long literature, annotate complex entities and export appropriate annotation results to a formed database. Besides onsite annotation, integration is also a valuable factor during the design of the tool. First, the existing domain-specific ontology should be easily integrated which can reduce the time costs of ontology design. Second, the tool should be well integrated with existing and new Named-entity Recognition (NER) models flexibly. Third, integration with existing file systems or literature collections, such as Pubmend Central and self-organized document folders, should also be valued. By integrating with an existing data source, a researcher can deploy this tool locally without leakage of unpublished or copyrighted documents. Hence, we propose an open-sourced annotation tool Autodive, with its contributions summarized below: (1) Onsite Annotation. Autodive supports onsite annotation of PDF documents for professionals. They can annotate directly on PDF documents, and get instant visual feedback. (2) Integration. Autodive can integrated external modules that may assist the whole annotation process, including corpus management, ontology construction, manual and intelligent annotation. (3) Domain Verified. The effectiveness has been verified by two tasks, including catalytic material annotation and scientific dataset annotation. ## 2 Architecture The overall architecture of Autodive is shown in Fig.1 with three layers which are Data Source, Server Layer, and Frontend. The core component of Autodive is the **Server Layer**. Data Source Adapter can integrate specific **Data Sources** such as Pubmed Central or File System with Autodive. Three core server-side engines play important roles. The first is the *Regular Expression Parsing Engine*. It can help Autodive extract entities by predefined regular expressions. The second is the Ontology Management Engine. It can load and parse ontology files with OWL format and save the designed ![1_image_0.png](1_image_0.png) ontology to OWL or self-defined JSON files. The third engine is the *NER Model Management Engine*. It is a model base that is extendable and friendly to newly trained NER models. The annotation engine is the bridge between the server-side and the frontend. This engine provides locating and feedback of entities. **Frontend** is implemented mainly using a progressive JavaScript framework Vue.js. 
As shown in Fig.2, the complete literature in PDF format is shown on the right part of the annotation page of Autodive, where annotators can read the literature and annotate the entity. Users click the mouse to select words on the displayed PDF document, and click the right mouse to select the type of entities or relationships. When connecting two annotated entities, users can annotate relation of them. Shortcut-key annotation is allowed for the increase of efficiency of the annotation project. All identified or annotated entities and relations are listed in the entity-label-list and relation-label-list with predefined ontology. ## 3 Modules Autodive integrates management, annotation, and optimization through six modules and is specially built for scientific literature annotation projects. Users use the **Project Management** module to develop and manage personalized annotation projects. The **Resource Management** module allows users to manage their own literature resources pool. The Ontology Management module pre-defines the knowledge ontology, which is defined and standardized by the annotation project administrator. The basic components for annotating are the **Manual Annotation** module and the **Auto Annotation** module. Trained automatic annotation models are saved in the background in order to improve the productivity of manual annotation work. Finally, ![2_image_0.png](2_image_0.png) we created a **Statistic View** module for Autodive to represent the overall progress of the annotation project. ## 3.1 Project Management The annotation lifecycle starts with the creation of one annotation project. Autodive designs project management module that includes project creation and annotator administration. As shown in Fig.3(a), users submit necessary information such as project name while creating an annotation project. Domain information is also encouraged to fill out so that the auto-annotation model can accelerate the annotation. There are three different kinds of roles. The creator of the project, also plays as the administrator, has the ability to invite annotators. If the invitation email is approved, invitees will be added straight to the relevant annotation project as administrators or annotators. The administrator also has the rights to assign annotation tasks to project members. ## 3.2 Resource Management For a literature annotation project, the initial and critical step is to control which documents to be annotated and where to find them. The quantity and diversity of literature affect the quality of annotation data, and therefore the accuracy of intelligent models. Autodive allows users to upload and manage their own literature resources pool, and form a list of documents relating different annotation projects by associating the literature resources with them. Besides uploading PDF documents directly, this tool also provides standard API (Application Programming Interface) that can import scientists' own literature resources. After initialing the list of annotation resources, the administrator can assign the annotation resources to other annotators, and the annotators can complete the follow-up task in the form of crowdsourcing, which is as shown in Fig.3(b). ## 3.3 Ontology Management Ontology means what kinds of entities and relations to be annotated in the annotation project, which is defined and controlled by the project administer to fit task requirements. Well defined ontology can enhance the efficiency of annotation. The process of ontology design is shown in Fig.3(c). 
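To illustrate what a self-defined ontology file might contain, the sketch below shows a hypothetical JSON-style ontology for a catalysis project, expressed as a Python dictionary. The field names and values are invented for this example and do not reflect Autodive's actual OWL or JSON schema.

```python
import json

# A hypothetical entity/relation ontology for a catalysis annotation project.
# Field names are illustrative only, not Autodive's actual schema.
ontology = {
    "entities": [
        {"name": "Material", "dictionary": ["Cu", "Cu2O", "CuO"]},
        {"name": "FaradaicEfficiency", "regex": r"\d+(\.\d+)?\s*%"},
        {"name": "Product"},
    ],
    "relations": [
        {"name": "produces", "head": "Material", "tail": "Product"},
        {"name": "hasEfficiency", "head": "Product", "tail": "FaradaicEfficiency"},
    ],
}

print(json.dumps(ontology, indent=2))
```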
The first basis component of ontology management is load and design of one ontology. Considering the complexity of the link between entity-labels, the same relation-label category may distribute to numerous entity-labels pairs, and multiple relation-labels may distribute to the same entity-labels pair. The ![3_image_0.png](3_image_0.png) ![3_image_1.png](3_image_1.png) ![3_image_2.png](3_image_2.png) label-map will display everything mentioned above. Autodive also allow users download the designed ontology for subsequent research or application. During the design process of ontology, Autodive allow users upload the corresponding dictionary and fill up the required regular expression. It is advantageous to simplify some annotation projects when the entity-label has a clear thesaurus to-beannotated terms or words that have a unified structure. Two display modes of ontology is shown in Fig.4. ## 3.4 Manual Annotation The manual annotation of an entity in Autodive can select raw text instead of drawing a bounding box in PDF document, this mode is not like PAWLS(Neumann et al., 2021). After selecting, the user decides what kind of entity the text is. This way of entity annotation is more precise, especially in annotating text that has line wrap in document. The manual annotation also provide a controlled annotation of relationship. Annotator select two annotated entity, then Autodive will recommend possible relationship that pre-designed in ontology. This recommendation step can also increase the efficiency of annotation. ## 3.5 Auto Annotation Autodive provides pre-annotation by three ways, they are regular expression parsing, dictionary mapping, and external NER models. It is an evident advantage for dictionary and regular expression annotation since they basically annotate "the proper terms". However, when there are unavoidable outof-vocabulary (OOV) words in the to-be-annotated literature, intelligent model annotation using external NER models is a more effective choice. It is evident that in various scientific research domains, the model of private domain learned the typical information during training, allowing it to accurately recognize the label. Autodive uses the Domain Extractor Loader and the Model Li- ![4_image_0.png](4_image_0.png) brary to choose specific domain prediction models. When starting an annotation project, users should select the corresponding field of the project. In advance, we saved annotation models in several scientific domains in model library. Furthermore, when the project reaches unexcited fields, users are prompted to select a model from the "generic domain" and fill up the domain details. One highlight function of the auto annotation is **Instant Auto Annotation**. As shown in Fig.5, the annotation results can be presented right on the complete text with a single click. To support this function, Autodive constructed a robot annotation layer, and the annotation on the source document is displayed in the same way as the manual annotation results. The text parsing tool will first import the text into the auto annotation model, and the model will return the annotation results. The label will then be filtered according to the ontology established by the project, and it will be matched to the precise spot on the document. The auto annotation model adds a robot annotation layer to the literature display layer. The deep learning model will undoubtedly take some time. 
Autodive chooses to extract literature in advance during background free time, saves the automatic annotation results in the database, and then performs specific matching annotation based on the model domain and ontology chosen by each announcer, reducing the waiting time of specific users. ## 3.6 Statistic View To help the administrator and annotator know the status and progress of one annotation task, Autodive provides **Personal View** and **Project View** ![4_image_1.png](4_image_1.png) in this module. In Personal View, the number and percentage of current annotation task are provided so that annotator can evaluate his/her task. Besides, the number of annotated and auto-recognized entity and relationship is also shown in the view. In Project View, functions are provided to help administrator understand current progress of all annotators and their assigned tasks, such as the distribution of annotated entity and relationship, number and percentage of each annotator and his/her task, and so on. In Project View, user can export the corpus for further use. ## 4 Evaluation & Case Study 4.1 Annotation Tools Evaluation We compared Autodive with other open sourced annotation tools, including AnnIE(Friedrich et al., 2021), Doccano(Nakayama et al., 2018), WebAnno(Yimam et al., 2013), INCEpTION(Klie et al., 2018), PDFAnno(Shindo et al., 2018) and PAWLS(Neumann et al., 2021), for annotation function comparison. In order to match the need of scientific literature annotation, we design the evaluation metrics as bellows: The first is **[A].Availability**, which includes *[A1].Activity* and *[A2].Online Service*. The second is **[B].Onsite Support**, which includes *[B1].Onsite PDF Display* and [B2].Onsite PDF Annotation. The third is **[C].Function Integration**, which includes *[C1].Integration with* File System, *[C2].Integration with Ontology*, and [C3].Integration with Pre-annotation Model.The last is other functions such as **[D].Team Annotation** and **[E].Statistics**. A deeper description of these metrics is given in Definition of Evaluation Metrics. The comparison results are shown in Tab.1. As shown in Tab.1, Autodive is superior to most active tools in the function of onsite PDF annotation. PDFAnno, PAWLS, and INCEpTION have functions for PDF annotation. However, PDFAnno has not been maintained for over 3 years. Compared with PAWLS, Autodive provides more integration functions with file systems and NER models which also depends on onsite annotation mode. Autodive provides a different integration mode with pre-annotation models and a more intuitive statistics view when compared with the latest version of INCEpTION. | Availability | Onsite | Integration | Team | Statistics | | | | | | |----------------|----------|---------------|--------|--------------|------|------|------|-----|-----| | Tools | [A1] | [A2] | [B1] | [B2] | [C1] | [C2] | [C3] | [D] | [E] | | AnnIE | - | - | - | - | - | ✓ | - | - | - | | Doccano | ✓ | ✓ | - | - | ✓ | ✓ | url | ✓ | ✓ | | WebAnno | - | - | - | - | ✓ | ✓ | - | - | - | | PDFAnno | - | - | ✓ | block | - | - | - | - | - | | PAWLS | - | ✓ | ✓ | block | - | ✓ | ✓ | - | - | | INCEpTION(*) | ✓ | ✓ | ✓ | text | ✓ | ✓ | url | ✓ | - | | Autodive(ours) | ✓(*) | ✓ | ✓ | text | ✓ | ✓ | ✓ | ✓ | ✓ | ## 4.2 Ner Of Cu-Based Electrocatalysts A high-quality corpus of catalysts may assist domain scientists in the discovery of catalysts based on a descriptor-optimization (Tran and Ulissi, 2018; Zhong et al., 2020). 
In this case, a corpus of Cubased electrocatalysts for CO2 reduction has been generated using the presented tool. At the beginning of the process, one senior scientist creates an annotation project and served as the administrator of the project by **Project Management Module**. After creating the project, the scientist finds the literature that needs annotation and assigns the literature to potential annotators using **Resource Management Module**. At the same time, he designs the ontology of Cu-based electrocatalysts with this assist of **Ontology Management Module** . There are 5 postgraduates with experience in experimental catalysis who used the tool to construct the corpus using **Manual Annotation Module** and **Auto Annotation Module**. During the annotation process, the administrator and annotators can view the rate of progress at any time by **Statistic View**. After annotation, the senior scientist exports the corpus and review all the annotated entity and relationship. In this real case, the corpus contains a collection of 6,086 records extracted from 835 publications with nine types of knowledge, including material, regulation method, product, faradaic efficiency, cell setup, electrolyte, synthesis method, current density, and voltage. This annotated corpus can be accessed publicly(Wang, 2023)(Wang et al., 2023). ## 4.3 **Auto Annotation Of Ai Dataset And Model** In this case, we try to demonstrate the ability of auto annotation. A basic auto annotation project requires related model prepared and continuous annotation data. In order to analyse the effect of Auto Annotation Module quantitatively, we designed an experiment with a poorly correlated public dataset and increased proportion of annotated data to evaluate the correctness of auto annotation, as in a real annotation project. Firstly we trained an annotation model using the SciERC(Luan et al., 2018) dataset that mainly focused on the field of artificial intelligence. After deploying the model to the Autodive backend, we chose a number of abstracts from publications in the field of artificial intelligence on paperswithcode.com to simulate the automatic annotation effect. We designed ontology with "Model" and "Dataset" entities. All data contains 7,420 "Datasets" entities and 42,696 "Model" entities. Training and updating the NER model during the annotation process helps fit the annotation project and improve the correctness. Tab.2 displays the results of this simulation. The zero-shot shows the correctness without any "annotation" data. With the updates of the auto annotation model through increasing sample size, the increased correctness of auto annotation module shows effectiveness of Auto Annotation Module, which helps annotators train their auto annotation models. As we can see, well integrated auto annotation model might meet the needs of scientific literature annotation, and it may perform better in specific projects. ## 5 Related Work Mariana Neves(Neves and Ševa, 2021) gave a comprehensive review of existing document annotation tools. It splits the criteria of document annotation ![6_image_0.png](6_image_0.png) Table 2: Experiment Result tools into four categories, which are publication, technical, data, and functional. In this review, it rates WebAnno(Yimam et al., 2013) as the best tool, which also extends to a new human-in-the-loop tool INCEpTION(Klie et al., 2018). 
The review also mentioned that the top two missing functions of current tools are the support of document-level annotation, and integration with existing corpora and pre-annotation, especially model-based pre-annotation.

Various new annotation tools have been reported in recent years. Many tools focus on specific tasks or functions, such as TeamTat(Islamaj et al., 2020), QuickGraph(Bikaun et al., 2022), DoTAT(Lin et al., 2022) and FAST(Kawamoto et al., 2021). Both task-specific and general document annotation tools such as WebAnno(Yimam et al., 2013), Doccano(Nakayama et al., 2018), AnnIE(Friedrich et al., 2021) and TS-ANNO(Stodden and Kallmeyer, 2022) need a pre-processing step that converts documents to plain text, which is time-consuming. According to the domain experts we consulted, the conversion process also makes reading confusing, especially for typeset documents such as scientific literature. As for annotation tools for PDF documents, PDFAnno(Shindo et al., 2018) and PAWLS(Neumann et al., 2021) are the two tools most relevant to our present tool. PDFAnno converts PDF documents into plain text without retaining PDF structure information, and its annotation mode is similar to ours. However, it has not been maintained for over 3 years. PAWLS is a recent tool that supports PDF annotation with labels and structure. It has the advantage of annotating meta or structural information by drawing a bounding box rather than selecting raw text. Autodive is inspired by PAWLS in the requirement of PDF annotation and surpasses it in integration functions and annotation mode.

## 6 Discussion

We created Autodive, a collaborative annotation tool that offers a comprehensive solution for the whole lifecycle of annotation, especially for scientific literature. In addition, we provide automated annotation for annotators, which can be integrated with a variety of NER models. We found that Autodive can not only be utilized for scientific literature, but also for any editable PDF file. Autodive is released as an open-source project under the Apache 2.0 license.

Autodive also has some limitations. First, the accuracy of NER models: for text annotation projects, a more accurate NER model is needed to ensure the accuracy of auto annotation. Second, conflicts in teamwork: annotation disputes caused by the diverse knowledge perspectives of annotators need to be reduced to obtain more accurate annotation data. Third, PDF files that cannot be processed: some obsolete or illegible PDF documents are difficult to process and cannot be used in the current Autodive version.

In future work, we intend to expand the scope of literature annotation to include graphics and tables in addition to text. Additionally, Autodive will support more file types, including the ability to download in JSON/CoNLL format and to upload plain text and pictures.

## Acknowledgements

We thank all anonymous reviewers. This work is supported by the National Key Research and Development Plan of China under Grant No. 2022YFF0712200 and 2022YFF0711900, the Natural Science Foundation of China under Grant No. 61836013, Beijing Natural Science Foundation under Grant No. 4212030, and the Youth Innovation Promotion Association CAS.

## References

Tyler Bikaun, Michael Stewart, and Wei Liu. 2022. Quickgraph: a rapid annotation tool for knowledge graph extraction from technical text. In *Proceedings* of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278.
Benjamin Burger, Phillip M Maffettone, Vladimir V Gusev, Catherine M Aitchison, Yang Bai, Xiaoyan Wang, Xiaobo Li, Ben M Alston, Buyi Li, Rob Clowes, et al. 2020. A mobile robotic chemist. *Nature*, 583(7815):237–241. Javier Corvi, Carla Fuenteslópez, José Fernández, Josep Gelpi, Maria-Pau Ginebra, Salvador CapellaGuitierrez, and Osnat Hakimi. 2021. The biomaterials annotator: a system for ontology-based concept annotation of biomaterials text. In Proceedings of the Second Workshop on Scholarly Document Processing, pages 36–48. Kevin Cruse, Amalie Trewartha, Sanghoon Lee, Zheren Wang, Haoyan Huo, Tanjin He, Olga Kononova, Anubhav Jain, and Gerbrand Ceder. 2022. Textmined dataset of gold nanoparticle synthesis procedures, morphologies, and size entities. *Scientific* Data, 9(1):1–12. Niklas Friedrich, Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, and Goran Glavaš. 2021. Annie: an annotation platform for constructing complete open information extraction benchmark. arXiv preprint arXiv:2109.07464. Rezarta Islamaj, Dongseop Kwon, Sun Kim, and Zhiyong Lu. 2020. Teamtat: a collaborative text annotation tool. *Nucleic acids research*, 48(W1):W5–W11. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. 2021. Highly accurate protein structure prediction with alphafold. *Nature*, 596(7873):583–589. Shunyo Kawamoto, Yu Sawai, Kohei Wakimoto, and Peinan Zhang. 2021. Fast: fast annotation tool for smart devices. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing: System Demonstrations, pages 372–381. Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The inception platform: Machine-assisted and knowledge-oriented interactive annotation. In proceedings of the 27th international conference on computational linguistics: system demonstrations, pages 5–9. Yupian Lin, Tong Ruan, Ming Liang, Tingting Cai, Wen Du, and Yi Wang. 2022. Dotat: a domain-oriented text annotation tool. In *Proceedings of the 60th Annual Meeting of the Association for Computational* Linguistics: System Demonstrations, pages 1–8. Hugo López-Fernández, Miguel Reboiro-Jato, Daniel Glez-Peña, Fernando Aparicio, Diego Gachet, Manuel Buenaga, and Florentino Fdez-Riverola. 2013. Bioannote: a software platform for annotating biomedical documents with application in medical learning environments. *Computer methods and programs in biomedicine*, 111(1):139–147. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language* Processing, pages 3219–3232. Hiroki Nakayama, Takahiro Kubo, Junya Kamura, Yasufumi Taniguchi, and Xu Liang. 2018. doccano: text annotation tool for human. Software available from https://github.com/doccano/doccano. Mark Neumann, Zejiang Shen, and Sam Skjonsberg. 2021. Pawls: Pdf annotation with labels and structure. In *Proceedings of the 59th Annual Meeting of* the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, page 258–264. Mariana Neves and Jurica Ševa. 2021. An extensive review of tools for manual annotation of documents. Briefings in bioinformatics, 22(1):146–163. 
Hiroyuki Shindo, Yohei Munesada, and Yuji Matsumoto. 2018. Pdfanno: a web-based linguistic annotation tool for pdf documents. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).

Regina Stodden and Laura Kallmeyer. 2022. TS-ANNO: an annotation tool to build, annotate and evaluate text simplification corpora. In *Proceedings of the 60th Annual Meeting of the Association* for Computational Linguistics: System Demonstrations, pages 145–155, Dublin, Ireland. Association for Computational Linguistics.

Matthew C Swain and Jacqueline M Cole. 2016. Chemdataextractor: a toolkit for automated extraction of chemical information from the scientific literature. *Journal of chemical information and modeling*, 56(10):1894–1904.

Kevin Tran and Zachary W Ulissi. 2018. Active learning across intermetallics to guide discovery of electrocatalysts for co2 reduction and h2 evolution. Nature Catalysis, 1(9):696–703.

Ludi Wang. 2023. A corpus of CO2 Electrocatalytic Reduction Process extracted from the scientific literature.

Ludi Wang, Yang Gao, Xueqing Chen, Wenjuan Cui, Yuanchun Zhou, Xinying Luo, Shuaishuai Xu, Yi Du, and Bin Wang. 2023. A corpus of co2 electrocatalytic reduction process extracted from the scientific literature. *Scientific Data*, 10(1):175.

Rongen Yan, Xue Jiang, Weiren Wang, Depeng Dang, and Yanjing Su. 2022. Materials information extraction via automatically generated corpus. Scientific Data, 9(1):1–12.

Seid Muhie Yimam, Iryna Gurevych, Richard Eckart de Castilho, and Chris Biemann. 2013. Webanno: a flexible, web-based and visually supported system for distributed annotations. In *Proceedings of the 51st* Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 1–6.

Miao Zhong, Kevin Tran, Yimeng Min, Chuanhao Wang, Ziyun Wang, Cao-Thang Dinh, Phil De Luna, Zongqian Yu, Armin Sedighian Rasouli, Peter Brodersen, et al. 2020. Accelerated discovery of co2 electrocatalysts using active machine learning. Nature, 581(7807):178–183.

## A Definition Of Evaluation Metrics

The definitions of the evaluation metrics used to compare the annotation tools are given below.

**[A] Availability.** **[A1] Activeness**: The annotation tool is still active and updated steadily. **[A2] Code Availability**: The annotation tool is open sourced.

**[B] Onsite Support.** **[B1] Onsite PDF Display**: The annotation tool provides a complete display with the structural information of the literature to suit the reading habits of scientific annotators. **[B2] Onsite PDF Annotation**: The annotation tool provides direct annotation on PDF documents, including **Text PDF Annotation** (annotate text directly) and **Block PDF Annotation** (annotate by drawing frames).

**[C] Function Integration.** **[C1] Integration With File System (Resource Management)**: The annotation tool has a file system which allows resource management such as uploading files. **[C2] Integration With Ontology**: The annotation tool enables the definition of an ontology such as a knowledge graph. **[C3] Integration With Pre-Annotation Model**: The annotation tool offers pre-annotation such as dictionary matching.

**[D] Team Annotation**: The annotation tool enables team annotation and the management of annotation results for all annotators.

**[E] Statistics**: The annotation tool provides a statistics view of the project.
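To make the [C3] criterion concrete, the sketch below shows the kind of dictionary (ontology-term) matching that a pre-annotation model can provide. This is a minimal illustration only; the ontology terms, entity types, and function name are invented for the example and do not reflect Autodive's actual implementation.

```python
import re

# Toy ontology: surface forms mapped to entity types (hypothetical values for illustration).
ONTOLOGY = {
    "Cu": "material",
    "copper": "material",
    "faradaic efficiency": "faradaic_efficiency",
    "CO2": "product",
}

def dictionary_pre_annotate(text: str):
    """Return (start, end, surface, type) spans for every ontology term found in the text."""
    spans = []
    for term, etype in ONTOLOGY.items():
        # Word-boundary, case-insensitive matching keeps the sketch simple;
        # a production tool would also handle tokenization and overlapping spans.
        for m in re.finditer(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            spans.append((m.start(), m.end(), m.group(0), etype))
    return sorted(spans)

print(dictionary_pre_annotate(
    "The Cu catalyst shows a high faradaic efficiency for CO2 reduction."))
```

A model-based pre-annotator would replace the dictionary lookup with the predictions of an NER model while producing the same kind of span output.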
## B Page Design

## B.1 Project Management

## B.2 Resource Management

## B.3 Ontology Management

## B.4 Annotation

## B.5 Statistic View

## C Demo Access

A live demo of Autodive is available at http://autodive.sciwiki.cn. The live demo provides two languages, English and simplified Chinese, depending on the language setting of the web browser. The language can also be switched via the "head" button in the top right corner. We also provide a video demo at http://autodive.sciwiki.cn/introVideo/introducev1.0.mp4. The source code is available at https://github.com/Autodive. We provide a test account in the live demo with username *test* and password *autodive*. In this demo, we linked a resource library with dozens of open-access (OA) scientific publications. To use Autodive in a production environment, users can also sign up with their own email address, upload their own literature that needs annotation, create a personalized annotation project, assign annotators, and complete their annotation tasks. Users can also deploy Autodive on their own server with personal literature collections.
ushio-etal-2023-practical
A Practical Toolkit for Multilingual Question and Answer Generation
https://aclanthology.org/2023.acl-demo.8
Generating questions along with associated answers from a text has applications in several domains, such as creating reading comprehension tests for students, or improving document search by providing auxiliary questions and answers based on the query. Training models for question and answer generation (QAG) is not straightforward due to the expected structured output (i.e. a list of question and answer pairs), as it requires more than generating a single sentence. This results in a small number of publicly accessible QAG models. In this paper, we introduce AutoQG, an online service for multilingual QAG along with lmqg, an all-in-one python package for model fine-tuning, generation, and evaluation. We also release QAG models in eight languages fine-tuned on a few variants of pre-trained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg. With these resources, practitioners of any level can benefit from a toolkit that includes a web interface for end users, and easy-to-use code for developers who require custom models or fine-grained controls for generation.
# A Practical Toolkit For Multilingual Question And Answer Generation

Asahi Ushio and **Fernando Alva-Manchego** and **Jose Camacho-Collados**

Cardiff NLP, School of Computer Science and Informatics, Cardiff University, UK
{UshioA,AlvaManchegoF,CamachoColladosJ}@cardiff.ac.uk

## Abstract

Generating questions along with associated answers from a text has applications in several domains, such as creating reading comprehension tests for students, or improving document search by providing auxiliary questions and answers based on the query. Training models for question and answer generation (QAG) is not straightforward due to the expected structured output (i.e. a list of question and answer pairs), as it requires more than generating a single sentence. This results in a small number of publicly accessible QAG models. In this paper, we introduce AutoQG, an online service for multilingual QAG, along with lmqg, an all-in-one Python package for model fine-tuning, generation, and evaluation. We also release QAG models in eight languages fine-tuned on a few variants of pre-trained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg. With these resources, practitioners of any level can benefit from a toolkit that includes a web interface for end users, and easy-to-use code for developers who require custom models or fine-grained controls for generation.

## 1 Introduction

Question and answer generation (QAG) is a text generation task seeking to output a list of question-answer pairs based on a given paragraph or sentence (i.e. the context). It has been used in many NLP applications, including unsupervised question answering modeling (Lewis et al., 2019; Zhang and Bansal, 2019; Puri et al., 2020), fact-checking (Ousidhoum et al., 2022), semantic role labeling (Pyatkin et al., 2021), and as an educational tool (Heilman and Smith, 2010; Lindberg et al., 2013). The most analysed setting in the literature, however, has been question generation (QG) with predefined answers, as this simplifies the task and makes the evaluation more straightforward.

Despite its versatility, QAG remains a challenging task due to the difficulty of generating compositional outputs containing a list of question and answer pairs as shown in Figure 1, with recent works mainly relying on extended pipelines that include several ad-hoc models (Lewis et al., 2021; Bartolo et al., 2021). These works integrate QAG into their in-house software, preventing models to be publicly released, and their complex pipelines make them hard to reproduce and use by practitioners.

Figure 1: An example of question and answer generation given a paragraph as context.

In this paper, we introduce an open set of software tools and resources to assist on the development and employment of QAG models for different types of users. We publicly release the following resources:1

- lmqg,2 a Python package for QAG model fine-tuning and inference on encoder-decoder language models (LMs), as well as evaluation scripts, and a deployment API hosting QAG models for developers;
- 16 models for English, and three diverse models for each of the seven languages integrated into our library, all fine-tuned on QG-Bench (Ushio et al., 2022) and available on the HuggingFace hub (Wolf et al., 2020);3
- *AutoQG* (https://autoqg.net), a website where developers and end users can interact with our multilingual QAG models.
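For orientation before the detailed walkthrough in § 3, the minimal sketch below shows how these resources are typically combined: install the lmqg package and load one of the fine-tuned QAG checkpoints from the HuggingFace Hub. The checkpoint names used here are illustrative assumptions; the released models are listed on the official lmqg HuggingFace Hub page (see § 2.2).

```python
# Install the package first, e.g. with `pip install lmqg`.
from lmqg import TransformersQG

# Load a question generation (QG) and an answer extraction (AE) checkpoint
# (names are illustrative; any released QAG model can be substituted).
model = TransformersQG(model="lmqg/t5-base-squad-qg", model_ae="lmqg/t5-base-squad-ae")

context = (
    "Pierre Curie shared the 1903 Nobel Prize in Physics with Marie Curie "
    "and Henri Becquerel for their research on radiation."
)

# Returns a list of (question, answer) pairs generated from the paragraph.
print(model.generate_qa(context))
```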
## 2 Resources: Models And Datasets

Our QAG toolkit makes use of pre-existing models and datasets, fully compatible with the HuggingFace hub. This makes our library easily extendable in the future as newer datasets and better models emerge. In this section, we describe the datasets (§ 2.1) and models (§ 2.2) currently available through lmqg and AutoQG.

## 2.1 Multilingual Datasets

Our toolkit integrates all QG datasets available in QG-Bench (Ushio et al., 2022). QG-Bench is a multilingual QG benchmark consisting of a suite of unified QG datasets in different languages. In particular, we integrate the following datasets: SQuAD (English), SQuADShifts (Miller et al., 2020) (English), SubjQA (Bjerva et al., 2020) (English), JAQuAD (So et al., 2022) (Japanese), GerQuAD (Möller et al., 2021) (German), SberQuAd (Efimov et al., 2020) (Russian), KorQuAD (Lim et al., 2019) (Korean), FQuAD (d'Hoffschmidt et al., 2020) (French), Spanish SQuAD (Casimiro Pio et al., 2019) (Spanish), and Italian SQuAD (Croce et al., 2018) (Italian). QG-Bench is available through our official lmqg HuggingFace project page and GitHub.4

## 2.2 Models

Aiming to make QAG models publicly accessible in several languages, we used lmqg to fine-tune LMs using QG-Bench (§ 2.1). First, we defined a pipeline QAG model architecture consisting of two independent models: one for answer extraction (AE) and one for question generation (QG). During training, the AE model learns to find an answer in each sentence of a given paragraph, while the QG model learns to generate a question given an answer from a paragraph. To generate question-answer pairs at generation time, the AE model first extracts answers from all the sentences in a given paragraph, and then these are used by the QG model to generate a question for each answer. While not directly evaluated in this paper, we also integrated other types of QAG methods such as multitask and end2end QAG (Ushio et al., 2023), all available via the lmqg library (§ 3) as well as AutoQG (§ 5).

As pre-trained LMs, we integrated T5 (Raffel et al., 2020), Flan-T5 (Chung et al., 2022), and BART (Lewis et al., 2020) for English; and mT5 (Xue et al., 2021) and mBART (Liu et al., 2020) for non-English QAG models. The pre-trained weights were taken from checkpoints available in the HuggingFace Hub as below:

- t5-{small,base,large}
- google/flan-t5-{small,base,large}
- facebook/bart-{base,large}
- google/mt5-{small,base}
- facebook/mbart-large-cc25

All the fine-tuned QAG models are publicly available in our official HuggingFace Hub. While we initially integrated these models, users can easily fine-tune others using lmqg, as we show in § 3.

## 3 lmqg: An All-In-One QAG Toolkit

In this section, we introduce lmqg (Language Model for Question Generation), a Python library for fine-tuning LMs on QAG (§ 3.1), generating question-answer pairs (§ 3.2), and evaluating QAG models (§ 3.3). Additionally, with lmqg, we build a REST API to host QAG models to generate questions and answers interactively (§ 5). lmqg is interoperable with the HuggingFace ecosystem, as it can directly make use of the datasets and models already shared on the HuggingFace Hub.

## 3.1 QAG Model Fine-Tuning

Fine-tuning is performed via GridSearcher, a class to run encoder-decoder LM fine-tuning with hyper-parameter optimization (see Appendix A for more details). For example, the following code shows how we can fine-tune T5 (Raffel et al., 2020) on SQuAD (Rajpurkar et al., 2016), with the QAG model explained in § 2.2.
Since we decomposed QAG into AE and QG, two models need to be fine-tuned independently.

```python
from lmqg import GridSearcher

# instantiate AE trainer
trainer_ae = GridSearcher(
    dataset_path="lmqg/qg_squad",
    input_types="paragraph_sentence",
    output_types="answer",
    model="t5-large")
# train AE model
trainer_ae.run()

# instantiate QG trainer
trainer_qg = GridSearcher(
    dataset_path="lmqg/qg_squad",
    input_types="paragraph_answer",
    output_types="question",
    model="t5-large")
# train QG model
trainer_qg.run()
```

The corresponding dataset, lmqg/qg_squad,5 has as columns: paragraph_answer (answer-highlighted paragraph), paragraph_sentence (sentence-highlighted paragraph), question (target question), and answer (target answer). The input and the output to the QG model are paragraph_answer and question, while those to the AE model are paragraph_sentence and answer. The inputs and the outputs can be specified by passing the name of each column in the dataset to the arguments input_types and output_types when instantiating GridSearcher.

5https://huggingface.co/datasets/lmqg/qg_squad

## 3.2 QAG Model Generation

In order to generate question-answer pairs from a fine-tuned QAG model, lmqg provides the TransformersQG class. It takes as input a path to a local model checkpoint or a model name on the HuggingFace Hub in order to generate predictions in a single line of code. The following code snippet shows how to generate a list of question and answer pairs with the fine-tuned QAG model presented in § 2.2. TransformersQG decides which model to use for each of AE and QG based on the arguments model_ae and model.

```python
from lmqg import TransformersQG

# instantiate the QAG pipeline: the QG model is passed via `model`
# and the AE model via `model_ae` (checkpoint names shown for illustration)
model = TransformersQG(
    model="lmqg/t5-large-squad-qg",
    model_ae="lmqg/t5-large-squad-ae")

# input paragraph
x = """William Turner was an English painter who specialised in watercolour landscapes.
One of his best known pictures is a view of the city of Oxford from Hinksey Hill."""

# generation
model.generate_qa(x)
# output:
[("Who was an English painter specialised in watercolour landscapes?", "William Turner"),
 ("Where is William Turner's view of Oxford?", "Hinksey Hill.")]
```

## 3.3 QAG Model Evaluation

Similar to other text-to-text generation tasks, we implement an evaluation mechanism that compares the set of generated question-answer pairs $\tilde{\mathcal{Q}}_p = \{(\tilde{q}^1, \tilde{a}^1), (\tilde{q}^2, \tilde{a}^2), \ldots\}$ to a reference set of gold question-answer pairs $\mathcal{Q}_p = \{(q^1, a^1), (q^2, a^2), \ldots\}$ given an input paragraph p. Let us define a function to evaluate a single question-answer pair against its reference pair as

$$d_{q,a,\tilde{q},\tilde{a}} = s\big(t(q,a),\, t(\tilde{q},\tilde{a})\big) \tag{1}$$
$$t(q,a) = \text{``question: } \{q\}\text{, answer: } \{a\}\text{''} \tag{2}$$

where s is a reference-based metric, and we compute the F1 score as the final metric as below:

$$F_1 = 2\,\frac{R \cdot P}{R + P} \tag{3}$$
$$R = \operatorname{mean}\Big(\Big[\max_{(q,a)\in\mathcal{Q}_{p}}\big(d_{q,a,\tilde{q},\tilde{a}}\big)\Big]_{(\tilde{q},\tilde{a})\in\tilde{\mathcal{Q}}_{p}}\Big) \tag{4}$$
$$P = \operatorname{mean}\Big(\Big[\max_{(\tilde{q},\tilde{a})\in\tilde{\mathcal{Q}}_{p}}\big(d_{q,a,\tilde{q},\tilde{a}}\big)\Big]_{(q,a)\in\mathcal{Q}_{p}}\Big) \tag{5}$$

Conceptually, the recall (4) and precision (5) computations attempt to "align" each generated question-answer pair to its "most relevant" reference pair.
As with traditional precision and recall metrics, precision is aimed at evaluating whether the predicted question-answer pairs are *correct* (or in this case, aligned with the reference question-answer pairs), and recall tests whether there are enough high-quality question-answer pairs. Thus, we refer to the score in (3) as the **QAAligned F1** score. The quality of the alignment directly depends on the underlying metric s. Furthermore, the complexity of QAAligned is no more than the complexity of the underlying metric, and it is invariant to the order of the generated pairs because of the alignment when computing recall and precision.

Out-of-the-box, lmqg implements two variants based on the choice of base_metric s (used for evaluation in § 4): QAAligned BS using BERTScore (Zhang et al., 2019) and QAAligned MS using MoverScore (Zhao et al., 2019). We selected these two metrics as they correlate well with human judgements in QG (Ushio et al., 2022). Nevertheless, the choice of base_metric is flexible and users can employ other natural language generation (NLG) evaluation metrics such as BLEU4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), or ROUGE-L (Lin, 2004). With lmqg, the QAAligned score can be computed with the QAAlignedF1Score class as shown in the code snippet below:

```python
from lmqg import QAAlignedF1Score

# gold reference and generation
ref = ["question: What makes X?, answer: Y",
       "question: Who made X?, answer: Y"]
pred = ["question: What makes X?, answer: Y",
        "question: Who build X?, answer: Y",
        "question: When X occurs?, answer: Y"]

# compute QAAligned BS
scorer = QAAlignedF1Score(base_metric="bertscore")
scorer.get_score(pred, ref)

# compute QAAligned MS
scorer = QAAlignedF1Score(base_metric="moverscore")
scorer.get_score(pred, ref)
```

## 4 Evaluation

We rely on the QAG models and datasets included in the library (see § 2). The individual QG components of each model (i.e. the generation of a question given an answer in a paragraph) were extensively evaluated in Ushio et al. (2022). For this evaluation, therefore, we focus on the quality of the predicted questions and answers given a paragraph (i.e. the specific answer is not pre-defined). For each model, we fine-tune, make predictions and compute their QAAligned scores via lmqg.

## 4.1 Results

**Monolingual evaluation (English).** Table 1 presents the test results on SQuAD for seven English models based on BART, T5 and Flan-T5. The QAG model based on BARTLARGE proves to be the best aligned with the gold reference questions and answers among most of the metrics.
As with other QG experiments and NLP in general, the larger models prove more reliable.

Table 1: Test results (QAAligned BS and QAAligned MS) on SQuAD for the English QAG models.

| Model | QAAligned BS | QAAligned MS |
|--------------|--------------------|--------------------|
| BARTBASE | 92.8 / 93.0 / 92.8 | 64.2 / 64.1 / 64.5 |
| BARTLARGE | 93.2 / 93.4 / 93.1 | 64.8 / 64.6 / 65.0 |
| T5SMALL | 92.3 / 92.5 / 92.1 | 63.8 / 63.8 / 63.9 |
| T5BASE | 92.8 / 92.9 / 92.6 | 64.4 / 64.4 / 64.5 |
| T5LARGE | 93.0 / 93.1 / 92.8 | 64.7 / 64.7 / 64.9 |
| Flan-T5SMALL | 92.3 / 92.1 / 92.5 | 63.8 / 63.8 / 63.8 |
| Flan-T5BASE | 92.6 / 92.5 / 92.8 | 64.3 / 64.4 / 64.3 |
| Flan-T5LARGE | 92.7 / 92.6 / 92.9 | 64.6 / 64.7 / 64.5 |

**Multilingual evaluation.** Table 2 shows the test results of three multilingual models (mBART, mT5SMALL and mT5BASE) in seven languages other than English, using their corresponding language-specific SQuAD-like datasets in QG-Bench for fine-tuning and evaluation.6 In this evaluation, no single LM produces the best results across the board, yet QAG models based on mT5SMALL and mT5BASE are generally better than those based on mBART.

Table 2: Test results (QAAligned BS and QAAligned MS) of the multilingual QAG models in seven languages.

| Model | Language | QAAligned BS | QAAligned MS |
|----------|----------|--------------------|--------------------|
| mT5SMALL | German | 81.2 / 80.0 / 82.5 | 54.3 / 54.0 / 54.6 |
| mT5SMALL | Spanish | 79.9 / 77.5 / 82.6 | 54.8 / 53.3 / 56.5 |
| mT5SMALL | French | 79.7 / 77.6 / 82.1 | 53.9 / 52.7 / 55.3 |
| mT5SMALL | Italian | 81.6 / 81.0 / 82.3 | 55.9 / 55.6 / 56.1 |
| mT5SMALL | Japanese | 79.8 / 76.8 / 83.1 | 55.9 / 53.8 / 58.2 |
| mT5SMALL | Korean | 80.5 / 77.6 / 83.8 | 83.0 / 79.4 / 87.0 |
| mT5SMALL | Russian | 77.0 / 73.4 / 81.1 | 55.5 / 53.2 / 58.3 |
| mT5BASE | German | 76.9 / 76.3 / 77.6 | 53.0 / 52.9 / 53.1 |
| mT5BASE | Spanish | 80.8 / 78.5 / 83.3 | 55.3 / 53.7 / 57.0 |
| mT5BASE | French | 68.6 / 67.6 / 69.7 | 47.9 / 47.4 / 48.4 |
| mT5BASE | Italian | 81.7 / 81.3 / 82.2 | 55.8 / 55.7 / 56.0 |
| mT5BASE | Japanese | 80.3 / 77.1 / 83.9 | 56.4 / 54.0 / 59.1 |
| mT5BASE | Korean | 77.3 / 76.4 / 78.3 | 77.5 / 76.3 / 79.0 |
| mT5BASE | Russian | 77.0 / 73.4 / 81.2 | 55.6 / 53.3 / 58.4 |
| mBART | German | 0 / 0 / 0 | 0 / 0 / 0 |
| mBART | Spanish | 79.3 / 76.8 / 82.0 | 54.7 / 53.2 / 56.4 |
| mBART | French | 75.6 / 74.0 / 77.2 | 51.8 / 51.0 / 52.5 |
| mBART | Italian | 40.1 / 40.4 / 39.9 | 27.8 / 28.1 / 27.5 |
| mBART | Japanese | 76.7 / 74.8 / 78.9 | 53.6 / 52.3 / 55.1 |
| mBART | Korean | 80.6 / 77.7 / 84.0 | 82.7 / 79.0 / 87.0 |
| mBART | Russian | 79.1 / 75.9 / 82.9 | 56.3 / 54.0 / 58.9 |

6The result of mBART in German is zero.
Upon further inspection, we found that the fine-tuned answer extraction module did not learn properly, probably due to the limited size of the German dataset. T5 models, however, proved more reliable in this case.

Table 3: Average number of generated question and answer pairs per paragraph on the test set of SQuAD by different QAG models.

## 4.2 Number Of Generated Questions And Answers

Table 3 and Table 4 show the averaged number of generated question-answer pairs and compare it to the number in the gold dataset. For English, there is a small difference across all QAG models, with all generating fewer pairs than the gold dataset, but with a limited margin. For other languages, however, there are clear differences across QAG models, with the numbers of question-answer pairs generated by the QAG models always being larger than those in the gold dataset. When comparing the number of pairs generated by the QAG models with their QAAligned scores, in languages such as German, Spanish, and Korean, QAG models that generated a larger number of question-answer pairs achieved higher scores, not only recall-wise but also generally for F1.

Table 4: Average number of generated question-answer pairs per paragraph for the multilingual QAG models.

| Language | Gold | mT5SMALL | mT5BASE | mBART |
|----------|------|----------|---------|-------|
| German | 4.6 | 10.1 | 8.4 | 0.0 |
| Spanish | 1.3 | 4.6 | 4.8 | 4.7 |
| French | 1.3 | 4.9 | 3.6 | 5.4 |
| Italian | 3.8 | 4.7 | 4.6 | 2.5 |
| Japanese | 1.3 | 6.6 | 6.8 | 3.6 |
| Korean | 1.3 | 6.7 | 6.3 | 6.7 |
| Russian | 1.3 | 4.8 | 4.9 | 4.7 |

## 5 AutoQG

Finally, we present AutoQG (https://autoqg.net), an online QAG demo where users can generate question-answer pairs for texts in eight languages (English, German, Spanish, French, Italian, Japanese, Korean, Russian) by simply providing a context document. We deploy the QAG models described in § 2. In addition to the features described above, the online demo shows perplexity computed via lmppl,7 a Python library to compute perplexity given any LM architecture. This feature helps us provide a ranked list of generations to the user. Although we can compute perplexity for non-English generations based on the QAG models in each language, it entails large memory requirements on the hosting server. As such, we compute a lexical overlap between the question and the document as a computationally light alternative to the perplexity, which is defined as:

$$1-\frac{|q\cap p|}{|q|}\tag{6}$$

where |·| is the number of characters in a string, and q ∩ p is the longest sub-string of the question q matched to the paragraph p. Figure 2 and Figure 3 show examples of the interface with English and Japanese QAG, where there is a tab to select QAG models, language, and parameters at generation including the beam size and the value for nucleus sampling (Holtzman et al., 2020). Optionally, users can specify an answer and generate a single question on it with the QG model, as shown in Figure 4. A short introduction video to AutoQG is available at https://youtu.be/T6G-D9JtYyc.

7https://pypi.org/project/lmppl

## 6 Conclusion

In this paper, we introduced lmqg, a Python package to fine-tune, evaluate and deploy QAG models with a few lines of code. The library implements the QAG task as an efficient integration of answer
Finally, we showcase AutoQG, an online demo where end users can benefit from QAG models without any programming knowledge. AutoQG enables the selection of features going from different models and languages to controlling the diversity of the generation. ## Limitations The focus on this paper was introducing software to make QAG models available to as many practitioners as possible, but there are a couple of limitations in the models and evaluation metrics we proposed. First, our released QAG models assume a paragraph up to around 500 tokens as an input, and longer documents can not be directly fed into the models. Additionally, the released QAG models were fine-tuned on questions that require one-hop reasoning only, so they are unable to generate multihop reasoning. Second, the QAAligned score is a framework to extend any NLG metric to match the prediction to the reference when they are different in size, where we employed two well-established metrics (BERTScore and MoverScore) as underlying metrics. Since those underlying metrics are already proven to be effective (Zhang et al., 2019; Zhao et al., 2019; Ushio et al., 2022), we have not conducted any human annotation for QAG specifically. Nonetheless, an extended human evaluation could help provide more insights on other limitations of the model not detected by the automatic evaluation. ## Ethics Statement While the QAG models are fine-tuned on pretrained language models, which are known to contain some toxic contents (Schick et al., 2021), an internal check does not reveal any toxic generation. However, there is a potential risk that the QAG model could generate toxic text due to the underlying LMs. ## References Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela. 2021. Improving question answering model robustness with synthetic adversarial data generation. In *Proceedings* of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8830–8848, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Johannes Bjerva, Nikita Bhutani, Behzad Golshan, Wang-Chiew Tan, and Isabelle Augenstein. 2020. SubjQA: A Dataset for Subjectivity and Review Comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5480–5494, Online. Association for Computational Linguistics. Carrino Casimiro Pio, Costa-jussa Marta R., and Fonollosa Jose A. R. 2019. Automatic Spanish Translation of the SQuAD Dataset for Multilingual Question Answering. *arXiv e-prints*, page arXiv:1912.05200v1. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Danilo Croce, Alexandra Zelenanska, and Roberto Basili. 2018. Neural learning for question answering in italian. In *AI*IA 2018 - Advances in Artificial* Intelligence, pages 389–402, Cham. Springer International Publishing. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In *Proceedings of the Ninth* Workshop on Statistical Machine Translation, pages 376–380, Baltimore, Maryland, USA. Association for Computational Linguistics. Martin d'Hoffschmidt, Wacim Belblidia, Quentin Heinrich, Tom Brendlé, and Maxime Vidal. 2020. FQuAD: French question answering dataset. 
In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1193–1208, Online. Association for Computational Linguistics. Pavel Efimov, Andrey Chertok, Leonid Boytsov, and Pavel Braslavski. 2020. Sberquad–russian reading comprehension dataset: Description and analysis. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 3–15. Springer. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In *Human Language Technologies: The 2010 Annual Conference of the North American Chapter of* the Association for Computational Linguistics, pages 609–617, Los Angeles, California. Association for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pages 7871–7880, Online. Association for Computational Linguistics. Patrick Lewis, Ludovic Denoyer, and Sebastian Riedel. 2019. Unsupervised question answering by cloze translation. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pages 4896–4910, Florence, Italy. Association for Computational Linguistics. Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 million probably-asked questions and what you can do with them. *Transactions of the Association for Computational Linguistics*, 9:1098–1115. Seungyoung Lim, Myungji Kim, and Jooyoul Lee. 2019. Korquad1. 0: Korean qa dataset for machine reading comprehension. *arXiv preprint arXiv:1909.07005*. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In *Text Summarization Branches Out*, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 105–114, Sofia, Bulgaria. Association for Computational Linguistics. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742. John Miller, Karl Krauth, Benjamin Recht, and Ludwig Schmidt. 2020. The effect of natural distribution shift on question answering models. In International Conference on Machine Learning, pages 6905–6916. PMLR. Timo Möller, Julian Risch, and Malte Pietsch. 2021. Germanquad and germandpr: Improving non-english question answering and passage retrieval. Nedjma Ousidhoum, Zhangdie Yuan, and Andreas Vlachos. 2022. Varifocal question generation for factchecking. In *Proceedings of the 2022 Conference on* Empirical Methods in Natural Language Processing, pages 2532–2544, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Raul Puri, Ryan Spring, Mohammad Shoeybi, Mostofa Patwary, and Bryan Catanzaro. 2020. Training question answering models from synthetic data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5811–5826, Online. Association for Computational Linguistics. Valentina Pyatkin, Paul Roit, Julian Michael, Yoav Goldberg, Reut Tsarfaty, and Ido Dagan. 2021. Asking it all: Generating contextualized questions for any semantic role. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing*, pages 1429–1441, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *Journal of Machine Learning Research*, 21:1–67. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In *Proceedings of* the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in NLP. *Transactions of the* Association for Computational Linguistics, 9:1408–1424. ByungHoon So, Kyuhong Byun, Kyungwon Kang, and Seongjin Cho. 2022. Jaquad: Japanese question answering dataset for machine reading comprehension. arXiv preprint arXiv:2202.01764. Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2022. Generative language models for paragraph-level question generation. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, Abu Dhabi, U.A.E. Association for Computational Linguistics. Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2023. An empirical comparison of lm-based question and answer generation methods. In Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics, Toronto, Canada. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical* Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In *Proceedings* of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Shiyue Zhang and Mohit Bansal. 2019.
Addressing semantic drift in question generation for semisupervised question answering. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495–2509, Hong Kong, China. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. In *International Conference on Learning Representations*. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics.

## A Grid Search

To fine-tune LMs on QAG, one can use the GridSearcher class of lmqg, which performs LM fine-tuning with a two-stage optimization of the hyper-parameters, i.e. the parameters used at fine-tuning such as the learning rate or batch size, as described in Figure 5. As an example, let us assume that we want to find an optimal combination of the learning rate and random seed for QG from the candidate lists [1e-4, 1e-5] and [0, 1], respectively. We also assume a training and a validation dataset for the task and an evaluation score that reflects the performance of a model (e.g. BLEU4 (Papineni et al., 2002)), and we define the search-space as the set of all combinations of those candidates, i.e. {(1e-4, 0), (1e-4, 1), (1e-5, 0), (1e-5, 1)}. The goal of GridSearcher is to find the best combination in the search-space for training a model on the training dataset for the target task, with respect to the evaluation score computed on the validation dataset. A brute-force approach that trains a model for every combination in the search-space can be highly inefficient, so GridSearcher employs a two-stage search method that avoids training all the combinations while still getting as close as possible to the optimal combination. To be precise, given an epoch size L (epoch), the first stage fine-tunes all the combinations in the search-space and pauses fine-tuning at epoch M (epoch_partial). The top-K combinations (n_max_config) are then selected based on the evaluation score computed over the validation dataset, and they are resumed and fine-tuned until the last epoch. Once the K chosen models are fine-tuned in the second stage, the best model is selected based on the evaluation score, and it keeps being fine-tuned until the evaluation score decreases.

The training and validation datasets can be any datasets shared on the HuggingFace Hub, and one can specify the input and the output to the model from the columns of the dataset with the arguments input_types and output_types when instantiating GridSearcher. For example, the following code shows how we can fine-tune T5 (Raffel et al., 2020) on question generation, a sub-task of QAG, with SQuAD (Rajpurkar et al., 2016), where the dataset lmqg/qg_squad is shared at https://huggingface.co/datasets/lmqg/qg_squad on the HuggingFace Hub, which has the columns paragraph_answer, containing an answer-highlighted paragraph, and question, the question corresponding to the answer highlighted in the paragraph_answer.
We choose them as the input and the output to the model, respectively, by passing the name of each column to the arguments input_types and output_types.
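A minimal sketch of this call is shown below; it assumes the hyper-parameter candidates and epoch settings given in the comments, which are illustrative values rather than the ones used for the released models.

```python
from lmqg import GridSearcher

# Fine-tune T5 on question generation over the lmqg/qg_squad dataset.
trainer = GridSearcher(
    checkpoint_dir="qg_squad_ckpt",   # where intermediate checkpoints are stored
    dataset_path="lmqg/qg_squad",
    input_types="paragraph_answer",   # answer-highlighted paragraph as input
    output_types="question",          # target question as output
    model="t5-large",
    epoch=10,                         # L: total number of epochs
    epoch_partial=2,                  # M: epochs trained before pruning the search-space
    n_max_config=3,                   # K: configurations kept for the second stage
    lr=[1e-4, 1e-5],                  # candidate learning rates
)
trainer.run()
```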